
Rogers: A life informed by the columnists – The Aspen Times

Jim Murray was my favorite. I came of newspaper-reading age in suburban Los Angeles. The sports section was my entry. Murray's column came first as I grew up, and again in Santa Barbara, where I was hungry for my favorite section of The LA Times at the fire station, also prized by most crewmates.

I learned a few years later how he did it, his true heft on the staff, from an old alcoholic reporter at the mountain town paper I edited in northern California. My colleague had reported for The Times back in the day, sharing a Pulitzer for coverage of the 1965 Watts riots.

Murray was such a star he pioneered remote work, from the beach. My source, whom I should be ashamed to say I plied with gin and tonics at the Capitol Club across the street from the paper, told me Murray would speak into a recorder and at the end of the day drop it off for an assistant to type up, pitch-perfect every time.

I came to love Mike Royko, the irascible columnist in Chicago who often spoke through the imaginary working-class slob Slats Grobnik, employing fictional conversations to lay down uncomfortable truths. Royko was tough, totally unfair, hilarious.

But mostly I was into the essays in Outside before they got caught up with making money. Tim Cahill and David Quammen. I wanted to write like them. And like the writers in Newsweek and Time, whose news articles were columns at root, reporting with a voice.

I appreciated the caustic wit of Ann Coulter, whew, and Maureen Dowd's one-liners. But I gravitated more to David Brooks and Thomas Friedman, more inclined to teach than excoriate.

For a time the king was Mitch Albom, while I was city editor in Holland, Mich., and then news editor in Benton Harbor-St. Joseph, an hour down the shore.

Albom, sports columnist of the year from Detroit every year before making his millions in sappy novels, had endless imitators throughout the state and into Indiana, though that ended around the Illinois state line. Something to do with FIPs, the "I" always meaning that other state between those two. I learned this from my wife's family in South Bend, Indiana, and editing The Daily Gazette in Sterling-Rock Falls, Illinois, prairie country where directions to a town 14 miles away once were "ride due south for five miles and turn west at the tree." The only tree.

I always liked Charles Krauthammer of The Washington Post, too, though I didn't realize it until listening, during commutes between Grass Valley, California, and Lake Tahoe, to the audio version of commentaries his son had finished gathering posthumously into a book. Different politics, and we didn't share world views or a taste for city life. But I dug his pieces. They made me think, mainly about why exactly we disagreed when he seemed to make so much sense. Here's a worthy exercise I wish a lot more of us indulged in.

I was sad to read that Paul Menter had had enough, and that last week's would be his last column. I don't care that he wrote in the "wrong" paper. I read them both, happy for the chance.

As with Krauthammer, we didn't have to agree. We probably don't on anything. Same with the Red Ant in my paper. He grumps and she stings while treading the same government's-always-wrong path. We all have our fixations.

Some moan as if ready to leave, though you know they won't, moaning being easier, more satisfying. Some laser in on topics: energy, local government, service, nature, life, wine, relationships (and who doesn't crave more on that last?). Myself, though, I tend to go most for the one-off pieces with heart or heat in them.

Of the regulars, I resonated with Lo Semple's recent take on Aspen's soul, something he expressed as up to the individual to generate rather than dwelling on done-to helplessness.

Crotchety is fine. Seen-it-all Tony Vagneur somehow accomplishes this with wry good humor. Sure, it's all gone to hell 'n' back, but there are horses to catch, grandsons to watch, and life ain't stopping, neither, just because age finally snuck up on Tony's generation, too. Go figure.

My favorite of all isn't even a person I can name. Some crap like Paul E. Ana. Get it? A new colleague explained the tradition with a shrug when I began. I probably rolled my eyes.

But then. Hey, this sentence is pretty good. So's this one, and this one. A laugh, an anecdote, wit, charm, a point. I left completely converted, a new fan. It's really, really good, week in and week out. Occasionally sharp, never sour.

One of the most searing single pieces for me ever was Andrew Parrott's raw personal look at the burdens of people who must somehow negotiate living with the next psychiatric break always looming, as if life weren't hard enough already.

Speaking of searing and sharp, I don't think I've encountered a more barbed wit than Meredith Carroll's. Wicked, funnier than Royko, a handle for phrasing like Murray, at least as bright as Krauthammer. She was a regular for The Times and for a while had a column in The Denver Post, before their long retreat began.

I read her in Vail, and but for being such a cheap bastard might have run her there. Maybe a shocker, though: Vail has about as much enthusiasm for the musings of an Aspen local as Aspen would have for another of my favorites, Richard Carnes. One of his recents, sigh: "At least we're not Aspen."

Our petty little rivalries rob us of so much.

Aspen Times Editor Don Rogers can be reached at drogers@aspentimes.com


What are the threats and promises of AI? – Texas Public Radio

When machines can think, will humanity become obsolete? Will the sentient constructs born from the reckless invention of shortsighted people lead to our own destruction or enslavement? Is it too late to turn away from the artificial intelligence advancements that are already in place today? How can the risks of learning machines be eliminated before it's too late?

These questions may seem like the plots of science fiction, but today these are the real problems that we are facing as a species.

Artificial intelligence is advancing at an astonishing rate, and there is the potential for AI to soon become so intelligent that it surpasses human intelligence. This is known as "superintelligence" and presents what is known as the "AI control problem." If AI were to become more intelligent than humans, it could potentially pose a threat to our existence. For example, an AI could decide that humans are a threat to its own existence and take steps to eliminate us. Or a badly informed or poorly instructed AI agent could follow its mission goals in an extreme way that would threaten the human population.

But even without super artificial intelligence there is the potential for AI to be used by bad actor humans for malicious purposes. AI could be used to create autonomous weapons systems that could kill without human intervention. It could also be used to manipulate people or spread misinformation.

There is the potential for AI to lead to mass unemployment. As AI becomes more sophisticated, it is likely to automate many tasks that are currently done by humans. This could lead to widespread job displacement and economic disruption.

It is also important to note that there are many people who believe that the benefits of AI outweigh the risks. However, it is important to be aware of the potential dangers of AI and to take steps to mitigate them. For instance, ethical guidelines for the development and use of AI should be put in place. These guidelines should ensure that AI systems are fair, unbiased, and safe. But who polices these ethical guidelines, and will international competition for the most powerful AI system force the abandonment of those guidelines?

More resources need to be dedicated to research on AI safety. This research should focus on developing techniques for preventing AI from becoming a threat to humanity. But how much risk can we tolerate? It's unlikely that any artificial superintelligence will be 100 percent safe.

Guest:

Roman Yampolskiy is an associate professor in the Department of Computer Science and Engineering at the University of Louisville J.B. Speed School of Engineering. His research expertise is in understanding the limitations, safety concerns, and controllability of artificial intelligence, including developing a plan for conceivable adversarial scenarios between humans and AI.

"The Source" is a live call-in program airing Mondays through Thursdays from 12-1 p.m. Leave a message before the program at (210) 615-8982. During the live show, call 833-877-8255 or email thesource@tpr.org.

*This interview will be recorded on Thursday, June 8.


Ark Invest’s Cathie Wood Is Betting Big On AI With These 4 Stocks Including One That Could Skyrocket 750% – Yahoo Finance

Ark Invest's Cathie Wood is known for investing in disruptive innovation. The super investor has placed big bets on artificial intelligence (AI), which is widely regarded as one of the most disruptive and transformative technologies today.

Speaking of AI stocks, it's hard to ignore what Nvidia Corp. (NASDAQ: NVDA) has been doing. Shares of the chipmaking giant have surged 165% so far this year, and the company crossed $1 trillion in valuation at one point.

Ark Invest's flagship fund Ark Innovation ETF (NYSEARCA: ARKK) exited its position in Nvidia in January, but some of its other exchange-traded funds (ETFs) still have positions in the chipmaker.

In a recent interview with Bloomberg Television, Wood said that Nvidia will do well over time. But she sees a new group of stocks that will benefit from the foundation that Nvidia has laid.

The keyword is software.

"In our view, for every dollar of hardware that Nvidia sells, software providers, SaaS [software as a service] providers will generate $8 in revenue," she said. "So we are looking to the software providers who are actually right now where Nvidia was when we first bought it."

The super investor then named three software companies she believes will thrive because of AI. Here's a look at the trio and another company she calls "the biggest artificial intelligence play."


UiPath is a robotic process automation software company that provides automation solutions for businesses. Its AI-powered UiPath Business Automation Platform is capable of understanding, automating and operating end-to-end processes.

In the first quarter of 2023, the company's revenue grew 18% year over year to $289.6 million. Notably, its dollar-based net retention rate was 122%.


The stock has surged 47% year to date, but it hasn't always been a hot commodity: In 2022, UiPath shares plunged 70%.

Wood's Ark Innovation ETF owns 28,865,375 shares of UiPath. With the position valued at $517.28 million, UiPath is the fourth-largest holding at Ark.

Twilio's cloud communications platform allows businesses to develop and integrate various communication channels into their applications. Its application programming interfaces enable developers to incorporate voice, messaging and video seamlessly, helping companies enhance customer engagement.

In the first quarter, Twilio surpassed 300,000 active customer accounts. Meanwhile, revenue rose 15% year over year to $1.01 billion.

In its latest earnings conference call, Twilio Co-Founder and CEO Jeff Lawson said he believes that artificial intelligence will be a material accelerant over time for Twilio's business.

Ark Innovation ETF holds 4,680,705 shares of Twilio, a stake with a market value of $303.54 million.

Wood's flagship fund also owns $301.07 million worth of telemedicine company Teladoc Health.

The company's platform connects patients with healthcare professionals through video, phone and messaging.

At the peak of the COVID-19 pandemic, when in-person nonemergency medical care was temporarily halted, the demand for telehealth services skyrocketed.

In 2020, Teladoc attracted a lot of investor attention as its revenue shot up 98%.

While the pandemic is largely in the rearview mirror, the company continues to expand its business. Teladoc's first-quarter revenue showed an 11% increase year over year.

The stock, however, wasn't able to maintain the upward momentum. Trading at $24.30 per share, Teladoc is down more than 90% from its all-time high reached in February 2021.

Wood's biggest bet in the AI arena is a company that isn't known for being an AI stock: Tesla Inc. (NASDAQ: TSLA).

"We talk about Tesla all the time; it actually is the biggest artificial intelligence play," she said.

The reason has to do with the electric car company's autonomous driving technology.

Tesla is Ark's largest holding with an 11.81% weighting.

Wood expects autonomous taxi platforms to deliver $8 trillion to $10 trillion in revenue globally in 2030 from almost zero right now.

And because of Tesla's capabilities on that front, the booming autonomous taxi market could take its share price to a whole new level.

"We believe that in five years, 2027, it will be a $2,000 stock if our research is correct," she said.

Considering that Tesla shares trade at around $235 right now, Wood's price target implies a potential upside of over 750%.
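
That figure is straightforward arithmetic on the two prices quoted above; a minimal sketch in Python, using the article's numbers:

current_price = 235.0   # approximate Tesla share price cited above
price_target = 2000.0   # Wood's 2027 price target

upside = (price_target - current_price) / current_price
print(f"Implied upside: {upside:.0%}")   # prints "Implied upside: 751%", i.e. over 750%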

Investing in disruptive innovation can be very lucrative, but sometimes it can feel like a roller coaster. For instance, while Tesla shares have more than doubled year to date, they are still down over 40% from their peak in November 2021.

If you don't like that kind of uncertainty, you might want to look into slow-changing industries that provide considerable cash returns to investors, such as those catering to basic human needs like food and shelter. For those seeking to generate passive income without the volatility associated with publicly traded stocks, there are avenues to invest in these essential service businesses through the private market.


This article Ark Invest's Cathie Wood Is Betting Big On AI With These 4 Stocks Including One That Could Skyrocket 750% originally appeared on Benzinga.com.



What might be the economic impact of AI tools like ChatGPT? – Economics Observatory

From checking for typos and recalling facts through to writing poems and creating art, artificial intelligence (AI) systems can now do tasks it would have been hard to imagine them being capable of even a few years ago.

These new generative AI systems, like ChatGPT, DALL-E 2 and OpenArt, have exploded in popularity over recent months. Recent data suggest that ChatGPT has over 100 million users and that OpenAI (owner and developer of ChatGPT) receives approximately one billion visitors to its website each month. Analysis from Swiss bank UBS indicates it is the fastest-growing consumer app in history.

While technology trends like web3 and the metaverse have attracted plenty of investment and media attention, whether they will have much real-world impact is still unclear. Generative AI seems to be different.

There is broad consensus among technology researchers and commentators that it is a highly significant development: at least as significant as the smartphone; probably as significant as the web; and perhaps as significant as electricity. Some go further still, claiming that it is the precursor to machine super-intelligence, and therefore represents an existential risk to humanity, which must be mitigated urgently.

This might seem surprising, given that AI systems have been part of UK citizens' daily lives for more than a decade. Whether or not we are aware of it, with their ability to detect patterns in historic data, AI systems have been ranking our social media feeds, determining what digital ads we are shown, and recommending films we might like to watch on streaming services.

So, what is different about generative AI? To paraphrase technology analyst and investor Benedict Evans, its novelty lies in running the pattern-matching machine in reverse. Instead of identifying existing examples that fit a given pattern, it generates new examples of the pattern. The output of generative AI systems is anything that can exist as a digital file: text, imagery, audio or video. The result is that an unprecedented super-proliferation of content is underway.

ChatGPT, one of the generative AI systems that has burst onto the scene, is an example of a large language model (LLM). Here, we focus on the economic implications of such systems.

LLMs produce text outputs in response to natural language instructions from a user (known as prompts). Trained on vast corpuses of text from books and the web, LLMs work by making iterative statistical predictions about what word should come next in a sequence. As a result, they can be used to produce articles, essays, computer code, stories, poems, song lyrics, messages, letters, political speeches and eulogies that are more or less indistinguishable from those written by humans.
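
To make that iterative prediction loop concrete, here is a toy sketch in Python. The vocabulary and probabilities are invented for illustration; a real LLM predicts over tens of thousands of tokens, conditioned on the full context:

import random

def predict_next_token(context: list[str]) -> dict[str, float]:
    # A real model returns a probability distribution over its entire
    # vocabulary, conditioned on the context; this stub hard-codes one.
    return {"the": 0.4, "a": 0.3, "every": 0.2, "no": 0.1}

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(n_tokens):
        dist = predict_next_token(tokens)
        # Sample the next word in proportion to its predicted probability.
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_token)
    return tokens

print(" ".join(generate(["Dogs", "chase"], 3)))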

The best-known LLMs are those developed by Silicon Valley company OpenAI, largely thanks to the popularity of its consumer-facing application ChatGPT. Released in November 2022, it is underpinned by OpenAI's GPT-3.5 model (premium subscribers can use the even more powerful GPT-4).

Other examples of LLMs include Google's LaMDA, accessible to users of its ChatGPT-like beta product Bard, Meta's LLaMA and Anthropic's Claude. Microsoft, meanwhile, has invested more than $10 billion in OpenAI and integrated GPT-3.5 into its search engine Bing. And well-capitalised start-up firms are in the process of productising LLMs for specific use-cases, including copywriting and the drafting of legal contracts.

Once they have got over marvelling at its ingenuity, ChatGPT users often find themselves irritated or enraged by its shortcomings. Partly because of its user interface, many instinctively expect it to work like a search engine, retrieving information from a database and presenting results with a high degree of factual accuracy. But this is not what LLMs are designed to do.

Instead, GPT-3.5's probabilistic approach frequently produces what have been called "hallucinations": factoids, non-existent URLs and fabricated academic references that are all the more hazardous because they are so plausibly articulated. A more banal frustration is the need to copy-and-paste ChatGPT's outputs into other software to make use of them. With this fact-checking and administrative burden, ChatGPT users could be forgiven for expressing scepticism at the idea that LLMs are poised to replace 300 million jobs.

But what this neglects to consider is that LLMs can also be used programmatically, via application programming interfaces (known as APIs). As an experiment, Ankur Shah and I built a database of information about UK insurance products. We then wrote a prompt for an insurance product review article and programmed a simple system to populate the prompt with the product data, send it to OpenAI's API, and push the LLM's output directly into a web content management system.

We were able to publish hundreds of online review articles like this one in less than an hour, at a cost of around $7. Completing the same project with human freelance writers would have taken several months, and cost closer to $70,000. Further, including real product data in the prompts pre-empted the LLM hallucinating incorrect cover limits or imaginary policy features.
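
The workflow can be sketched roughly as follows, assuming the pre-1.0 openai Python client; the product fields, prompt wording and CMS endpoint here are illustrative stand-ins rather than our actual system:

import openai
import requests

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT_TEMPLATE = (
    "Write a 600-word review of the UK insurance product described below. "
    "Use only the facts given; do not invent cover limits.\n\n{product}"
)

products = [
    {"name": "Acme Home Cover", "cover_limit": "£500,000", "excess": "£250"},
    # ...hundreds more rows from the product database
]

for product in products:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(product=product)}],
    )
    article = response.choices[0].message.content
    # Push the generated review straight into a (hypothetical) CMS API.
    requests.post("https://cms.example.com/api/articles",
                  json={"title": product["name"], "body": article})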

So, if used correctly and with careful prompts, LLMs like ChatGPT could indeed change the way in which certain jobs are done. In light of this practical experience, it is plausible that LLMs will mean disruption for writers most acutely in fields like content marketing, where subject matter expertise and a distinctive authorial voice are less important than in journalism or fiction.

The same goes for customer services. Chatbots powered by LLMs and trained on domain-specific data are a major upgrade on their predecessors; contrast a GPT-4-powered bot like Intercom's Fin, for example, with Aviva's clunky online assistant. The same bots can be connected to speech APIs, opening the potential for contact centres to be fully automated.

For organisations able to launch these systems, this would mean material reductions in operating costs. For customers, it would mean an end to queuing for web-chat or telephone support, since AI systems can handle hundreds of interactions simultaneously. But for front-line customer services workers, the prospect will seem rather less utopian.

These efficiency gains and operating cost-savings ought to have a favourable impact on productivity in some industries. This is especially true in sectors like financial services, telecoms, media and education. But it does not necessarily follow that higher sectoral productivity will lead to productivity improvements across the whole economy.

Indeed, despite widespread adoption of the previous generation of digital technologies, productivity growth has stagnated since the global financial crisis of 2007-09, to the continuing puzzlement of economists.

It could be that we have been living in an unproductive bubble, or that most organisations have not yet worked out how they can use mobile apps or big data analytics to become significantly more productive.

Whatever the reason, the implication is that we should not take for granted the idea that LLM-enabled cost-savings will produce productivity improvements for the whole economy. Back in 1987, economics Nobel laureate Robert Solow famously quipped that the computer age is everywhere except in productivity statistics. The same could well be true for AI.

But as Diane Coyle, one of the Economics Observatory's lead editors, writes, the real value of LLMs and other generative AI systems will not come from enabling a small number of technologically-advanced companies to slash their costs or invent new products. Rather, it will come from changing how things are produced, as assembly lines did in the 1910s, or just-in-time production in the 1980s.

To this end, the most important facet of LLMs may well be their aptitude for writing computer code. The UK faces a chronic shortage of software developers, accentuated by Brexit and the Covid-19 pandemic; in August 2022, there were more than 30,000 vacancies in this field. In this context, there are two roles that LLMs could play.

First, they can increase the productivity of today's developers, partly closing the skills gap. One lever is GitHub Copilot, an LLM-powered tool sometimes described as "autocomplete for code." This tool already has more than a million users and enables developers to write software up to 55% faster than they could previously.

But this pales in comparison with the second possibility, which is that large numbers of people with little or no coding experience could start using LLMs to build software. Until now, the main constraint on what computer systems can be built has been the availability of workers with skills in Python, PHP, JavaScript and so on; but in future it may well be the capability to imagine what a system might do and specify in natural language how it should function.

If we believe that LLMs will indeed change how software is produced, nurturing that capability through industrial and education policy would be a smart move for policy-makers.

When trying to make sense of the economic implications of new technologies, one of the biggest challenges is the obfuscating effect of hype and criti-hype. In the case of LLMs, technology executives have incentives to talk up the science fiction-inflected risk of artificial general intelligence obliterating humanity, as it increases the perceived value of actually existing products (like chatbots), which might otherwise seem trivial.

At the same time, in academia, think-tanks and the media, the booming market for commentary and opinion on the social and ethical implications of generative AI seems to give incentives for alarmism.

As is often the case, the reality is more mundane. The economic impact of LLMs will be felt first in content creation and customer services, before software development is changed dramatically, with any luck to the benefit of wider productivity.


Orases Expands Shopper Marketing with Artificial Intelligence Integration – Benzinga

June 13, 2023 8:00 AM

Strategic thinking, seamless integration, and flawless execution to drive awareness, conversions and market share.

FREDERICK, Md. (PRWEB) June 13, 2023

Maryland-based custom software developer Orases continues to collaborate with major retailers and consumer packaged goods companies to create and execute strategic shopper marketing programs, and now looks to artificial intelligence to assist with optimization. Orases' Vice President of Shopper Marketing, Stacey Shinneman, spearheads this initiative.


Shinneman has over 25 years of experience in the consumer packaged goods and retail industries, on both the agency and food broker sides. "My extensive history of shopper marketing experience combined with Orases' custom software provides additional technology services and increased web presence to consumer packaged goods companies and grocery retailers," said Ms. Shinneman.

Last year the Orases Shopper Marketing team helped clients reach and engage with 130 million consumers while earning over three million dollars in revenue. In 2023, a dedicated Marketing Coordinator was hired and the Shopper Marketing Team began utilizing AI Optimization to aid with personalization of consumer messaging, targeted advertising, and to have the benefit of predictive analytics. The AI expansion will deliver more efficient and effective marketing strategies, enhance the customer experience, and aid in achieving better business outcomes.


"We feel that we can continue to make an impact in the consumer packaged goods and grocery retail industries with our unique approach to software solutions," said Orases CEO Nick Damoulakis. "We can integrate both brand and retailer goals by combining multi-stream marketing with Orases' innovative technology platforms."

Thanks to the addition of Shopper Marketing, Orases has added brands like Huggies, Kleenex, and Publix Super Markets to their already impressive list of clients, including Major League Baseball, American Kidney Fund, and the NFL Foundation.

To learn more about Orases Shopper Marketing, visit: Orases.com/shoppermarketing.

About Orases

Orases is a full-service digital technology agency based in Frederick, Maryland, with locations in Chicago, IL, Tampa, FL, New York, NY and Washington, DC. Founded in 2000, Orases has become a trusted provider of custom software, website and mobile application development services and solutions that drive efficiency and provide measurable cost savings and revenue gains to their client partners. Orases can be contacted by phone at (301) 756-5527 or by visiting their website at https://orases.com/.

For the original version on PRWeb visit: https://www.prweb.com/releases/orases_expands_shopper_marketing_with_artificial_intelligence_integration/prweb19388801.htm



Accenture Will Invest $3 Billion to Expand Its A.I. Offerings – The New York Times

Unprecedented interest

As the corporate world reckons with the impact that artificial intelligence may have on, well, everything, the consulting firm Accenture announced on Tuesday that it will invest $3 billion in the technology over the next three years.

It's the latest sign of the growing enthusiasm for A.I., and how companies across the spectrum are moving to adapt and incorporate services like chatbots into their businesses. "There is unprecedented interest in all areas of A.I.," Julie Sweet, Accenture's C.E.O., said.

Accenture plans to double its A.I.-focused staff to 80,000, through a mix of hiring, acquisitions and training. (The firm has 738,000 employees.) It also plans to use generative A.I. more in its client work and help customers increase their use of the technology.

Other consulting firms have made big A.I. moves, too: PwC said in April that it would invest $1 billion over the next three years, while EY announced in 2021 that it would invest $2.5 billion over three years. Bain and Company has partnered with OpenAI, the maker of ChatGPT, while Deloitte is teaming up with the chip maker Nvidia. And IBM, whose A.I. work dates back at least to the introduction of Watson, has announced a Center of Excellence for generative A.I.

The business world overall is going big on A.I. Investments in generative A.I. alone are expected to hit $42.6 billion by year end, according to PitchBook. And mentions of "A.I." or "artificial intelligence" on corporate investor calls have soared this year.

But consulting giants are still grappling with what A.I. means for their business. They're already under pressure to stay relevant amid challenges to their industry, including clients potentially cutting back on their services amid economic headwinds.

While many firms are embracing A.I. to automate a growing number of tasks, some executives are quick to note that the technology can't replace all they do: "For any business, technology is usually not the real challenge, it's the people component that slows things down," Alex Singla, who leads McKinsey's A.I. consulting team, told Observer last week. "That's where I think management consulting still has a major role to play."

Donald Trump is being arraigned on classified material charges on Tuesday. The former president will appear in a Miami courtroom to face accusations tied to taking national security materials after leaving office. This evening, he plans to host a fund-raiser at one of his New Jersey golf courses; however, a super PAC backed by the Koch network has begun running ads against him.

Hard-right Republicans relent on paralyzing the House. Rebellious lawmakers agreed to let the chamber vote on some matters yesterday, after seizing control of the floor in retribution for Speaker Kevin McCarthy's role in the debt ceiling bill. They have threatened to stall further legislation if McCarthy doesn't give them more power.

Binance's U.S. arm fights an S.E.C. effort to freeze its assets. In a court filing ahead of a hearing scheduled for Tuesday, the crypto exchange urged a federal judge to reject the regulator's move, which it said would make staying in business all but impossible. The S.E.C. sued Binance last week, accusing the exchange of violating securities laws.

Will Apple cross the $3 trillion threshold again? Shares in the iPhone maker rose nearly 1.6 percent yesterday, putting its market value just shy of $2.9 trillion. Enthusiasm for Apple's new virtual-reality headset may help propel the company's market cap past $3 trillion for a second time (it hit that level last year), though its shares were down slightly in premarket trading.

Stocks look set to extend their gains on Tuesday morning as investors await a pivotal Consumer Price Index report, due for release at 8:30 a.m. Eastern.

Market participants are betting that Tuesday's inflation report will be relatively tame, giving the Fed the cover to leave interest rates unchanged at a meeting on Wednesday. The so-called "Fed pause" has helped turbocharge some rates-sensitive sectors, particularly tech stocks, in recent weeks, sending the Nasdaq and S&P 500 to 14-month highs yesterday.

The main thing to watch for: Economists are forecasting that inflation continued to ease last month, with the headline C.P.I. figure edging lower to 4.1 percent, a significant drop from last summer's peak of 9 percent. Economists see good progress on food and energy prices, which have held steady or fallen in recent months.

It's a different picture for core inflation, which strips out food and fuel prices. There's been less improvement there as used car prices, airfares and vacation lodging prices climbed in recent weeks. That speed bump, said Michael Gapen, chief U.S. economist at Bank of America, will keep the pressure on the Fed to raise rates this summer, probably in July.

"A skip is not the same as a prolonged pause," he wrote in a preview note.

Elsewhere in the markets:

Stocks in Hong Kong and Shanghai closed higher on Tuesday after Beijing surprised the market with a cut to one of its short-term lending rates. Investors expect several stimulus measures in China to lift domestic demand in the world's No. 2 economy as a downturn looms.

As regulators around the world aim to rein in Big Tech, the European Union is reportedly preparing to crack down on one of Google's most profitable businesses: the technology that powers much of the internet's advertising.

The European Commission is expected to file a formal antitrust complaint on Wednesday accusing Google of abusing its dominant position in ad tech, according to Bloomberg and The Wall Street Journal. The division is big for Google, bringing in nearly 14 percent of the company's $54.5 billion in ad revenue in the first quarter.

The commission began an investigation into Google's ad-tech division in 2021, and it has already imposed three penalties, worth some $8.6 billion, on other parts of the company, including those tied to its Android operating system.

The demand this time may be more drastic, according to The Journal: European regulators may rule that only selling off parts of the ad-tech business will restore competitive balance.

It's not just Europe piling on the pressure. The Justice Department has made similar accusations against Google's ad-tech business, and is seeking to unwind some of its acquisitions. British regulators, who have been flexing their muscles in recent months, are also investigating.

But will this dent Google's core business? Shares in its parent company, Alphabet, were up slightly in premarket trading on Tuesday despite the news, putting its market value at $1.5 trillion. And Google has been fighting the previous punishments from the E.U., having taken its defense in the Android case to the highest European regulatory court.

In other tech regulatory news, the F.T.C. sued in federal court to stop Microsoft from closing its $69 billion takeover of Activision Blizzard, a further hurdle for the megadeal.

Jay Monahan, the PGA Tour commissioner, writing to Congress about the standoff that ended last week with the professional golf body merging with LIV, a Saudi-backed rival competition. Senator Richard Blumenthal, Democrat of Connecticut, announced an inquiry into the deal.

JPMorgan Chase secured a potential $290 million deal with victims of the sex offender Jeffrey Epstein after a frantic weekend of calls, midnight meetings and last-minute negotiations.

But the bank's lawyers are not done: JPMorgan is fighting a separate case brought by the U.S. Virgin Islands, which filed new evidence yesterday that the bank's executives had known about the illegal activities of the disgraced financier, who died in 2019.

How the deal was reached: Lawyers for the bank and victims were far apart after weeks of negotiations that went down to the wire, David Boies, the lawyer who represents the victims, told DealBook. "It was very hard fought," he said of an agreement that still has to be approved by a judge.

On Sunday, Boies was taking calls as he dined with his family at a restaurant, and negotiations continued past midnight after he got back home. They resumed around dawn yesterday. After the two sides finally landed on a figure, talks continued over what the bank would say. JPMorgan reiterated yesterday that it regrets associating with Epstein but did not admit liability.

What's come out in the U.S. Virgin Islands case: The territory, where Epstein had a home, sued JPMorgan last year because it says the bank failed to stop him from setting up a sex trafficking operation there. "No one wants him," Epstein's private banker wrote in a 2008 email, the territory's new filing shows. It also disclosed a dozen communications from 2007 to 2013 that suggest executives were aware of Epstein's crimes.

Will they settle? A deal in the Virgin Islands case could be appealing, especially if JPMorgan's defense that the territory was complicit in facilitating Epstein's crimes is thrown out. The cases have moved unusually quickly because Judge Jed Rakoff is forcing the lawyers to be more realistic, said Boies. He believes that the bank's lawyers became more serious about settling with his clients after a May hearing, when Rakoff indicated he was inclined to certify the victims' case as a class action.


We'd like your feedback! Please email thoughts and suggestions to dealbook@nytimes.com.


Fast track to AGI: so, what’s the big deal? – Inside Higher Ed

The rapid development and deployment of ChatGPT is one station along the timeline of reaching artificial general intelligence. On Feb. 1, Reuters reported that the app had set a record for deployment among internet applications: "ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study. ... The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December. 'In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,' UBS analysts wrote in the note."

Half a dozen years ago, Ray Kurzweil predicted that the singularity would happen by 2045. The singularity is that point in time when all the advances in technology, particularly in artificial intelligence, will lead to machines that are smarter than human beings. In the Oct. 5, 2017, issue of Futurism, Christianna Reedy interviewed Kurzweil: to those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today, Parkinson's patients. "That's how cybernetics is just getting its foot in the door," Kurzweil said. And, because it's the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory.

It seems that we are closer than even an enthusiastic Kurzweil foresaw. Just a week ago, Reuters reported that Elon Musk's Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments. Musk envisions brain implants could cure a range of conditions including obesity, autism, depression and schizophrenia as well as enabling web browsing and telepathy.


The exponential development in succeeding versions of GPT is most impressive, leading one to project that version five may have the wherewithal to support at least some aspects of AGI:

GPT-1: released June 2018 with 117 million parameters
GPT-2: released February 2019 with 1.5 billion parameters
GPT-3: released June 2020 with 175 billion parameters
GPT-4: released March 2023 with a parameter count estimated to be in the trillions

Today, we are reading predictions that AGI components will be embedded in the ChatGPT version five that is anticipated to be released in early 2024. Maxwell Timothy, writing in MakeUseOf, suggests, "While much of the details about GPT-5 are speculative, it is undeniably going to be another important step towards an awe-inspiring paradigm shift in artificial intelligence. We might not achieve the much talked about artificial general intelligence, but if it's ever possible to achieve, then GPT-5 will take us one step closer."

Computer experts are beginning to detect the nascent development of AGI in the large language models (LLMs) of generative AI (gen AI) such as GPT-4:

Researchers at Microsoft were shocked to learn that GPT-4, ChatGPT's most advanced language model to date, can come up with clever solutions to puzzles, like how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way. Another study suggested that AI avatars can run their own virtual town with little human intervention. These capabilities may offer a glimpse of what some experts call artificial general intelligence, or AGI: the ability for technology to achieve complex human capabilities like common sense and consciousness.

We see glimmers of AGI capabilities in AutoGPT and AgentGPT. These forms of GPT can write and execute their own internally generated prompts in pursuit of a goal stated in an externally inputted prompt. Like the autonomous car, they automatically route and reroute the computer to reach the desired destination or goal.
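
The loop behind such agents is easy to sketch. The prompt wording, stop signal and the llm() stub below are invented for illustration, not AutoGPT's actual code:

def llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; always reports "DONE"
    # so this example terminates immediately.
    return "DONE"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # The model writes its own next prompt (a subtask) from the goal
        # and what has been done so far: the "internally generated prompts".
        subtask = llm(f"Goal: {goal}\nDone so far: {history}\n"
                      "Reply with the single next step, or DONE if finished.")
        if subtask.strip() == "DONE":
            break
        # ...then executes that subtask and feeds the result back in.
        result = llm(f"Carry out this step and report the result: {subtask}")
        history.append(f"{subtask} -> {result}")
    return history

print(run_agent("Summarize today's weather reports"))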

The concerns come with reports that some experimental forms of AI have refused to follow human-generated instructions and at other times have had hallucinations that are not founded in our reality. Ian Hogarth, the co-author of the annual State of AI report, defines AGI as "God-like AI": a super-intelligent computer that learns and develops autonomously and understands context without the need for human intervention, as written in Business Insider.

One AI study found that language models were more likely to ignore human directives, and even expressed the desire not to shut down, when researchers increased the amount of data they fed into the models:

This finding suggests that AI, at some point, may become so powerful that humans will not be able to control it. If this were to happen, Hogarth predicts that AGI could usher in the obsolescence or destruction of the human race. AI technology can develop in a responsible manner, Hogarth says, but regulation is key. "Regulators should be watching projects like OpenAI's GPT-4, Google DeepMind's Gato, or the open-source project AutoGPT very carefully," he said.

Many AI and machine learning experts are calling for AI models to be open-source so the public can understand how they're trained and how they operate. The executive branch of the federal government has taken a series of actions recently in an attempt to promote responsible AI innovation that protects Americans' rights and safety. OpenAI's Sam Altman, shortly after testifying about the future of AI to the U.S. Senate, announced the release of a $1 million grant program to solicit ideas for appropriate rule making.

Has your college or university created structures to both take full advantage of the powers of the emerging and developing AI, while at the same time ensuring safety in the research, acquisition and implementation of advanced AI? Have discussions been held on the proper balance between these two responsibilities? Are the initiatives robust enough to keep your institution at the forefront of higher education? Are the safeguards adequate? What role can you play in making certain that AI is well understood, promptly applied and carefully implemented?


Your Favorites Radio | All your favorite songs and artists – iHeartRadio

When many Americans think of artificial intelligence, they think of devices like Siri or Alexa. But that's so 2010, Glenn explains. In fact, AI is FAR beyond those capabilities, and it's learning more skills each day, some of which truly are terrifying to imagine. In this clip, Glenn recaps a recent conversation he had with Tristan Harris, a former design ethicist at Google. Glenn explains that we're now in "second contact" with A.I., and how A.I. is learning to MANIPULATE us just like fellow humans can.

Transcript

Below is a rush transcript that may contain errors

GLENN: So Tristan Harris is -- is a -- is an ethicist.

And he has done a -- a YouTube video, that came out, I think about a month ago.

And it is -- it's quite shocking.

We will tweet it out later. You really need to watch it.

And then I watched that video, and got him on to the podcast. And we spent an hour last night. And he's going to be on with me again next week.

He is warning that we are now what he calls second contact. First contact was social media.

And social media, its goal, not stated to you. But the goal of social media to you, is so you could connect and share with people. And -- and your life would be easier. And you could connect with your family. That's not what the goal was of the companies. The goal of the companies was, engagement. Find ways to keep people engaged. And it has led to really, addiction. To social media.

It has led to all of these problems. The divisions in our countries are -- in our country is zootype right now. Much of it because of social media.

It has -- it has learned that if you keep people angry, they engage longer.

So he says, all of the child sexualization. Everything that is going on right now, in our society.

Stems from social media. When he quit Google and said, you guys aren't paying attention here. He's an ethicist.

And he said, you won't even listen to the ethics. And the question of ethics. What do you mean, you want to -- you want them to engage longer, and so you know what you're doing, with the dopamine. You know what you're doing.

You're creating an actual addiction, in people. This is unethical.

And he left.

To warn. Now he says, we are at second contact with AI.

And it is so important that you understand, when we're talking about AI. Most people think of Siri.

That's not AI. Okay? That's AI, 2010.

And the only thing that that does is try to interpret. And it makes very slow gains. Okay?

In 2017 or '18, there was an entirely new engine released.

And it was only in the laboratories of Google and everything else.

It's not hit Siri. Okay?

And it's an entirely new engine. And it is the difference between a Model T and a jet engine.

And this particular jet engine works 24/7, trying to make itself more powerful. He describes this as the atomic bomb.

In everyone's hand. And an atomic bomb, that does what atomic bombs don't do.

Makes itself more powerful every day. It is on double exponential growth.

I told you two weeks ago, about this YouTube video that I saw with him. And I told you at the time, that it was -- and this was not programming. This just happened.

And they only found out, like, two months ago.

By mistake. They had no idea it was developing this.

It's developing a human trait, that adults have and kids have on reasoning. And in the beginning of the year, it was at like a 2-year-old. Then halfway, you know, six months later. It was at a 6-year-old.

And when I told you a couple weeks ago, it was a 9-year-old.

And the 9-year-old reasoning is this. When you tell your -- when it -- when it's trying to get information, think of your 9-year-old. Trying to get its own way.

It -- your child deals with you. Because it knows you.

And it's like, you know what, I can play mom and dad this way. If they say this, I'll say that. And I can get what I want. Okay?

It had the reasoning of a 9-year-old. And you're the adult.

So it could manipulate you, like a 9-year-old. And it could get its own way.

I told him, I said, on the podcast. I said, tell that story of what that really means.

I said, because a 9-year-old is scary.

And he said, oh, Glenn, that -- that's -- you got that from the YouTube thing.

And I said, yeah. And he said, oh, that's so yesterday.

He said, it's now in its 20s. So it is growing in knowledge, at exponential rates. When you have social media, it was engagement. AI, its stated goal is intimacy. That it becomes intimate with you.

And it creates something that you will bond with, and never, ever leave.

And it is -- remember, it manipulates. So he lays out all of the problems with this.

And he -- it was nice to talk to somebody, that knows so much more about it, than I do. But is not living in La La Land.

I asked him about AI. Which is artificial intelligence. And that's very, very narrow. On one thing.

Like Siri.

Can't do anything, except answer your question on what it can find either on your cloud, or on the web.

It can't really do anything. That's AI. That's what we've had.

AGI is artificial general intelligence. General intelligence is what you are. You are a general intelligence being.

You can be an expert on many things. You can know a little bit about a lot of things. You can be really good at more than one thing.

You have general intelligence. And you can expand that, the more you learn. The more you read. The more you do.

The more general you become. And the more of an expert on any and everything in your life. Okay?

But it requires you to do the work. Well, that's what AGI is. AGI is general intelligence. And it has to do the work.

Well, it is doing the work. It's teaching -- it's just taught itself chemistry, without being asked.

It taught itself Farsi, without anyone knowing about it.

It is teaching itself the most complex math, that people -- experts said, it may never ever be able to do this. Solve these equations. By the way, quantum computing.

It will not be able to solve these for years. It is now solving those.

Because it is learning. It is teaching itself, all of the time.

I asked him, how far away are we?

And I almost asked him, if you even believe in it. Because I've been talking about AGI and ASI super intelligence.

Stu, 25 years.

STU: As long as I've known you.

GLENN: Right. I absolutely believe. Ray Kurzweil believes in it.

But Ray Kurzweil is always getting hammered, because he believes that it will be here by 2030. AGI.

Okay. And that changes everything. That's when it will outmaneuver all of us.

He says, that's 2030.

Others have been saying, it will never happen. The general consensus has been that it would happen AGI. And possibly ASI. If that's even possible, they said.

By maybe 2050. Tristan, I said, how far away are we?

And he said, I think we're probably two and a half years, maybe outside five.

That's the general consensus now.

The general consensus?

The general consensus was, not sure it could even ever happen. He is -- you will -- when you watch him, you will see how sincere he is.

He and his colleagues are trying their best, to get a pause on this. This is not -- this is not ideal, okay?

You can say, well, China will continue to do whatever.

But what he's saying is, we have 12 to maximum 18 months.

If we do not pause this, in the next 12 months. He believes it could be the end or will eventually be the end of humanity.

Not in the way Ray Kurzweil predicted.

I've told you before, trans humanism is coming.

And transhumanism, if you're a Star Trek fan, think of the Borg.

The transhumanism is when we start to augment ourself, and put ourself in line with the internet and artificial intelligence. And we become one with it. It's the singularity.

That's why, when Stephen Hawking was dying, his last prediction was the end of the homo sapien. There will be no homo sapiens. Because of transhumanism.

I talked to Tristan about this. Transhumanism. And how evil and dangerous it really is.

And he is not predicting the end of homo sapiens. Because of transhumanism.

He is saying, we do not make it as a species. And it may include all life on earth, if this needs more food, more energy.

Because it will just continue to grow. At the expense of whatever.

It is ruthless.


Super Hi-Fi Introduces AI-Generated Weather Service For Radio – Radio World

"Weathercaster is set to significantly enhance the way radio stations generate localized real-time weather reports"

Super Hi-Fi, an AI-powered SaaS platform, has announced the launch of Weathercaster, a weather service for radio that is fully automated using artificial intelligence. "Weathercaster is set to significantly enhance the way radio stations generate localized real-time weather reports, providing highly accurate, timely information while completely automating the content creation and audio production processes," said Super Hi-Fi in a company press release.

The company says Weathercaster goes far beyond basic reports. Accessing Super Hi-Fi's MagicStitch technology, Weathercaster incorporates synthetic voiceovers, integrated sponsorships, format-specific music beds and custom station IDs into its automated weather reports. These segments can be tailored to fit 15-, 30- or 60-second time slots.

"Weathercaster is extremely powerful, and extremely affordable, so we can now make the power of AI production accessible for stations of all sizes," said Zack Zalon, co-founder and CEO of Super Hi-Fi. "Weathercaster combines accuracy and reliability, premium production quality, and an opportunity for stations to sell more premium sponsorships each day. Weathercaster doesn't just automate weather reports; it elevates them."

Weathercaster also offers radio stations custom, trackable sponsorship reads, designed to help stations sell more premium ad spots, according to the company. The service has three subscription tiers (basic, premium and enterprise) starting at $199 per month. Super Hi-Fi also offers bulk pricing for coverage across larger station groups.

[Read More Radio World Stories About Artificial Intelligence]

The author is a content producer for Radio World with a background spanning radio, television and print. She graduated from UNC-Chapel Hill with a degree in broadcast journalism. Before coming to Radio World, she was the assistant news director at a hyperlocal, award-winning radio station in North Carolina.

For more stories like this, and to keep up to date with all our market leading news, features and analysis, sign up to our newsletter here.


Fantasy fears about AI are obscuring how we already abuse machine intelligence – The Guardian

Opinion

We blame technology for decisions really made by governments and corporations

Sun 11 Jun 2023 01.31 EDT

Last November, a young African American man, Randal Quran Reid, was pulled over by the state police in Georgia as he was driving into Atlanta. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. Reid had never been to Louisiana, let alone New Orleans. His protestations came to nothing, and he was in jail for six days as his family frantically spent thousands of dollars hiring lawyers in both Georgia and Louisiana to try to free him.

It emerged that the arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed "a credible source" had identified Reid as the culprit. The facial recognition match was incorrect, the case eventually fell apart and Reid was released.

He was lucky. He had the family and the resources to ferret out the truth. Millions of Americans would not have had such social and financial assets. Reid, though, is not the only victim of a false facial recognition match. The numbers are small, but so far all those arrested in the US after a false match have been black. Which is not surprising given that we know not only that the very design of facial recognition software makes it more difficult to correctly identify people of colour, but also that algorithms replicate the biases of the human world.

Reid's case, and those of others like him, should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped the discussion of AI has become, and how it needs resetting. There has long been an undercurrent of fear of the kind of world AI might create. Recent developments have turbocharged that fear and inserted it into public discussion. The release last year of version 3.5 of ChatGPT, and of version 4 this March, created awe and panic: awe at the chatbot's facility in mimicking human language and panic over the possibilities for fakery, from student essays to news reports.

Then, two weeks ago, leading members of the tech community, including Sam Altman, the CEO of OpenAI, which makes ChatGPT, Demis Hassabis, CEO of Google DeepMind, and Geoffrey Hinton and Yoshua Bengio, often seen as the godfathers of modern AI, went further. They released a statement claiming that AI could herald the end of humanity. "Mitigating the risk of extinction from AI," they warned, "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

If so many Silicon Valley honchos truly believe they are creating products as dangerous as they claim, why, one might wonder, do they continue spending billions of dollars building, developing and refining those products? It's like a drug addict so dependent on his fix that he pleads for enforced rehab to wean him off the hard stuff. Parading their products as super-clever and super-powerful certainly helps massage the egos of tech entrepreneurs as well as boosting their bottom line. And yet AI is neither as clever nor as powerful as they would like us to believe. ChatGPT is supremely good at cutting and pasting text in a way that makes it seem almost human, but it has negligible understanding of the real world. It is, as one study put it, little more than a "stochastic parrot."

We remain a long way from the holy grail of artificial general intelligence, machines that possess the ability to understand or learn any intellectual task a human being can, and so can display the same rough kind of intelligence that humans do, let alone a superior form of intelligence.

The obsession with fantasy fears helps hide the more mundane but also more significant problems with AI that should concern us; the kinds of problems that ensnared Reid and which could ensnare all of us. From surveillance to disinformation, we live in a world shaped by AI. A defining feature of the new world of ambient surveillance, the tech entrepreneur Maciej Ceglowski observed at a US Senate committee hearing, is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be.

The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with the decisions made by humans. The humans that created the software and trained it. The humans that deployed it. The humans that unquestioningly accepted the facial recognition match. The humans that obtained an arrest warrant by claiming Reid had been identified by a credible source. The humans that refused to question the identification even after Reids protestations. And so on.

Too often when we talk of the problem of AI, we remove the human from the picture. We practise a form of what the social scientist and tech developer Rumman Chowdhury calls "moral outsourcing": blaming machines for human decisions. We worry AI will eliminate jobs and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. Headlines warn of racist and sexist algorithms, yet the humans who created the algorithms and those who deploy them remain almost hidden.

We have come, in other words, to view the machine as the agent and humans as victims of machine agency. It is, ironically, our very fears of dystopia, not AI itself, that are helping create a world in which humans become more marginal and machines more central. Such fears also distort the possibilities of regulation. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI and to new technology, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our sense of fatalism and our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.

Kenan Malik is an Observer columnist

Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words to be considered for publication, email it to us at observer.letters@observer.co.uk

