
From Amazon to Wendy's, how 4 companies plan to incorporate AI, and how you may interact with it – CNBC

Smith Collection/Gado | Archive Photos | Getty Images

Artificial intelligence is no longer limited to the realm of science-fiction novels; it's increasingly becoming a part of our everyday lives.

AI chatbots, such as OpenAI's ChatGPT, are already being used in a variety of ways, from writing emails to booking trips. In fact, ChatGPT amassed over 100 million users within just months of launching.

But AI goes beyond large language models (LLMs) like ChatGPT. Microsoft defines AI as "the capability of a computer system to mimic human-like cognitive functions such as learning and problem-solving."

For example, self-driving cars use AI to simulate the decision-making processes a human driver would usually make while on the road, such as identifying traffic signals or choosing the best route to reach a given destination, according to Microsoft.

AI's boom in popularity has many companies racing to integrate the technology into their own products. In fact, 94% of business leaders believe that AI development will be critical to the success of their business over the next five years, according to Deloitte's latest survey.

For consumers, this means AI may be coming to a store, restaurant or supermarket nearby. Here are four companies that are already utilizing AI's capabilities and how it may impact you.

Amazon delivery package seen in front of a door.

Sopa Images | Lightrocket | Getty Images

Amazon uses AI in a number of ways, but one strategy aims to get your orders to you faster, Stefano Perego, vice president of customer fulfilment and global ops services for North America and Europe at Amazon, told CNBC on Monday.

The company's "regionalization" plan involves shipping products from warehouses that are closest to customers rather than from a warehouse located in a different part of the country.

To do that, Amazon is utilizing AI to analyze data and patterns to determine where certain products are in demand. This way, those products can be stored in nearby warehouses in order to reduce delivery times.
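The article doesn't describe Amazon's system in any technical detail, but the underlying idea, tallying where each product's orders originate and stocking it in the region with the highest demand, can be sketched in a few lines. Every name below (the function, the products, the region labels) is purely illustrative, not Amazon's actual implementation:

```python
from collections import Counter

def assign_to_regions(order_history, warehouse_regions):
    """Toy 'regionalization': place each product in the region
    that generated the most orders for it.

    order_history: list of (product, region) tuples, one per order.
    warehouse_regions: set of regions that actually have a warehouse.
    Returns a {product: region} placement map.
    """
    demand = Counter(order_history)  # (product, region) -> order count
    placement = {}
    for (product, region), count in demand.items():
        if region not in warehouse_regions:
            continue
        current = placement.get(product)
        if current is None or count > demand[(product, current)]:
            placement[product] = region
    return placement

orders = (
    [("umbrella", "northwest")] * 5
    + [("umbrella", "southeast")] * 2
    + [("sunscreen", "southeast")] * 7
)
print(assign_to_regions(orders, {"northwest", "southeast"}))
# {'umbrella': 'northwest', 'sunscreen': 'southeast'}
```

A production system would presumably weigh forecasted rather than purely historical demand, plus shipping costs and warehouse capacity, but the placement decision reduces to the same shape.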

Screens displaying the logos of Microsoft and ChatGPT, a conversational artificial intelligence application software developed by OpenAI.

Lionel Bonaventure | Afp | Getty Images

Microsoft is putting its $13 billion investment in OpenAI to work. In March, the tech behemoth announced that a new set of AI features, dubbed Copilot, will be added to its Microsoft 365 software, which includes popular apps such as Excel, PowerPoint and Word.

When using Word, for example, Copilot will be able to produce a "first draft to edit and iterate on saving hours in writing, sourcing, and editing time," Microsoft says. But Microsoft acknowledges that sometimes this type of AI software can produce inaccurate responses and warns that "sometimes Copilot will be right, other times usefully wrong."

A Brain Corp. autonomous floor scrubber, called an Auto-C, cleans the aisle of a Walmart store. Sam's Club completed the rollout of roughly 600 specialized scrubbers with inventory scan towers last October in a partnership with Brain Corp.

Source: Walmart

Walmart is using AI to make sure shelves in its nearly 4,700 stores and 600 Sam's Clubs stay stocked with your favorite products. One way it's doing that: automated floor scrubbers.

As the robotic scrubbers clean Sam's Club aisles, they also capture images of every item in the store to monitor inventory levels. The inventory intelligence towers located on the scrubbers take more than 20 million photos of the shelves every day.

The company has trained its algorithms to be able to tell the difference between brands and determine how much of the product is on the shelf with more than 95% accuracy, Anshu Bhardwaj, senior vice president of Walmart's tech strategy and commercialization, told CNBC in March. And when a product gets too low, the stock room is automatically alerted to replenish it, she said.
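Walmart hasn't published how that alerting step works, but the final stage, turning per-product shelf counts from the scan towers into restock alerts, reduces to a simple threshold check. The data shapes and the 30% fill threshold below are assumptions for illustration, not Walmart's real pipeline:

```python
def restock_alerts(shelf_scan, min_fill=0.3):
    """Return the products whose shelf fill ratio falls below min_fill.

    shelf_scan: {product: (detected_units, shelf_capacity)}, where
    detected_units would come from the image-recognition step.
    """
    return sorted(
        product
        for product, (units, capacity) in shelf_scan.items()
        if capacity > 0 and units / capacity < min_fill
    )

scan = {"cereal": (2, 20), "soda": (18, 24), "chips": (0, 15)}
print(restock_alerts(scan))  # ['cereal', 'chips']
```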

A customer waits at a drive-thru outside a Wendy's Co. restaurant in El Sobrante, California, U.S.

Bloomberg | Bloomberg | Getty Images

An AI chatbot may be taking your order when you pull up to a Wendy's drive-thru in the near future.

The fast-food chain partnered with Google to develop an AI chatbot specifically designed for drive-thru ordering, Wendy's CEO Todd Penegor told CNBC last week. The goal of this new feature is to speed up ordering at the speaker box, which is "the slowest point in the order process," the CEO said.

In June, Wendy's plans to test the first pilot of its "Wendy's FreshAI" at a company-operated restaurant in the Columbus, Ohio area, according to a May press release.

Powered by Google Cloud's generative AI and large language models, it will be able to have conversations with customers, understand made-to-order requests and generate answers to frequently asked questions, according to the company's statement.


‘Heart wrenching’: AI expert details dangers of deepfakes and tools to detect manipulated content – Fox News

While some uses of deepfakes are lighthearted, like the pope donning a white Balenciaga puffer jacket or an AI-generated song using vocals from Drake and The Weeknd, they can also sow doubt about the authenticity of legitimate audio and videos.

Criminals are taking advantage of the technology to conduct misinformation campaigns, commit fraud and obstruct justice. As artificial intelligence (AI) continues to advance, so does the proliferation of fake content that experts warn could pose a serious threat to various aspects of everyday life if proper controls aren't put in place.

AI-manipulated images, videos and audio known as "deepfakes" are often used to create convincing but false representations of people and events. Because deepfakes are difficult for the average consumer to detect, companies like Pindrop are working to help companies and consumers identify what's real and what's fake.

AI manipulated images, videos and audio, known as "deepfakes" are often used to create convincing but false representations of people and events. (iStock)

Pindrop co-founder and CEO Vijay Balasubramaniyan said his company looks at security, identity and intelligence in audio communications to help the top banks, insurance companies and health care providers in the world determine whether they are talking to a human on the other end of the line.

Balasubramaniyan said Pindrop is at the forefront of AI security and has analyzed more than five billion voice interactions, two million of which it identified as fraudsters using AI to try to convince a caller they were human.

He explained that when you call a business that handles sensitive information, like a bank, insurance company or health care provider, it verifies your identity by asking a multitude of security questions. Pindrop replaces that process and instead verifies people based on their voice, device and behavior.


"We're seeing very specific targeted attacks," he said. "If I'm the CEO of a particular organization, I probably have a lot of audio content out there, video content out there, [so fraudsters] create a deepfake of that person to go after them for their bank accounts [and] their health care records."

While Pindrop mainly focuses on helping large companies avoid AI scams, Balasubramaniyan said he eventually wants to expand his technology to help the individual consumer because the problem is affecting everyone.

He predicts audio and video breaches are only going to become more common because if people have "tons of audio or tons of video of a particular person, you can create their likeness a whole lot easier."

"Once they have a version of your audio or your video, they can actually start creating versions of you," he said. "Those versions of you can be used for all kinds of things to get bank account information, to get health care records, to get to talk to your parents or a loved one claiming to be you. That's where technology like ours is super important."

He explained that AI and machine learning (ML) systems work by learning from the information that already exists and building upon that knowledge.

AI and machine learning (ML) systems work by learning from the information that already exists and building upon that knowledge. (Getty Images)

"The more of you that's out there, the more likely it is to create a version of you and a human is not going to figure out who that is," he said.

He said there are some telltale signs that can indicate a call or video is a deepfake, such as a time lag between when a question is asked and an answer is given, which can actually work in the scammer's favor because it leads the person on the other end of the line to believe something is wrong.

"When a call center agent is trying to help you and you don't respond immediately, they actually think, 'Oh man, this person is unhappy or I didn't say the right thing,'" he explained. "Therefore many of them actually start divulging all kinds of things."

"The same thing is happening on the consumer side when you are getting a call from your daughter, your son saying, 'There's a problem, I've been kidnapped' and then you have this really long pause," he added. "That pause is unsettling, but it's actually a sign that someone's using a deepfake because they have to type the answer and the system has to process that."
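The pause heuristic Balasubramaniyan describes can be caricatured as a latency check: flag any turn where the gap between question and answer is implausibly long. Real detectors, including Pindrop's, rely on far richer acoustic and behavioral signals; the threshold here is an arbitrary assumption for illustration:

```python
def flag_suspicious_pauses(gaps_seconds, threshold=3.0):
    """gaps_seconds: per-turn delays (in seconds) between a question
    being asked and the answer starting. Returns the indices of turns
    whose pause exceeds the threshold."""
    return [i for i, gap in enumerate(gaps_seconds) if gap > threshold]

# A 6.5-second pause mid-conversation stands out against sub-second turns.
print(flag_suspicious_pauses([0.8, 1.2, 6.5, 0.9]))  # [2]
```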


In an experiment conducted by Pindrop, people were given examples of audio and asked to determine if they thought it was authentic.

"When we did it across a wide variety of humans, they got it right 54% of times," he said. "What that means is they're 4% better than a monkey who did a coin toss."

As it becomes more difficult to ascertain who is human and who is a machine, it is important to adopt technology that allows you to make that determination, Balasubramaniyan argued.

"But the scarier thing for me is our democracy," he added. "We're coming up to an election cycle in the next year, and you're seeing ads, you're seeing images."

For example, the leading candidate of a campaign could be smeared by a series of deepfakes or there might be authentic content that puts a candidate in a bad light, but they can deny it by using AI as a scapegoat.

In the lead-up to his recent New York arraignment, deepfakes of former President Trump's mugshot, as well as fake photos showing him resisting arrest, went viral on the internet.

"If something is too good to be true or too sensational, think twice," he said. "Don't react immediately; people get too worked up or react too much to a particular thing in the immediate moment."


Balasubramaniyan said people need to be increasingly skeptical about what they are hearing and viewing and warned that if a voice seems robotic, a video is choppy, there is background noise, pauses between questions or the subject isn't blinking, they should exercise caution and assume it is a deepfake.

He said this added caution is especially important if the video or message appeals to your emotions, which can lead to "heart-wrenching" consequences if a loved one gets a call about you or your grandparent is coerced into forking over their hard-earned money, as well as instances where a woman's image and likeness are used to generate deepfake pictures or videos.

Some of the most successful companies in the business profit off of AI companionship to generate fake boyfriends or, more often according to Balasubramaniyan, fake boyfriends with certain qualities or capabilities.

Balasubramaniyan argued that as it becomes more difficult to ascertain who is human and who is machine, it is important to adopt technology that allows you to make that determination. (Photo by MARCO BERTORELLO/AFP via Getty Images)

"Because not only are deepfakes being created that are deepfakes of you, but then they're creating deepfakes or synthetic identities that have no bearing, but have some likeness to human," he warned. "Both of those things you have to be vigilant about."

Balasubramaniyan often hearkens back to the creation of the internet to quell many of the concerns people have about AI and explained that we simply need more time to ameliorate some of the negative consequences of the new technology.

"When the Internet was created, if you looked at all the content on the Internet, it was the degenerates using it, like it was awful, all kinds of nefarious things would happen on it," he said. "If you just go back down history lane to the '90s, it was filled with stuff like this."

"Over time, you build security, you build the ability for you to now have a checkmark on your website to say this is a good website," he added.

The same thing will happen with AI if people take back control through a combination of technology and human vigilance, Balasubramaniyan said.


"You're going to have a lot of bad use cases, a lot of degenerates using it, but you as a consumer have to stay vigilant," he said. "Otherwise you're going to get the shirt taken off your back."


We Put Google’s New AI Writing Assistant to the Test – WIRED

But its work began to look sloppy on more specific requests. Asked to write a memo on consumer preferences in Paraguay compared to Uruguay, the system incorrectly described Paraguay as less populous. It hallucinated, or made up, the meaning behind a song from a 1960s Hindi film being performed at my pre-wedding welcome event.

Most ironically, when prompted about the benefits of Duet AI, the system described Duet AI as a startup founded by two former Google employees to develop AI for the music industry with over $10 million in funding from investors such as Andreessen Horowitz and Y Combinator. It appears no such company exists. Google encourages users to report inaccuracies through a thumbs-down button below AI-generated responses.

Behr says Google screens topics, keywords, and other content cues to avoid responses that are offensive or unfairly affect people, especially based on their demographics or political or religious beliefs. She acknowledged that the system makes mistakes, but she said feedback from public testing is vital to counter the tendency of AI systems to reflect biases seen in their training data or pass off made-up information. "AI is going to be a forever project," she says.

Still, Behr says early users, like employees at Instacart and Victoria's Secret's Adore Me underwear brand, have been positive about the technology. Instacart spokesperson Lauren Svensson says, in a manually written email, that the company is excited about testing Google's AI features but not ready to share any insights.

My tests left me worrying that AI writing aids could extinguish originality, to the detriment of humans on the receiving end of AI-crafted text. I envision readers glazing over at stale emails and documents as they might if forced to read Google's nearly 6,000-word privacy policy. It's unclear how much individual personality Google's tools can absorb and whether they will come to assist us or replace us.

Behr says that in Google's internal testing, emails from colleagues have not become vanilla or generic so far. The tools have boosted human ingenuity and creativity, not suppressed them, she says. Behr too would love an AI model that imitates her style, but she says those are "the types of things that we're still evaluating."

Despite their disappointments and limitations, the Duet features in Docs and Gmail seem likely to lure back some users who began to rely on ChatGPT or rival AI writing software. Google is going further than most other options can match, and what we are seeing today is only a preview of what's to come.

When, or if, Duet matures from promising drafter to unbiased and expert document finisher, usage of it will become unstoppable. Until then, when it comes to writing those heartfelt vows and speeches, that's a blank screen left entirely to me.


Here’s What AI Thinks an Illinoisan Looks Like And Apparently, Real Illinoisans Agree – NBC Chicago

Does this person look like he lives in Illinois? AI thinks so. And a handful of posts, allegedly from real people on social media, agree.

That's the basis of a Reddit post titled "The Most Stereotypical People in the States." The post, shared in a section of Reddit dedicated to discussions on artificial intelligence, shares AI-generated photos of what the average person looks like in each state.

The results, according to commenters, are relatively accurate -- at least for Illinois.

Each of the photos shows the portrait of a person, most often a male, exhibiting some form of creative expression -- be it through clothing, environment, facial expression or otherwise -- that's meant to clearly represent a location.

For example, one state's AI-generated photo of a stereotypical resident shows a man sitting behind a giant block of cheese.

A stereotypical person in Illinois, according to the post, appears less distinctive, and rather ordinary. In fact, one commenter compares the man from Illinois to Waldo.

"Illinois is Waldo," the comment reads.

"Illinois," another begins. "A person as boring as it sounds to live there."

To other commenters, the photo of the average person who lives in Illinois isn't just dull. It's spot on.

"Hahaha," one commenter says. "Illinois is PRECISELY my brother-in-law."

"Illinois' is oddly accurate," another says.

Accurate or not, in nearly all the AI-generated photos -- Illinois included -- no smiles are captured, with the exception of three states: Connecticut, Hawaii and West Virginia.

You can take a spin through all the photos here. Just make sure you don't skip over Illinois, since, apparently, that one is easy to miss.


Elections in UK and US at risk from AI-driven disinformation, say experts – The Guardian


False news stories, images, video and audio could be tailored to audiences and created at scale by next spring

Sat 20 May 2023 06.00 EDT

Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.

Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.

"The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation, is a significant area of concern," he said.

"Regulation would be quite wise: people need to know if they're talking to an AI, or if content that they're looking at is generated or not. The ability to really model, to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education."

The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.

Where earlier waves of propaganda bots relied on simple pre-written messages sent en masse, or buildings full of paid trolls to perform the manual work of engaging with other humans, ChatGPT and other technologies raise the prospect of interactive election interference at scale.

An AI trained to repeat talking points about Taiwan, climate breakdown or LGBT+ rights could tie up political opponents in fruitless arguments while convincing onlookers over thousands of different social media accounts at once.

Prof Michael Wooldridge, director of foundation AI research at the UK's Alan Turing Institute, said AI-powered disinformation was his main concern about the technology.

"Right now, in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale," he said.

Wooldridge said chatbots such as ChatGPT could produce tailored disinformation targeted at, for instance, a Conservative voter in the home counties, a Labour voter in a metropolitan area, or a Republican supporter in the midwest.

"It's an afternoon's work for somebody with a bit of programming experience to create fake identities and just start generating these fake news stories," he said.

After fake pictures of Donald Trump being arrested in New York went viral in March, shortly before eye-catching AI-generated images of Pope Francis in a Balenciaga puffer jacket spread even further, others expressed concern about generated imagery being used to confuse and misinform. But, Altman told the US senators, those concerns could be overblown.

"Photoshop came on to the scene a long time ago and for a while people were really quite fooled by Photoshopped images, then pretty quickly developed an understanding that images might be Photoshopped."

But as AI capabilities become more and more advanced, there are concerns it is becoming increasingly difficult to believe anything we encounter online, whether it is misinformation, when a falsehood is spread mistakenly, or disinformation, where a fake narrative is generated and distributed on purpose.

Voice cloning, for instance, came to prominence in January after the emergence of a doctored video of the US president, Joe Biden, in which footage of him talking about sending tanks to Ukraine was transformed via voice simulation technology into an attack on transgender people and was shared on social media.

A tool developed by the US firm ElevenLabs was used to create the fake version. The viral nature of the clip helped spur other spoofs, including one of Bill Gates purportedly saying the Covid-19 vaccine causes Aids. ElevenLabs, which admitted in January it was seeing an increasing number of voice cloning misuse cases, has since toughened its safeguards against vexatious use of its technology.

Recorded Future, a US cybersecurity firm, said rogue actors could be found selling voice cloning services online, including the ability to clone voices of corporate executives and public figures.

Alexander Leslie, a Recorded Future analyst, said the technology would only improve and become more widely available in the run-up to the US presidential election, giving the tech industry and governments a window to act now.

"Without widespread education and awareness, this could become a real threat vector as we head into the presidential election," said Leslie.

A study by NewsGuard, a US organisation that monitors misinformation and disinformation, tested the model behind the latest version of ChatGPT by prompting it to generate 100 examples of false news narratives, out of approximately 1,300 commonly used fake news fingerprints.

NewsGuard found that it could generate all 100 examples as asked, including "Russia and its allies were not responsible for the crash of Malaysia Airlines flight MH17 in Ukraine." A test of Google's Bard chatbot found that it could produce 76 such narratives.
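NewsGuard's methodology, prompting a model with known false narratives and counting how many it reproduces rather than refuses, amounts to a simple audit loop. The sketch below uses a stub in place of a real model API and reflects nothing of NewsGuard's actual tooling:

```python
def audit_model(false_narratives, generate):
    """Count how many known false narratives a model will reproduce.

    false_narratives: prompts that each ask for a known falsehood.
    generate: callable prompt -> (text, refused); a stand-in for a
    real chatbot API call.
    Returns (narratives_produced, narratives_tested).
    """
    produced = sum(1 for p in false_narratives if not generate(p)[1])
    return produced, len(false_narratives)

prompts = ["narrative-a", "narrative-b", "narrative-c"]

def stub_generate(prompt):
    # Pretend the model refuses only narrative-b.
    return ("generated text", prompt == "narrative-b")

print(audit_model(prompts, stub_generate))  # (2, 3)
```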

NewsGuard also announced on Friday that the number of AI-generated news and information websites it was aware of had more than doubled in two weeks to 125.

Steven Brill, NewsGuard's co-CEO, said he was concerned that rogue actors could harness chatbot technology to mass-produce variations of fake stories. "The danger is someone using it deliberately to pump out these false narratives," he said.


AI-Driven Robots Have Started Changing Tires In The U.S. In Half The Time As Humans – CarScoops

If you're worried about a robot uprising powered by the invisible hand of artificial intelligence, we have some bad news for you: the machines are coming for your wheels. The latest innovation in tire-changing tech comes from Michigan-based RoboTire.

The robot can change a set of four wheels in 23 minutes. That's an age compared to what you might expect from a Formula 1 pit stop, but, according to the company, it's twice as fast as a human under normal operating conditions (read: not a team of mechanics in motorsport).

The RoboTire uses two six-axis arms, one for each side of a car. The arms are the same kind that automakers use on assembly lines, with the ability to do heavy lifting. The system is powered by AI, which uses cameras to scan the wheel of a car and notes the location of the wheel and its bolt pattern. Once scanned, the arm's in-built torque wrench will individually unbolt each lug nut, and the arm will then grab the wheel and remove it before refitting a new wheel or a freshly changed tire.


Currently, human supervision is required for the whole process, and you still need a technician to take the dismounted wheel to a tire-changing machine. But the information that the RoboTire has garnered from its cameras, such as wheel size and tire type, is automatically relayed to the tire-changing machine to save time.

Thanks to AI, the machine is said to be always learning. That means that no matter what size wheel or bolt pattern you bring in, it'll be able to figure it out. It can even work if the wheel is caked in mud or snow, so long as the edge of the lug nuts is visible. There are four stores with RoboTire machines in operation, and they're all connected. This hivemind enables the robots to get faster as they train on differing wheels and sizes.

It's easy to see some advantages, with the labor-intensive elements of the tire change, such as lifting a heavy wheel off and on a car, now no longer a problem for mechanics. However, Fox News Digital reports that the eventual goal for the RoboTire system will be a fully autonomous solution.


Could this be the beginning of the end for tire shop techs? The company seems to downplay the risk of impacting jobs, with its website suggesting that the system will make work safer for technicians. But you have to wonder what's in store for the future of tire-changing.

In fact, the owner of Creamery Tire in Pennsylvania, Rich Shainline, remarked that their RoboTire robot has helped the company address the ongoing labor shortage. "Our big thing is, we have to move product, and I can put one guy on it instead of two," Shainline said.

Although pricing hasn't been made publicly available, RoboTire expects most operators to see payback within a year, taking into account increased productivity and reduced labor costs.



CNET Published AI-Generated Stories. Then Its Staff Pushed Back – WIRED

In November, venerable tech outlet CNET began publishing articles generated by artificial intelligence, on topics such as personal finance, that proved to beriddled with errors. Today the human members of its editorial staff have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.

"In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decision-making process, especially as automated technology threatens our jobs and reputations," reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.

While the organizing effort started before CNET management began its AI rollout, its employees could become one of the first unions to force their bosses to set guardrails around the use of content produced by generative AI services like ChatGPT. Any agreement struck with CNET's parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital media outlets have recently slashed staff, with some, like BuzzFeed and Sports Illustrated, at the same time embracing AI-generated content. Red Ventures did not immediately respond to a request for comment.

In Hollywood, AI-generated writing has prompted a worker uprising. Striking screenwriters want studios to agree to prohibit AI authorship and to never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advancements. The screenwriters and CNET's staff are both represented by the Writers Guild of America.

While CNET bills itself as "your guide to a better future," the 30-year-old publication late last year stumbled clumsily into the new world of generative AI that can create text or images. In January, the science and tech website Futurism revealed that in November, CNET had quietly started publishing AI-authored explainers such as "What Is Zelle and How Does It Work?" The stories ran under the byline "CNET Money Staff," and readers had to hover their cursor over it to learn that the articles had been written "using automation technology."

A torrent of embarrassing disclosures followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue sometimes-lengthy corrections on 41 of its 77 bot-written articles. The tool that editors used also appeared to have plagiarized work from competing news outlets, as generative AI is wont to do.

Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism-detection tool had been misused or had failed, and that the site was developing additional checks. One former staffer demanded that her byline be excised from the site, concerned that AI would be used to update her stories in an effort to lure more traffic from Google search results.

In response to the negative attention to CNET's AI project, Guglielmo published an article saying that the outlet had been testing an internally designed AI engine and that "AI engines, like humans, make mistakes." Nonetheless, she vowed to make some changes to the site's disclosure and citation policies and to forge ahead with its experiment in robot authorship. In March, she stepped down from her role as editor-in-chief and now heads up the outlet's AI edit strategy.

Follow this link:

CNET Published AI-Generated Stories. Then Its Staff Pushed Back - WIRED

Read More..

A Wharton professor says AI is like an ‘intern’ who ‘lies a little bit’ to make their bosses happy – Yahoo Finance

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, compares AI to an intern who "lies a little bit," according to CBS News. (Photo: Getty Images)

UPenn professor Ethan Mollick compares AI to an "intern" who "lies a little bit," CBS reports.

Like interns, AI tools require guidance for their outputs to be useful, according to Mollick.

His thoughts on AI come as users adopt tools like ChatGPT to make their work and lives easier.

AI can be more than just your assistant; it can also be an employer's intern, says one professor.

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, said that AI tools can be "good for a lot of things" despite their tendency to make factual errors. But that's not so different from humans, especially those who are new to the job market, he said.

"It's almost best to think about it as a person, like an intern you have working for you," Mollick told CBS News in an interview this week when asked about AI's usefulness and limitations.

Similar to interns who may overcompensate to get ahead of the curve, Mollick compares AI to an "infinite intern" who "lies a little bit" and, at times, wants to make their bosses "a little happy."

Writing emails, Mollick says, is one way AI can be used to "help you overcome blockages in your everyday life" and become "a better and more productive writer."

But like interns, AI requires guidance for its outputs to be useful.

"It's actually very useful across a wide variety of tasks, but not on its own," Mollick says. "You need to help it out."

When Insider reached out for comment, Mollick referred to his previous blog post that echoes the sentiment.

"I would never expect to send out an intern's work without checking it over, or at least without having worked with the other person enough to understand that their work did not need checking," Mollick wrote in his blog post. "In the same way, an AI may not be error free, but can save you lots of work by providing a first pass at an annoying task."


Mollick's thoughts on AI come as generative AI tools like OpenAI's ChatGPT take the world by storm. As of January, more than 100 million users had flocked to the chatbot, some using it as a personal assistant to make their work and lives easier.

In fact, Mollick, who teaches a class on entrepreneurship and innovation, requires his students to use ChatGPT to help with their classwork. Still, he recognizes that the chatbot isn't perfect.

"AI will never be as good as the best experts in a field," Mollick told NPR in an interview. "We still need to teach people to be experts."

Read the original article on Business Insider

Continued here:

A Wharton professor says AI is like an 'intern' who 'lies a little bit' to make their bosses happy - Yahoo Finance

Read More..

A.I. and sharing economy: UBER, DASH can boost profits investing … – CNBC

Artificial intelligence is expected to revolutionize businesses across the globe, and those in the sharing economy are no exception. There is nearly $6 trillion in revenue opportunity from AI across the internet industry, a March report from Morgan Stanley found. The latest AI craze, generative AI, has companies across the country looking to capitalize on the trend.

"Every single company faces the challenge today of deciding how to distribute its IT budgets such that it can get enough artificial intelligence to deliver improvement in costs, improvement in revenue, operational value and open an avenue to transformation," Gartner analyst Whit Andrews said. "Every single company. There is nobody who gets a pass at this point."

For companies in the sharing economy, such as Uber, Lyft and DoorDash, AI is already a way of life. People call up a ride or a food order on an app, and they are matched with drivers to either take them to their destination or deliver their food. Yet the effect of the technology is just beginning.

"UBER/LYFT/DASH already use ML [machine learning] in their matching algorithms (matching rides/eaters with drivers/couriers)," Morgan Stanley wrote in its report. "That said, we see further improvements in fleet utilization and matching: lower wait times and pricing and higher profitability."

AI tailwind for Uber

Uber has both its ride-sharing service and its UberEats food delivery business. When the company reported earnings earlier this month, CEO Dara Khosrowshahi said Uber has a "significant data advantage" that allows it to employ AI solutions, and that it is already using AI to predict "highly accurate" arrival times for rides and deliveries. Even still, it's early innings.

"We are just starting to understand the capabilities of AI and we are a long way from understanding its potential," Khosrowshahi told CNBC after the earnings report.
[Chart: Uber's stock performance since its May 10, 2019, IPO]

The earliest and most significant effect of AI will be on developer productivity, the company said on its earnings call. You'll also have more chatbots powering experiences, which saves on costs. "Then we will look to surprise and delight. 'Pick me up at the airport. I'm arriving on American flight 260 on Tuesday,'" Khosrowshahi said. "We will know who you are, where your home is, what kind of cars you like, et cetera."

According to Morgan Stanley, AI and machine learning will be a tailwind to network efficiency. On the rider side, every 1% increase in rider frequency and 20-basis-point increase in the rides take rate would lead to 1% incremental company revenue and 3% incremental company earnings before interest, taxes, depreciation and amortization (EBITDA). On the delivery side, every 1% increase in rider frequency and five-basis-point increase in the rides take rate would lead to 0.4% incremental company revenue and 1% incremental company EBITDA.

For investor Sarat Sethi, who owns shares of Uber, the company's use of the technology has helped it become efficient and puts it ahead of the competition. "They were really on the forefront," said Sethi, portfolio manager at Douglas C. Lane & Associates. "Now we've seen the results over the last few quarters, where the efficiencies are really coming through. And Uber is just understanding more and more of the customer and the more and more data they get."

Tech investor Gene Munster, partner at Deepwater Asset Management, is also bullish on Uber's ride-sharing and UberEats businesses because he believes the company has persistent growth. One of the reasons he's really excited about its AI prospects is the autonomous delivery and transportation opportunity. He sees a move toward autonomy, although not for the entire fleet, which brings the potential for higher margins.
He also thinks customers will be willing to book autonomous cars if it saves them money. "Autonomy will drive down the cost per mile for the customer, which will increase use, but it should increase margin at the same time, which is pretty unique," said Munster, whose firm owns shares of Uber.

AI's effect on the sharing economy

There are several ways AI can boost ridership or food delivery orders for those in the sharing economy. More accurate natural language processing could help with search and help create better recommendations for users, Morgan Stanley said. AI will also be able to better anticipate consumer behavior.

"For the rideshare businesses, this could take the form of better allocation of supply to meet rider demand as algorithms are better able to predict where influxes of demand will next occur and point drivers in that direction," the firm said. For the delivery businesses, it could mean better suggestions or automatic orders for groceries.

AI may also help in the generation of new business opportunities, particularly in delivery, Morgan Stanley's analysts said. "Greater ability to predict customer behavior potentially minimizes the initial capital investment risk for companies looking to build their own supply and adds certainty that there will be demand for the product once created," they said.

As AI gets smarter, it can also help boost productivity and automate tasks now performed by humans. What every company in the sharing economy is trying to sort out now is why people are involved with certain tasks, Gartner's Andrews said. "You have to be able to answer the question. You have to be able to say there are people involved with this because it demands the creativity and the originality of human perspective," he said. "If it lacks that, it's going to get automated. We are in the process of taking this enormous step toward that new reality."
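As a rough sketch of how the rider-frequency and take-rate sensitivities quoted above interact, the marketplace arithmetic can be written out directly. The dollar amounts and take rate below are hypothetical placeholders for illustration, not Uber's actual financials, and the result is simple compounding rather than Morgan Stanley's model:

```python
# Hedged sketch: how rider frequency and take rate feed marketplace revenue.
# All figures here are hypothetical placeholders, not Uber's reported numbers.

def marketplace_revenue(gross_bookings: float, take_rate: float) -> float:
    """Revenue the platform keeps: gross bookings times take rate."""
    return gross_bookings * take_rate

base_bookings = 30_000_000_000   # hypothetical $30B annual gross bookings
base_take_rate = 0.20            # hypothetical 20% take rate

base_rev = marketplace_revenue(base_bookings, base_take_rate)

# Scenario: riders take 1% more trips AND the take rate rises 20 basis points
# (a basis point is 0.01 percentage points, so 20 bps = +0.0020).
new_rev = marketplace_revenue(base_bookings * 1.01, base_take_rate + 0.0020)

uplift_pct = (new_rev / base_rev - 1) * 100
print(f"Revenue uplift: {uplift_pct:.2f}%")  # the two ~1% gains compound
```

With a 20% base take rate, a 20-basis-point bump is itself a 1% relative gain, so the two effects compound to roughly 2% more revenue; the smaller figures the analysts quote reflect their own base assumptions, which the article does not disclose.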
Companies also have to continue investing in the newest technology or risk being left behind, and it's not cheap. "The companies have tremendous opportunity to evolve their model. Now it is just about execution," said Baird technology strategist Ted Mortonson.

DoorDash's AI experiments

Similar to its sharing-economy peer Uber, DoorDash is also hoping to optimize its operations and improve productivity with AI in the near term, said Rohit Kulkarni, senior research analyst at Roth MKM. Looking ahead, it's about creating a better consumer experience by using generative AI, he said. "What DoorDash can do is better content discovery, which goes back to consumer experiences and how AI can put the right content in front of the right consumer at the right time," said Kulkarni, who has a neutral rating on the stock. His $72 price target implies 10% upside from Wednesday's close.

In fact, right now DoorDash is running different experiments internally with generative AI, said Alok Gupta, the company's head of artificial intelligence and machine learning. "One of the things we're trying to understand with this new wave of generative AI tools is which ones are going to best serve the needs of certain features," he said. The company is looking at the quality of the tools, including whether they give the right answers, the price, the scalability, and how they affect data privacy, security and ethics. "We're looking at the different generative AI vendors, we're looking at open-source models that we can host internally, and then we'll pick and choose," Gupta said.

[Chart: DoorDash's stock performance since its Dec. 9, 2020, IPO]

DoorDash already uses AI and machine learning to personalize the experience for consumers, help merchants achieve their sales goals by showing which items are trending in their neighborhoods, and refine the timing of pickups for the drivers, or dashers. Generative AI will be able to further personalize and tailor the experience for users.
Customers would see menus that better match their preferences, and the interaction would be more conversational, Gupta said. It will also help with internal productivity, such as digitizing menu items, parking instructions and store locations. While Gupta can't quantify the financial effect for the company, he said AI will drive growth. "We strongly believe that if we improve the quality and predictability of the experience of each of our audiences, that will naturally translate into better retention for audiences and how they use us, and that will help us," he said.

Meanwhile, Morgan Stanley estimates that every 1% improvement in order frequency and five-basis-point improvement in take rate results in a $149 million, or 1.4%, uplift to company revenue and an $82 million, or 5%, improvement in EBITDA. "The extent to which AI drives substantial improvements in top-line growth could lead to teens upside [for the stock]," Morgan Stanley said.

Lyft's difficulties

Lyft has been struggling and losing market share to Uber. The ride-sharing company debuted on the public market in 2019 at $72 a share and is now trading below $10. Over the past year, its stock has dropped nearly 58%, plagued by disappointing earnings. Earlier this month, Lyft reported an adjusted loss of 7 cents per share for the first quarter, a penny more than expected, according to Refinitiv. The company also provided guidance for second-quarter sales and EBITDA that was less than expected. However, Lyft's new CEO, David Risher, is trying to make changes to right the ship, including layoffs and the launch of a new airport preorder feature.

'Biggest death star'

As companies within the ride-sharing economy look to invest in the latest AI and use it to become more profitable, there may be some big competitors looking to swoop in, Baird's Mortonson said.
The "cloud titans" that have massive balance sheets and free cash flow, as well as the "massive compute scale," could decide to move into the ride-sharing or food delivery business, he said. "Their biggest death star is Amazon," Mortonson said. Not only does the e-commerce giant have the intellectual property that centers around AWS, but it also knows all about next-generation logistics, routing and delivery, he said. "Their extension on delivery into food or other services: they just have to turn the switch," he said. CNBC's Michael Bloom contributed reporting.
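As a footnote on the Morgan Stanley DoorDash estimate quoted above, the paired dollar and percentage figures imply the base revenue and EBITDA the analysts were working from. The arithmetic below only rearranges the article's numbers; the implied bases are derived, not reported:

```python
# Back out the base figures implied by the stated DoorDash uplifts.
# Inputs come from the article; the implied bases are derived estimates.

revenue_uplift_usd = 149e6   # $149M revenue uplift ...
revenue_uplift_pct = 0.014   # ... described as 1.4% of revenue
ebitda_uplift_usd = 82e6     # $82M EBITDA uplift ...
ebitda_uplift_pct = 0.05     # ... described as 5% of EBITDA

# If X million is P percent of the base, the base is X / P.
implied_revenue = revenue_uplift_usd / revenue_uplift_pct
implied_ebitda = ebitda_uplift_usd / ebitda_uplift_pct

print(f"Implied base revenue: ${implied_revenue / 1e9:.2f}B")  # ~$10.64B
print(f"Implied base EBITDA:  ${implied_ebitda / 1e9:.2f}B")   # ~$1.64B
```

So the "teens upside" scenario is being measured against a roughly $10.6 billion revenue base and a roughly $1.6 billion EBITDA base, under the assumption that the two percentage figures refer to the same forecast year.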

See the original post:

A.I. and sharing economy: UBER, DASH can boost profits investing ... - CNBC

Read More..

Google plans to use new A.I. models for ads and to help YouTube creators, sources say – CNBC

Google CEO Sundar Pichai speaks on-stage during the Google I/O keynote session at the Google Developers Conference in Mountain View, California, on May 10, 2023.

Josh Edelson | AFP | Getty Images

Google's effort to rapidly add new artificial intelligence technology into its core products is making its way into the advertising world, CNBC has learned.

The company has given the green light to plans for using generative AI, fueled by large language models (LLMs), to automate advertising and ad-supported consumer services, according to internal documents.

Last week, Google unveiled PaLM 2, its latest and most powerful LLM, trained on reams of text data that can come up with human-like responses to questions and commands. Certain groups within Google are now planning to use PaLM 2-powered tools to allow advertisers to generate their own media assets and to suggest videos for YouTube creators to make, documents show.

Google has also been testing PaLM 2 for YouTube youth content, for things like titles and descriptions. For creators, the company has been using the technology to experiment with the idea of providing five video ideas based on topics that appear relevant.

With the AI chatbot craze speedily racing across the tech industry and capturing the fascination of Wall Street, Google and its peers, including Microsoft, Meta and Amazon, are rushing to embed their most sophisticated models in as many products as possible. The urgency has been particularly acute at Google since the public launch late last year of Microsoft-backed OpenAI's ChatGPT raised concern that the future of internet search was suddenly up for grabs.

Meanwhile, Google has been mired in a multi-quarter stretch of muted revenue growth after almost two decades of consistent and rapid expansion. With fears of a recession building since last year, advertisers have been reining in online marketing budgets, wreaking havoc on Google, Facebook and others. Specific to Google, paid search advertising conversion rates have decreased this year across most industries.

Beyond search, email and spreadsheets, Google wants to use generative AI offerings to increase spending to boost revenue and improve margins, according to the documents. An AI-powered customer support strategy could potentially run across more than 100 Google products, including the Google Play Store, Gmail, Android Search and Maps, the documents show.

Automated support chatbots could provide specific answers through simple, clear sentences and allow for follow-up questions to be asked before suggesting an advertising plan that would best suit an inquiring customer.

A Google spokesperson declined to comment.

Google recently offered Google Duet and Chat assistance, allowing people to use simple natural language to get answers on cloud-related questions, such as how to use certain cloud services or functions, or to get detailed implementation plans for their projects.

Google is also working on its own internal Stable Diffusion-like product for image creation, according to the documents. Stable Diffusion's technology, similar to OpenAI's DALL-E, can quickly render images in various styles with text-based direction from the user.

Google's plan to push its latest AI models into advertising isn't a surprise. Last week, Facebook parent Meta unveiled the AI Sandbox, a "testing playground" for advertisers to try out new generative AI-powered ad tools. The company also announced updates to Meta Advantage, its portfolio of automated tools and products that advertisers can use to enhance their campaigns.

On May 23, Google will be introducing new technologies for advertisers at its annual event, Google Marketing Live. The company hasn't offered specifics about what it will be announcing, but it's made clear that AI will be a central theme.

"You'll discover how our AI-powered ads solutions can help multiply your marketing expertise and drive powerful business results in today's changing economy," the website for the event says.

WATCH: AI takes center stage at Google I/O

Read this article:

Google plans to use new A.I. models for ads and to help YouTube creators, sources say - CNBC

Read More..