
EU considering whether to attend Britain’s AI summit, spokesperson … – Reuters

AI (Artificial Intelligence) letters are placed on computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

LONDON, Sept 22 (Reuters) - The European Union is considering whether to send officials to Britain's upcoming artificial intelligence safety summit, a spokesperson told Reuters, as the bloc nears completion of wide-ranging AI legislation that is the first of its kind globally.

British Prime Minister Rishi Sunak is set to host the summit in November bringing together governments, tech companies and academics to discuss the risks posed by the technology.

But the invitee list has been kept under wraps, with some companies declining to say whether they have been invited.

European Commission Vice President Vera Jourova has received a formal invitation to the summit, the spokesperson said, adding: "We are now reflecting on potential EU participation."

AI has seen rapid growth in investment and consumer popularity since the release of OpenAI's ChatGPT chatbot.

While Sunak hopes to position Britain as the global leader in regulating the rapidly developing technology, the EU is close to rolling out its own AI Act, the first such legislation in the world.

Under the bloc's incoming rules, it is expected that organisations using AI systems the bloc deems high risk will have to log their activities, complete rigorous risk assessments and make some internal data available to authorities.

However, the Financial Times reported that British government officials favour a less "draconian" approach to AI regulation than the EU.

Tech expert Matt Clifford and former senior diplomat Jonathan Black have been appointed to lead preparations for the summit. Last month, Clifford told Reuters he hoped the summit would set the tone for future international debates on AI regulation.

While a number of world leaders, including U.S. Vice President Kamala Harris, are expected to attend the summit, it largely remains unknown who else has been invited -- or who has accepted an invitation.

The British government was recently forced to defend its decision to invite China to the summit.

The country's finance minister Jeremy Hunt told Politico: "If you're trying to create structures that make AI something that overall is a net benefit to humanity, then you can't just ignore the second-biggest economy in the world."

Reporting by Martin Coulter; Editing by Hugh Lawson

Our Standards: The Thomson Reuters Trust Principles.

Read the original here:

EU considering whether to attend Britain's AI summit, spokesperson ... - Reuters

Read More..

Is It Time to Move On From C3.ai Stock? – The Motley Fool

The rising popularity of artificial intelligence (AI) has caught the attention of just about everybody on Wall Street. Analysts at financial institutions are attending seminars, webinars, and everything in between to try and get a grasp on which companies are making inroads in AI versus those that might be using the new buzzword as a means to attract attention.

While Big Tech firms such as Alphabet, Microsoft, Amazon, and Nvidia are already looking like the leaders of the AI pack, other growth companies, including Palantir, MongoDB, and ServiceNow, should not be overlooked. In the midst of these names sits one tech company that, from my purview, should be left behind: C3.ai (AI -4.44%).

Given its name and ticker symbol, C3.ai briefly experienced some meme trading activity during the early days of AI hype. However, after a couple of mundane earnings reports, savvy investors appear to be catching on to the company's lack of prospects. Let's dig in and analyze why it may be time to move on from C3.ai and look elsewhere.

C3.ai is a software-as-a-service (SaaS) company that generates revenue from two sources: subscriptions and professional services. Subscriptions represent recurring software licenses, and therefore growth in this category garners more intense scrutiny from investors.

For its fiscal first quarter ended July 31, C3.ai reported $72.4 million in total revenue. While this was at the higher end of the company's previously issued guidance, subscription revenue grew only 8% year over year in the quarter.

Moreover, roughly two-thirds of fiscal Q1 bookings stemmed from the Federal Defense and Aerospace sectors. To put this into context, one of C3.ai's biggest competitors is Palantir. One of Wall Street's most bearish stances on Palantir is that the company historically relied on large, lumpy government contracts. However, over the last couple of years, Palantir has showcased its ability to penetrate the private sector.

Image Source: Getty Images

When it comes to AI, I do not believe there is a one-size-fits-all solution. Depending on the needs and use cases of the client, a multi-pronged approach may be best. For example, advertisers who are seeking sophisticated tools to measure the strength of a marketing campaign may find that Alphabet offers a suite of products that can achieve this goal. On the other hand, an enterprise that relies heavily on ChatGPT might find that Microsoft offers the best combination of services for its needs. Whatever the case, Big Tech will likely be a central component of a company's AI roadmap.

On the other hand, some smaller companies are showing just how prolific generative AI capabilities can be. As mentioned above, Palantir is witnessing surging demand fueled by its latest AI tool, released earlier this year. For this reason, the company's stock has experienced some new life and brought it within reasonable valuation levels of other growth equities.

Yet perhaps the biggest beneficiary of AI is Nvidia. The company's data center products and highly coveted semiconductor chips have Nvidia positioned at the nucleus of AI. Moreover, many of these companies, from Big Tech to growth stocks, generate billions of dollars of positive free cash flow. However, C3.ai is still unprofitable (reporting net losses) on a GAAP basis and is not yet generating free cash flow.

Although C3.ai is growing its top line, it simply is not doing so at a rate that challenges the competition. Moreover, as the company continues to burn cash, the gap with its rivals will likely widen, since many other AI players can fund growth with existing cash flow.

Image Source: Company Filings and The Motley Fool

As of the time of this article, C3.ai stock is up 130% year to date. While this looks like a huge victory, it's a bit misleading.

Since its initial public offering (IPO) in 2020, C3.ai stock is down almost 80%. Furthermore, since reporting earnings earlier this month, this stock is down 14%.

During the latest earnings season, investors got a front-row seat into the AI endeavors of several names in the tech sector. It's become clear that Big Tech is not messing around and that the behemoths are willing to invest billions into this new frontier. Additionally, some smaller players are emerging as potential leaders in the AI arms race and doing so impressively by maintaining profit margins.

From my standpoint, C3.ai sits at the other end of the spectrum. The company's revenue growth is lackluster, and it is struggling to build a profitable operation. Until the company can prove, as Palantir did, that it's more than a glorified government contractor, there is very little to like about this stock.

I suggest staying away from any momentum that may enter the price, as I believe it is driven more by hype and shenanigans than by the underlying fundamentals of the business. If you currently hold the stock, I think now is a good time to exit and prevent yourself from becoming a bag holder. Should you incur a loss, the silver lining is that you can use it for tax-loss harvesting purposes. If you are still looking for AI exposure, any capital you can recoup is likely better off reinvested in other stocks, such as those mentioned throughout this article.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon.com, Microsoft, Nvidia, and Palantir Technologies. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Microsoft, MongoDB, Nvidia, Palantir Technologies, and ServiceNow. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.

Read more:

Is It Time to Move On From C3.ai Stock? - The Motley Fool

Read More..

Pioneer of ‘mind-reading’ AI to open Maury Strauss Distinguished … – Virginia Tech

Once the fodder of science fiction, mind-reading artificial intelligence (AI) is no longer far-fetched: it's already here. And researchers like Virginia Tech's very own Read Montague have spent decades building it.

"What started as a backwater movement in the '80s is now a revolution with untold potential," said Montague, the Virginia Tech Carilion Vernon Mountcastle Research Professor and director of the Center for Human Neuroscience Research at the Fralin Biomedical Research Institute at VTC.

Montague is among the world's top neuroscientists who have long deployed machine learning tools to decode and predict complex human behaviors and the neural signaling that supports them.

Now he's lifting the lid on what he's learned over 30 years as a frontrunner in computational psychiatry and neuroscience. He will explore the history of machine learning in neuroscience and his research in his talk, "Machine Learning and Human Thought," at 5:30 p.m. Sept. 28 at the research institute.

Montague's research has spanned the neural basis of risky decision-making, confirmation bias, risk-reward analysis, mental states during the simulated commission of a crime, impulsiveness, and political ideologies.

His group was the first to observe nanoscale variations in brain chemicals in awake humans in a groundbreaking 2011 study. Montague later discovered how dopamine and serotonin jointly underpin sensory processing and human perception in 2016, 2018, and 2020.

With collaborator and fellow Fralin Biomedical Research Institute professor Stephen LaConte, Montague established one of the world's first labs applying optically pumped magnetometry, a breakthrough brain imaging technique, to parse the intricacies of social interaction.

He was invited by his former mentor and colleague Michael Friedlander, executive director of the Fralin Biomedical Research Institute and Virginia Tech's vice president for health sciences and technology, to present the institute's 116th Maury Strauss Distinguished Public Lecture, debuting the 2023-24 series.

"Dr. Montague's contributions to neuroscience have enriched our understanding of the brain and paved the way for a new era of scientific exploration," Friedlander said. "I can't think of a better thought leader to share prescient insights about the impact of machine learning on brain research until now and what the future might hold. It's an honor to share one of our own highly regarded scientists with our community."

Today, Montague's peers and students revere his vanguard intersectional approaches to studying the brain. Over the years, he's collaborated with economists, physicists, neurosurgeons, lawyers, and psychologists to explore novel scientific questions.

Unlike many neuroscientists who started as biologists, psychologists, or chemists, Montague was a mathematician at Auburn University before completing a doctoral degree in physiology and biophysics at the University of Alabama at Birmingham in Friedlander's lab.

"I was a senior in college when I read a paper by Geoffrey Hinton, the father of AI, and Terry Sejnowski describing the very first learning algorithms for Boltzmann machines. That study galvanized my interest in neural networks, and from that point, I was set on working in Terry's lab," Montague said.

And that's what he did. But it took a while to get there.

Montague completed a theoretical neurobiology fellowship sponsored by Nobel Laureate Gerald Edelman at Rockefeller University's Neurosciences Institute and later joined Sejnowski's Howard Hughes Medical Institute Computational Neurobiology Lab at the Salk Institute.

In collaboration with Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Germany, Montague proposed a reinforcement learning model of the meaning of dopamine signaling in the brain, a model that is now seen as a signature breakthrough for computational models that yield new insights into brain function.

"Even back then, Dr. Montague was pushing boundaries. He was among the first to apply machine-learning models to interpret vast amounts of fMRI data. He was among the first to measure neurochemical levels in awake humans using machine-learning enhanced cyclic voltammetry. And he's now among the first to encode the brain's magnetic waves with unprecedented resolution, opening up new ways to visualize brain activity. The sheer scope of his research is remarkable," Friedlander said.

Montague has published about 140 scientific papers in high-impact journals, accumulating over 42,000 citations. Recently, his research operation received a new $3 million award for computational neurochemistry work in conscious humans, and the group currently maintains four active National Institutes of Health grants in addition to two projects recently funded by the Red Gates Foundation as part of a landmark $50 million gift to the institute earlier this month.

Before joining Virginia Tech in 2010, Montague was the Brown Foundation Professor of Neuroscience and Psychiatry at Baylor College of Medicine, where he founded and directed the Human Neuroimaging Lab.

In addition to his primary appointment at the Fralin Biomedical Research Institute, Montague is a professor with the College of Science's physics department and the Virginia Tech Carilion School of Medicine's psychiatry and behavioral medicine department.

Last year, Montague presented a Nobel Mini-symposium lecture hosted by the Nobel Assembly in Stockholm, focused on his early modeling work of the dopamine system. In 2018, he gave the Dorcas Cummings Memorial Lecture at Cold Spring Harbor Laboratory, and in 2012, he delivered a TEDGlobal talk in Edinburgh.

He is an honorary professor with the Wellcome Centre for Human Neuroimaging at University College London and was a Wellcome Trust Principal Research Fellow from 2011-18. He formerly was a member of the MacArthur Foundation Research Network on Law and Neuroscience and the Institute for Advanced Study in Princeton. He received the Walter Gilbert Award from Auburn University and the William R. and Irene D. Miller Lectureship from Cold Spring Harbor Laboratory in 2011 and was awarded the Michael E. DeBakey Excellence in Research Award in 1997 and 2005.

"Beyond his technological innovations, Montague has opened a critical window into human behavior in health and disease with his key role in developing the temporal difference prediction reward hypothesis," Friedlander said. "This concept has now been directly tested in the living brain, is foundational to modern neuroscience, and provides deep insights into previously unrecognized behavior. In addition to providing insights into human brain health, this hypothesis has been validated by Montague and his team in providing a deeper understanding, from an evolutionary biology perspective, into behaviors such as how honey bees process and share essential information about nectar sources with their conspecifics."

The institute's free public lecture series is made possible by Maury Strauss, a longtime Roanoke businessman and benefactor who recognizes the importance of bringing leading biomedical research scientists to the community.

The public is welcome to attend the lecture, preceded by a 5 p.m. reception at the Fralin Biomedical Research Institute at 2 Riverside Circle in Roanoke. Montague's talk will be streamed live via Zoom and archived on the institute's website.

More:

Pioneer of 'mind-reading' AI to open Maury Strauss Distinguished ... - Virginia Tech

Read More..

Google expects no change in its relationship with AI chip supplier … – Reuters

A smartphone with a displayed Broadcom logo is placed on a computer motherboard in this illustration taken March 6, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Sept 21 (Reuters) - Alphabet's (GOOGL.O) Google said on Thursday it does not see any change in its relationship with Broadcom (AVGO.O) following a media report the tech giant considered dropping the chipmaker as a supplier of artificial intelligence chips as early as 2027.

Broadcom shares pared losses after falling as much as 4.3% following The Information's report that Google would design the chips, called tensor processing units, in-house if it decided to go ahead with the plan, potentially saving billions of dollars in costs annually.

Google has been ramping up chip investments this year as it plays catch-up with Microsoft (MSFT.O) for domination of the booming market for generative AI applications such as ChatGPT.

The report had said Google's deliberations came about after a standoff between the company and Broadcom over the price of the TPU chips, and that Google has also been working to replace Broadcom with Marvell Technology (MRVL.O) as the supplier of chips that glue its servers together.

"Our work to meet our internal and external Cloud needs benefit from our collaboration with Broadcom; they have been an excellent partner and we see no change in our engagement," a Google spokesperson said.

Shares of Marvell, which declined to comment, reversed course and were down 1.3%.

Broadcom did not respond to a Reuters request for comment.

Broadcom is seen as the second-biggest winner from the generative AI boom after Nvidia (NVDA.O). CEO Hock Tan had predicted in June the technology could account for more than a quarter of the company's semiconductor revenue next year.

In May, J.P. Morgan analysts estimated Broadcom could get $3 billion in revenue from Google this year after a "recent order acceleration" by the company for its TPU processors.

Google co-designs its AI chips with Broadcom, and the tech giant has already lined up the semiconductor firm for its sixth-generation processor, the analysts said. They added Broadcom also works with Meta Platforms (META.O) on the social media giant's custom chips.

Big technology companies from Microsoft to Amazon.com (AMZN.O) have in recent years rushed to develop custom chips that help them save on costs and are suited to their specific workloads.

That push has accelerated this year after prices for Nvidia's H100, the chip that powers most generative AI apps, surged to nearly double its original cost of $20,000.

Reporting by Kanjyik Ghosh and Aditya Soni in Bengaluru; Additional Reporting by Chavi Mehta and Jaspreet Singh; Editing by Savio D'Souza, Nivedita Bhattacharjee and Krishna Chandra Eluri

Our Standards: The Thomson Reuters Trust Principles.

Continued here:

Google expects no change in its relationship with AI chip supplier ... - Reuters

Read More..

Taiwan is using generative AI to fight Chinese disinfo – Defense One

Many U.S. observers are waiting in dread for China to attempt a military takeover of Taiwan sometime before 2027, but beneath the threshold of armed conflict, China is already attacking vital Taiwanese information streams, both physically and virtually, while the island develops new tools and techniques to resist.

In April, a Chinese fishing vessel, followed by a cargo ship, dragged their anchors east of Taiwan's Matsu islands, severing the two communications cables that link the islands with Taiwan itself, an act of either sabotage or clumsiness that has occurred at least 27 times in just the last five years. Taiwan has said that it suspects the severings were intentional. And of course attacks on commercial and public telecommunications channels are now a common occurrence between adversarial nations, as when Russia attacked the U.S.-based satellite communications company Viasat an hour before Moscow launched its renewed war on Ukraine.

The Taiwanese government took the interruption as an opportunity to help citizens develop workarounds to continuous Chinese-caused, er, service interruptions.

"We took that as a chance to not just teach people about, you know, microwave, and also satellite [communications] backup and things like that; we also saw a lot of civil society start learning about how to set up emergency communications when the bandwidth is limited," Audrey Tang, Taiwan's minister of digital affairs, told audiences during the Special Competitive Studies Project Summit in Washington, D.C., on Thursday.

But various Chinese-backed actors also regularly target the Taiwanese population with coordinated messaging and influence campaigns. A 2019 report from cybersecurity company Recorded Future found that the Chinese government employs as many as half a million people to sway opinions on social media at home and around the world.

As Taiwan approaches a pivotal presidential election in January, Tang said that both the government and a wide network of volunteers are preparing for China to increase efforts to manipulate Taiwanese civilians. Taiwanese civil society has developed new organizations to combat it. A group called Cofacts allows users to forward dubious messages to a chatbot. Human editors check the messages, enter them into a database, and get back to the user with a verdict.

The volunteers who man the service, and others like it, are few compared to the size of China's efforts, said Tang. "The people who actually do foreign interference nowadays, coordinated with cyber attack, have a lot of resources," she said.

Enter generative AI tools such as large language models, which power some of the big breakthrough online AI tools such as ChatGPT. This year, "because gen AI is just so malleable, they just fine-tuned a language model together that can clarify such disinformation, adding back a context and things like that. So we're no longer outnumbered," she said. It also allows the citizen-run venture to remain as such, as opposed to being run by the government, which is important for its credibility. It doesn't need dedicated hardware or resources and can be done on laptops. "It is still squarely in the social sector by volunteers, which is the best place to be so that it will not be captured by any state or a capitalist apparatus."

The U.S. intelligence community is also looking at how it can use generative artificial intelligence to raise productivity, Avril Haines, the Director of National Intelligence, said during the summit.

"We have a program called the Unity program that's supposed to do just that. It's actually focused on artificial intelligence and it's supposed to help us take the best practices and then scale them and fund the scaling to some extent," Haines said.

But she said the United States has much more work to do to understand both the development ecosystem for such tools that are rooted in private companies and better detect how adversaries will use generative AI to attack the United States.

One fear, she said, is that new tools like generative AI are so powerful that small nations or non-state actors will be able to use them to great effect, rivaling the capabilities of much larger, more predictable adversaries like China. "The state actors that we typically focus on are also, yes, going to be part of what we're going to be looking at in terms of the threat. But if you've got something that's cheaply available, that's commercially available, you might also see other state actors that typically would not be engaging in these kinds of threats doing so more because it's more available to them," she said.

View original post here:

Taiwan is using generative AI to fight Chinese disinfo - Defense One

Read More..

CNBC Daily Open: Dispelling the AI hallucination – CNBC

Signage for Nvidia Corp. during the Taipei Computex expo in Taipei, Taiwan, on Tuesday, May 30, 2023.

Hwa Cheng | Bloomberg | Getty Images

This report is from today's CNBC Daily Open, our new, international markets newsletter. CNBC Daily Open brings investors up to speed on everything they need to know, no matter where they are. Like what you see? You can subscribe here.

Infectious pessimism: U.S. stocks fell for a third consecutive day as Treasury yields continued rising to multiyear highs. The pan-European Stoxx 600 slumped 1.3% amid a flurry of central bank decisions. Sweden hiked rates by 25 basis points to 4%; Norway raised its rate from 4% to 4.25%; Switzerland kept rates unchanged. For more central bank decisions, see below.

A halt and a big hike: The Bank of England elected to keep interest rates unchanged at its September meeting, breaking a series of 14 straight rate hikes. But the decision wasn't unanimous: Four out of nine members voted for another 25-basis-point hike to 5.5%. In other central bank news, Turkey hiked its interest rate to 30%, a 5-percentage-point jump from 25%.

Securing business and the internet: Cisco is acquiring Splunk, a cybersecurity software company, for $157 a share in a cash deal. The total deal's worth $28 billion, about 13% of Cisco's market capitalization, making it the company's largest acquisition ever. Cisco's known for making computer networking equipment, but has been boosting its cybersecurity business recently to grow its revenue stream.

Succession: Rupert Murdoch is stepping down as chairman of the board of Fox Corp and News Corp in November. The 92-year-old will be succeeded by his son Lachlan Murdoch. Fox Corp is the parent company of Fox News, a TV channel embroiled in a $787.5 million settlement this year over false claims that Dominion Voting Systems' machines swayed the 2020 U.S. presidential election.

[PRO] 'Uninvestable' banking sector: Steve Eisman, the investor who called and profited from the subprime mortgage crisis that began in 2007, thinks "the whole bank sector is uninvestable." Silicon Valley Bank collapsed in March this year, sparking panic and causing depositors to withdraw money at other regional banks. But that's not the only risk to banks weighing on Eisman's mind.

Four months after hype over artificial intelligence fired up markets, the rally's starting to look more like a hallucination: a confident but false claim AI models are prone to making.

For evidence, look no further than Nvidia, the spark that ignited the whole blaze. Shares of the chipmaker peaked on Aug. 24 and have tumbled 18.4% since. While it's true Nvidia's still up 181% for the entire year, that's more than 60 percentage points below its August peak, when shares were up 244%.

Microsoft's announcement of a broad rollout of Copilot, the company's AI tool, to corporate clients didn't stoke excitement. On the contrary, Microsoft shares dipped 0.39% after the company's event. By contrast, recall how share prices popped to a record in May after the company announced the pricing of the Copilot subscription service.

And Arm, which tried to position itself as integral to AI computing, saw its shares descend to Earth after rocketing on the first day of its initial public offering. After dropping almost 1% in extended trading, the shares trade at around $51.60 apiece, just 60 cents above the IPO price.

In short, investor interest in AI, while still hot in comparison with other sectors, looks like it's simmering down.

"The combination of waning retail demand and cautious risk sentiment among institutional investors may pose a substantial risk to the AI sector, potentially heralding a pronounced reversal in the weeks ahead," said Vanda Research's senior vice president Marco Iachini.

Blame the usual suspects for this lukewarm sentiment: higher-for-longer interest rates and Treasury yields, caused by spiking oil prices and a tight labor market. (Initial jobless claims for last week dropped to their lowest level since late January, according to the U.S. Labor Department.)

Against that backdrop, it's unsurprising major indexes had a bad day. The Dow Jones Industrial Average fell 1.08%, the Nasdaq Composite slid 1.82% and the S&P 500 lost 1.64%, the most in a day since March. All three indexes are poised for a losing week, with the tech-heavy Nasdaq the deepest in the red so far.

If it's any comfort, September, the worst month for stocks historically, ends in a week. Investors will hope it'll pass like a bad dream, or a banished hallucination.

See original here:

CNBC Daily Open: Dispelling the AI hallucination - CNBC

Read More..

AI is policing the package theft beat for UPS as ‘porch piracy’ surge continues across U.S. – CNBC

A doorbell camera in Chesterfield, Virginia, recently caught a man snatching a box containing a $1,600 new iPad from the arms of a FedEx delivery driver. Barely a day goes by without a similar report. Package theft, often referred to as "porch piracy," is a big crime business.

While the price tag of any single stolen package isn't extreme (a study by Security.org found that the median value of stolen merchandise was $50 in 2022), the absolute level of package theft is high and rising. In 2022, 260 million delivered packages were stolen, according to home security consultant SafeWise, up from 210 million packages the year before. All in all, it estimated that 79% of Americans were victims of porch pirates last year.

In response, some of the big logistics companies have introduced technologies and programs designed to stop the crime wave. One of the most recent examples, set to soon go into wider deployment, came in June from UPS, with its API for DeliveryDefense, an AI-powered approach to reducing the risk of delivery theft. The UPS tech uses historic data and machine learning algorithms to assign each location a "delivery confidence score," which is rated on a one-to-1,000 scale.

"If we have a score of 1,000 to an address that means that we're highly confident that that package is going to get delivered," said Mark Robinson, president of UPS Capital. "At the other end of the scale, like 100 ... would be one of those addresses where it would be most likely to happen, some sort of loss at the delivery point," Robinson said.

Powered by artificial intelligence, UPS Capital's DeliveryDefense analyzes address characteristics and generates a 'Delivery Confidence Score' for each address. If an address produces a low score, the merchant can then recommend in-store collection or a UPS pick-up point to the package recipient.
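
For a rough sense of how a merchant's fulfillment system might act on such a score, here is a minimal Python sketch. The score lookup, the threshold, and the recommendation wording are illustrative assumptions, not part of UPS's actual DeliveryDefense API.

```python
# Illustrative sketch only: the scoring lookup stands in for whatever
# integration a merchant has with a confidence-scoring service.
def get_confidence_score(address: str) -> int:
    """Hypothetical lookup returning a score on the one-to-1,000 scale."""
    # A real integration would call the scoring API; here we fake a result.
    return 250 if "APT" in address.upper() else 900

def delivery_recommendation(address: str, low_cutoff: int = 400) -> str:
    """Map a confidence score to a shipping option (the cutoff is an assumption)."""
    score = get_confidence_score(address)
    if score >= low_cutoff:
        return f"Score {score}: ship to the door as usual."
    return f"Score {score}: suggest in-store collection or a UPS pickup point."

print(delivery_recommendation("123 Main St"))
print(delivery_recommendation("Apt 4B, 55 Elm St"))
```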

The initial version was designed to integrate with the existing software of major retailers through the API; a beta test has been run with Costco Wholesale in Colorado. The company declined to provide information related to the Costco collaboration. Costco did not return a request for comment.

DeliveryDefense, said Robinson, is "a decent way for merchants to help make better decisions about how to ship packages to their recipients."

To meet the needs of more merchants, a web-based version is being launched for small- and medium-sized businesses on Oct. 18, just in time for peak holiday shipping season.

UPS says the decision about delivery options, made to mitigate potential issues and enhance the customer experience, will ultimately rest with the individual merchant, who will decide whether and how to address any delivery risk, including, for example, insuring the shipment or shipping to a store location for pickup.

UPS already offers its Access Points program, which lets consumers have packages shipped to Michaels and CVS locations to ensure safe deliveries.

UPS isn't alone in fighting porch piracy.

Among logistics competitors, DHL relies on one of the oldest methods of all: a "signature first" approach to deliveries in which delivery personnel are required to knock on the recipient's door or ring the doorbell to obtain a signature to deliver a package. DHL customers can opt to have shipments left at their door without a signature, and in such cases, the deliverer takes a photo of the shipment to provide proof for delivery. A FedEx rep said that the company offers its own picture proof of delivery and FedEx Delivery Manager, which lets customers customize their delivery preferences, manage delivery times and locations, redirect packages to a retail location and place holds on packages.

Amazon has several features to help ensure that packages arrive safely, such as its two- to four-hour estimated delivery window "to help customers plan their day," said an Amazon spokesperson. Amazon also offers photo-on-delivery, which provides visual delivery confirmation, and key-in-garage delivery, which lets eligible Amazon Prime members receive deliveries in their garage.

Amazon has also been known for its attempts to use new technology to help prevent piracy, including its Ring doorbell cameras; the gadget maker was acquired by the retail giant in 2018 for a reported $1 billion.

Camera images can be important when filing police reports, according to Courtney Klosterman, director of communications for insurer Hippo. But the technology has done little to slow porch piracy, according to some experts who have studied its usage.

"I don't personally think it really prevents a lot of porch piracy," said Ben Stickle, a professor at Middle Tennessee State University and an expert on package theft.

Recent consumer experiences, including the iPad theft example in Virginia, suggest criminals may not fear the camera. Last month, Julie Litvin, a pregnant woman in Central Islip, N.Y., watched thieves make off with more than 10 packages, so she installed a doorbell camera. She quickly got footage of a woman stealing a package from her doorway after that. She filed a police report, but said her building's management company didn't seem interested in providing much help.

Stickle cited a study he conducted in 2018 that showed that only about 5% of thieves made an effort to hide their identity from the cameras. "A lot of thieves, when they walked up and saw the camera, would simply look at it, take the package and walk away anyway," he said.

SafeWise data shows that six in 10 people said they'd had packages stolen in 2022. Rebecca Edwards, security expert for SafeWise, said this reality reinforces the view that cameras don't stop theft. "I don't think that cameras in general are a deterrent anymore," Edwards said.

The increase in packages being delivered has made them more enticing to thieves. "I think it's been on the rise since the pandemic, because we all got a lot more packages," she said. "It's a crime of opportunity, the opportunity has become so much bigger."

Edwards said that the two most-effective measures consumers can take to thwart theft are requiring a signature to leave a package and dropping the package in a secure location, like a locker.

Large lockboxes start at around $70, and the most sophisticated can run into the thousands of dollars.

Stickle recommends a lockbox to protect your packages. "Sometimes people will call and say, 'Well, could someone break into the box?' Well, yeah, potentially," Stickle said. "But if they don't see the item, they're probably not going to walk up to your house to try and steal it."

There is always the option of leaning on your neighbors to watch your doorstep and occasionally sign for items. Even some local police departments are willing to hold packages.

The UPS AI comes at a time of concerns about rapid deployment of artificial intelligence, and potential bias in algorithms.

UPS says that DeliveryDefense relies on a dataset derived from two years' worth of domestic UPS data, encompassing an extensive sample of billions of delivery data points. Data fairness, a UPS spokeswoman said, was built into the model, with a focus "exclusively on delivery characteristics," rather than on any individual data. For example, in a given area, one apartment complex has a secure mailroom with a lockbox and chain of custody, while a neighboring complex lacks such safeguards, making it more prone to package loss.

But the UPS AI is not free. The API starts at $3,000 per month. For the broader universe of small businesses that are being offered the web version in October, a subscription service will be charged monthly starting at $99, with a variety of other pricing options for larger customers.

View original post here:

AI is policing the package theft beat for UPS as 'porch piracy' surge continues across U.S. - CNBC

Read More..

Ray Dalio says AI will greatly disrupt in our lives within a yearyou should be both excited and scared of it – CNBC

Billionaire investor Ray Dalio is sure that artificial intelligence will soon be a "great disruptor" in all of our lives for both better and worse.

AI will help people make strides in productivity, education, healthcare and even usher in a three-day workweek, Dalio said on Tuesday at Fast Company's Innovation Festival 2023. On the other hand, it'll likely "disrupt jobs" and be a cause of "argument" for employees and legislators who support halting or slowing down AI's evolution, he said.

"All these changes are going to happen in the next five years," Dalio, the founder of hedge fund giant Bridgewater Associates, added. "And when I say [that], I don't mean five years from now. I mean that you're going to see [changes] next year ... the next year, [even bigger] changes. It's all going to change very fast."

Some developments are already in motion. ChatGPT has swiftly exceeded most people's expectations, passing Wharton MBA exams and allegedly helping someone win the lottery less than a year after its November 2022 launch.

Job disruptions may also be underway: As more than 100,000 actors strike for better wages, the Alliance of Motion Picture and Television Producers (AMPTP) is lobbying to replace some of them with artificial intelligence.

The trend could expand to other industries soon. Forty-nine percent of U.S. CEOs and C-suite executives say their current workforce's skills won't be relevant by 2025, according to a survey from online education platform edX published on Tuesday.

In the same survey, executives said they're already trying to hire AI-savvy employees, with 87% citing that effort as a struggle. That could open up a lane of opportunity for workers, who can learn and use AI skills to make some extra cash.

"There are many online learning opportunities to understand how AI works, which then could help [someone] possibly become an AI tutor, or to do some AI training to pass it on to the next generation," Susan Gonzales, CEO and founder of nonprofit AIandYou, told CNBC Make It in July.

Just about everyone, from entrepreneurs and freelancers to full-time office workers, could stand to benefit from learning more about AI, Gonzales said.

Whether you're excited, curious or flat-out scared, "now would be the time to increase your knowledge," she added.

DON'T MISS: Want to be smarter and more successful with your money, work & life? Sign up for our new newsletter!

Want to earn more and land your dream job? Join the free CNBC Make It: Your Money virtual event on Oct. 17 at 1 p.m. ET to learn how to level up your interview and negotiating skills, build your ideal career, boost your income and grow your wealth. Register for free today.

Original post:

Ray Dalio says AI will greatly disrupt in our lives within a yearyou should be both excited and scared of it - CNBC

Read More..

SAS unveils plans to add generative AI to analytics suite – TechTarget

Four months after committing to invest $1 billion in advanced analytics and AI, longtime BI vendor SAS Institute Inc. unveiled how it plans to make generative AI part of that investment.

SAS's May commitment to spend $1 billion on developing advanced analytics and AI capabilities marked the second time the vendor revealed such plans. The first was in 2019, and over the next few years, the vendor used the allocated funds to overhaul its Viya platform.

SAS re-architected Viya in 2020 to make it fully cloud native and added augmented intelligence capabilities such as natural language processing, computer vision and predictive analytics.

In addition, the vendor built industry-specific versions of its platform.

Those vertical editions are now the focal point of SAS' second $1 billion investment in advanced analytics and AI. They are the vehicles through which the vendor plans to incorporate generative AI.

SAS, based in Cary, N.C., unveiled its generative AI strategy on Sept. 12 during Explore, a user conference held in Las Vegas. Its generative AI capabilities are now in private preview.

While Viya is available to customers as a general-purpose analytics platform they can tailor to suit their needs, SAS also offers a variety of industry-specific versions of its tools.

Industries served by editions of SAS's platform range from agriculture to manufacturing and, among others, include banking, education, healthcare, retail and consumer goods, sports, and utilities.

In addition, there are versions of Viya tailored for topics such as fraud and security, marketing and risk management.

In May, SAS said its plan for advanced analytics and AI is to develop additional tailored versions of its tools and upgrade those that have already been built.

At the time, however, although many of its competitors had already unveiled their plans for generative AI, the vendor did not reveal an intent to incorporate generative AI as part of its new $1 billion allocation.

Instead, SAS executive vice president and CIO Jay Upchurch said the vendor was taking a cautious approach to generative AI given concerns about the accuracy and security of large language models (LLMs) trained on public data.

Now, SAS has revealed that the core of its initial approach to generative AI will be to integrate third-party LLM technology with its existing industry-specific tools.

Also part of its generative AI strategy is the use of generative adversarial networks (GANs) to create synthetic data and the application of natural language processing capabilities to digital twins.

GANs can be used to reflect real-world environments and train generative AI models while simultaneously protecting the privacy and security of an organization's real data. Natural language interactions with digital twins, meanwhile, enable more efficient scenario planning to understand what actions to take under various circumstances.
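
As a rough illustration of the synthetic-data idea, the sketch below trains a tiny GAN in Python with PyTorch on a made-up two-column dataset standing in for sensitive records. It is not SAS's implementation; the data, network sizes, and training settings are all assumptions chosen to keep the example small.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for sensitive tabular records (two numeric columns).
real_data = torch.randn(1024, 2) * torch.tensor([5.0, 1.0]) + torch.tensor([50.0, 3.0])

# Generator maps random noise to a fake record; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: push real records toward label 1, generated toward 0.
    fake = G(torch.randn(128, 8)).detach()
    real = real_data[torch.randint(0, len(real_data), (128,))]
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label generated records as real.
    g_loss = bce(D(G(torch.randn(128, 8))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic records that mimic the real distribution without exposing original rows.
synthetic = G(torch.randn(5, 8)).detach()
print(synthetic)
```

The point is that the generator ends up producing records that resemble the real distribution, so downstream models can be developed and tested without the original rows ever leaving the organization.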

Not surprisingly given its hesitation in May to unveil plans for generative AI, SAS's initial approach to generative AI is a cautious one, according to Doug Henschen, an analyst at Constellation Research.

Rather than unveil an entirely new environment for generative AI development as Domo and Qlik have, or acquire generative AI specialists as Databricks and Snowflake have, SAS's initial plans instead center on combining generative AI with existing capabilities.

"SAS is being characteristically conservative on generative AI developments, highlighting existing investments in synthetic data generation and digital twin simulations, and pointing to integrations and private-preview experimentation with third-party large language models," Henschen said.

That measured approach, however, is not unusual for SAS and may be something the vendor's customers appreciate, he continued.

"SAS has long been conservative and that seems to appeal to many of its risk-averse customers in banking, insurance, healthcare, manufacturing and other industries," Henschen said. "I've seen a lot of general-purpose generative AI capabilities introduced. ... SAS hasn't jumped on that bandwagon."

Some of the key benefits of generative AI result from its improvement of natural language processing, enabling true freeform natural interaction with data rather than requiring users to phrase queries in specific ways that the tools would otherwise fail to understand.

Because LLMs have vast vocabularies and can understand natural language, they have the potential to make trained data workers more efficient by reducing the amount of code they need to write and open analytics to more business users by lessening the amount of training needed to use BI platforms.

SAS's generative AI plans include improved NLP so that users can be more efficient by asking questions of their data and receiving responses in natural language, according to Bryan Harris, the vendor's chief technology officer.

But SAS also wants to apply that improved NLP and other generative AI capabilities to address distinct circumstances, which is why the vendor is taking an industry-specific approach to training language models.

"We're looking at generative AI from an industry perspective because there are more concrete use cases to apply it to," Harris said. "Customers are asking us how they can apply generative AI to their environment, and that comes to a targeted industry use case. We think it's better to focus this way because it leads to measurable output."

SAS has a longstanding partnership with Microsoft. As the vendor develops its generative AI capabilities, it is using models from Microsoft Azure OpenAI as building blocks, to which it can then add domain-specific data to train the models.

In May, however, SAS wasn't yet ready to start building generative AI capabilities due to security and accuracy concerns.

Harris noted that SAS serves customers in banking, healthcare, life sciences and other highly regulated industries in which data security and accuracy are critical. Before SAS was willing to add generative AI and language model capabilities, it wanted to figure out how to ensure the security of customers' data and reduce the risk of AI models delivering incorrect query responses.

Microsoft's Azure OpenAI provides an environment where SAS can protect customers' data, according to Harris. SAS' data lineage capabilities, meanwhile, enable users to understand whether an AI response can be trusted.

"We needed to see the cloud architecture and the maturity in that to emerge such that we could have a confident conversation with a customer saying that they don't have to worry about data leakage," Harris said. "We have assurances for all that through our partnership with Microsoft and its infrastructure. Second, we needed to see accuracy. We don't have the luxury of being right only sometimes."

Beyond generative AI plans, SAS unveiled Viya Workbench and the SAS App Factory, new software-as-a-service development environments in Viya now in preview with general availability planned for early 2024.

Viya Workbench is designed to help developers quickly get started building AI and machine learning models using code. Developers can use one of three coding languages -- Python, R or SAS's own language -- to build and train their analytics models while Workbench provides a cloud-native, efficient and secure environment.

Because it's a SaaS tool, it provides developers with an environment that takes minutes to start using rather than requiring hours or days to install and deploy, according to Harris.

The SAS App Factory, meanwhile, provides prebuilt analytics and AI applications that automate the setup and integration of a cloud-native ecosystem built with React, the open source programming language TypeScript and the PostgreSQL database.

Using the prebuilt tools -- the first two of which are the SAS Energy Forecasting Cloud and an application developed by Cambridge University Hospitals to improve health care outcomes -- customers can customize and deploy AI-driven applications designed to address specific needs.

The significance of both new services is the potential for increased efficiency, according to Henschen.

"The coming SAS Viya Workbench and SAS App Factory SaaS services promise to accelerate the development of AI- and ML-based applications," he said.

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.

View post:

SAS unveils plans to add generative AI to analytics suite - TechTarget

Read More..

How the Human Element Balances AI and Contributor Efforts for … – Appen

We are committed to delivering dependable solutions to power artificial intelligence applications, and our Crowd plays a crucial role in accomplishing this objective. With a global community of over one million contributors, our diverse Crowd provides invaluable feedback on our clients' AI models. Their collective expertise enhances operational efficiency and customer satisfaction, making them indispensable to our business success.

Given the significance of our Crowd, it is vital to consistently attract top-tier contributors who can provide quality feedback on our clients' models. To achieve this, we have implemented state-of-the-art machine learning and statistical models that quantify essential contributor traits such as behavior, reliability, and commitment. These advanced models offer crucial insights to our recruiting and Crowd management teams, enabling them to streamline processes, assign relevant tasks to the most qualified contributors, and meet our customers' talent requirements more effectively than ever before.

The challenge at hand is to identify the most skilled contributors for a specific task on a large scale. If our work at Appen involved only a limited number of AI models and a small group of individuals providing feedback, it would be a straightforward task to determine which contributors should receive priority for specific tasks. However, the reality is that we are often concurrently managing numerous projects for a single client that require extensive feedback from a diverse range of contributors. To effectively serve our clients, we must efficiently oversee hundreds of thousands of people across global markets and make dynamic decisions regarding the prioritization of their unique skills. This is where the field of data science comes into play, enabling us to navigate this complex landscape.

We are currently developing a robust model to evaluate contributors based on their profile information, historical behaviors, and business value. This model generates a score to assess their suitability for specific projects. By implementing a precise and logical scoring system, we empower our operations teams to efficiently screen, process, and support our contributors.

Our primary goal is to achieve high accuracy and efficiency while working within limited time and resources. Here's how our data-driven system will assist us in making well-informed decisions regarding contributor management and recruitment:

The result? Streamlined project delivery and an exceptional experience for our contributors and clients.

Having acquired a comprehensive grasp of our overarching strategy, let's now delve into a more intricate exploration of the technology's inner workings. We'll explore the data and operational procedures that are poised to revolutionize our approach to contributor management and recruitment.

1. Building a solid foundation: constructing the feature store

To ensure a thorough representation of contributors, we construct a feature store. This hub serves as an organized repository for capturing vital information related to their readiness, reliability, longevity, capacity, engagement, lifetime value, and other quality assessment signals. By generating detailed profiles, this powerful store enables us to precisely evaluate the quality of contributors.
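
As a toy illustration of what one profile in such a store might look like, the Python sketch below defines a contributor feature record keyed by ID. The field names mirror the traits listed above, but the structure and values are assumptions rather than Appen's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ContributorFeatures:
    """One row in a hypothetical contributor feature store."""
    contributor_id: str
    locale: str
    readiness: float              # e.g., fraction of onboarding steps completed
    reliability: float            # e.g., historical task-acceptance rate
    longevity_days: int           # days since first active task
    weekly_capacity_hours: float  # self-reported or observed availability
    engagement: float             # e.g., recent activity rate
    lifetime_value: float         # business value attributed to the contributor

# The "store" here is just a dict keyed by contributor ID.
feature_store = {
    "c-001": ContributorFeatures("c-001", "en-US", 0.9, 0.95, 420, 20.0, 0.8, 1300.0),
    "c-002": ContributorFeatures("c-002", "de-DE", 0.6, 0.80, 35, 10.0, 0.5, 150.0),
}

print(asdict(feature_store["c-001"]))
```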

2. Addressing the cold start challenge

We acknowledge that newly registered contributors present the unique challenge of onboarding and evaluation. To overcome the potential limitations of a cold start, we leverage the collective knowledge of contributors within the same locales. By approximating descriptions based on statistically aggregated group data, we ensure inclusivity and extend our reach to a diverse pool of talent.
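
A minimal sketch of that fallback, assuming made-up locale-level history: when a new contributor has no record of their own, approximate their features with the median values of established contributors in the same locale.

```python
from statistics import median

# Historical feature values for established contributors, grouped by locale (toy data).
locale_history = {
    "en-US": {"reliability": [0.95, 0.90, 0.88], "engagement": [0.8, 0.7, 0.9]},
    "de-DE": {"reliability": [0.80, 0.85],       "engagement": [0.5, 0.6]},
}

def cold_start_profile(locale: str) -> dict:
    """Approximate a brand-new contributor's features from locale-level medians."""
    group = locale_history.get(locale, {})
    return {feature: median(values) for feature, values in group.items()}

# A newly registered contributor in Germany gets the German group's medians.
print(cold_start_profile("de-DE"))   # {'reliability': 0.825, 'engagement': 0.55}
```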

3. Choose, Apply, and Refine: Unleashing the Power of Algorithms

At Appen, we use many ranking heuristics and algorithms to evaluate our data. Among the most effective types are multiple-criteria decision-making algorithms. This lightweight yet powerful methodology comprehensively handles scores, weights, correlations, and normalizations, eliminating subjectivity and providing objective contributor assessments.

The following diagram illustrates the high-level procedures of how multiple-criteria decision-making algorithms solve a ranking and selection problem with numerous available options.
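
The same high-level procedure can also be sketched in code. The Python example below implements a simple weighted-sum variant of multiple-criteria decision-making: normalize each criterion across candidates, apply weights, and rank. The criteria, weights, and candidate data are illustrative assumptions.

```python
# Toy candidate pool: each contributor is scored on several criteria (higher is better).
candidates = {
    "c-001": {"reliability": 0.95, "engagement": 0.8, "capacity": 20.0},
    "c-002": {"reliability": 0.80, "engagement": 0.5, "capacity": 35.0},
    "c-003": {"reliability": 0.90, "engagement": 0.9, "capacity": 10.0},
}
weights = {"reliability": 0.5, "engagement": 0.3, "capacity": 0.2}

def min_max_normalize(values):
    """Rescale a list of raw criterion values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

# Normalize each criterion across candidates, then combine with weights.
ids = list(candidates)
normalized = {}
for criterion in weights:
    column = min_max_normalize([candidates[i][criterion] for i in ids])
    for i, value in zip(ids, column):
        normalized.setdefault(i, {})[criterion] = value

scores = {i: sum(weights[c] * normalized[i][c] for c in weights) for i in ids}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking, scores)
```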

4. Model training and experimentation: tailoring to unique business requirements

Considering our diverse range of use cases, recruiting and crowd management teams often require different prioritizations based on specific business needs. We adopt a grid search approach to model training, exhaustively exploring all possible combinations of scoring, weighting, correlation, and normalization methods. This process implicitly learns the optimal weights for input features, ensuring a tailored approach to each unique business use case.
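
A stripped-down sketch of that grid search follows, assuming a handful of candidate weightings, normalizations, and scoring methods plus a dummy evaluation function; in practice the objective would be measured against historical business outcomes.

```python
from itertools import product

# Candidate configuration axes (toy examples).
weight_options = [
    {"reliability": 0.5, "engagement": 0.5},
    {"reliability": 0.7, "engagement": 0.3},
    {"reliability": 0.3, "engagement": 0.7},
]
normalizations = ["min_max", "z_score"]
scorings = ["weighted_sum", "weighted_product"]

def evaluate(weights, normalization, scoring) -> float:
    """Stand-in objective: how well this configuration predicts a business outcome
    (e.g., contributor retention) on historical data. Here it is just a dummy value."""
    return weights["reliability"] * (1.2 if scoring == "weighted_sum" else 1.0) \
           + (0.05 if normalization == "min_max" else 0.0)

best_config, best_score = None, float("-inf")
for weights, normalization, scoring in product(weight_options, normalizations, scorings):
    score = evaluate(weights, normalization, scoring)
    if score > best_score:
        best_config, best_score = (weights, normalization, scoring), score

print("Best configuration:", best_config, "with objective", round(best_score, 3))
```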

5. Simulating A-B testing: choosing the best model candidates

To select the models that best align with our clients business use cases, we conduct rigorous A-B testing. By simulating the effects of new model deployments and replacements, we compare different versions of the experiment group against a control group. We meticulously analyze contributor progress, measuring the count and percentage of contributors transitioning between starting and ending statuses. This data-driven approach helps us identify the model candidates that yield the most significant improvements over our current baseline.
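
In code, such a simulation can be as simple as comparing status-transition rates between a control group served by the current model and an experiment group served by the candidate, as in the hypothetical sketch below (all data fabricated for illustration).

```python
# Each group maps contributor IDs to (starting_status, ending_status) after the
# simulated period.
control = {"c-1": ("screened", "active"), "c-2": ("screened", "dropped"),
           "c-3": ("screened", "active"), "c-4": ("screened", "dropped")}
experiment = {"c-5": ("screened", "active"), "c-6": ("screened", "active"),
              "c-7": ("screened", "active"), "c-8": ("screened", "dropped")}

def transition_rate(group, start="screened", end="active") -> float:
    """Share of contributors who moved from `start` to `end` status."""
    eligible = [1 for s, e in group.values() if s == start]
    converted = [1 for s, e in group.values() if s == start and e == end]
    return len(converted) / len(eligible) if eligible else 0.0

lift = transition_rate(experiment) - transition_rate(control)
print(f"Control: {transition_rate(control):.0%}, "
      f"Experiment: {transition_rate(experiment):.0%}, Lift: {lift:+.0%}")
```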

6. Interpretation and validation: understanding the models

Once we have a set of predictions and comparisons, we dive deep into understanding and validating the models. We review model parameters, including weights, scores, correlations, and other modeling details, alongside our business operation partners. Their valuable insights and expertise ensure that the derived parameters align with operational standards, allowing us to make informed decisions and provide accurate assessments.

7. Expanding insights: additional offerings by ML models

Our machine learning (ML) models not only provide scores and rankings but also enable us to define contributor quality tiers. By discretizing scores and assigning quality labels such as Poor, Fair, Good, Very Good, and Exceptional, we offer a consistent and standardized interpretation of quality measurements. This enhancement reduces manual efforts, clarifies understanding, and improves operational efficiency.
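
A minimal sketch of that discretization step, with the score scale and cut points chosen purely for illustration:

```python
# Cut points on an assumed 0-to-1,000 score scale; both scale and thresholds are illustrative.
TIERS = [(900, "Exceptional"), (750, "Very Good"), (600, "Good"), (400, "Fair")]

def quality_tier(score: float) -> str:
    """Map a continuous contributor score to a standardized quality label."""
    for cutoff, label in TIERS:
        if score >= cutoff:
            return label
    return "Poor"

for s in (950, 810, 640, 480, 120):
    print(s, "->", quality_tier(s))
```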

Contributor recruitment and management are complex processes, but through data-driven decisions and intelligent resource allocation, we're transforming the business landscape. By prioritizing relevant contributors based on their qualities, we optimize project delivery, create delightful customer experiences, and achieve a win-win-win outcome for Appen, our valued customers, and our dedicated contributors.

Together, let's unlock the power of AI for good and shape a future where technology drives positive change. Join us on this exciting journey as we build a better world through AI.

View original post here:

How the Human Element Balances AI and Contributor Efforts for ... - Appen

Read More..