Researchers are developing artificial intelligence that will detect … – Sciencenorway

Artificial intelligence (AI) can be useful in healthcare.

AI can help with interpreting images and free up time for the radiologists.

A new study from Sweden showed, for example, that AI-supported mammography led to 20 per cent more cancer cases being detected, according to NRK (link in Norwegian).

The EU research project AI-Mind focuses on artificial intelligence and health.

The goal is to be able to identify who in the group with mild cognitive impairment is at high risk of developing dementia. They could be identified several years before a diagnosis is made today.

The research is led by Ira Haraldsen at Oslo University Hospital.

People with mild cognitive impairment have begun to experience that their memory is failing and have some problems with reasoning and attention. But that does not necessarily mean they have, or will develop, dementia.

"The background for our project is a worldwide clinical need. Currently, we are not able to predict your risk of developing dementia if you are affected by mild cognitive impairment," Ira Haraldsen said during a recent event at Arendalsuka, an annual political festival in Norway.

She believes that the dementia diagnosis comes too late. It comes after clear symptoms have appeared.

"By then, you can alleviate symptoms, but you can't affect the course of the disease. What we want is to shift the diagnosis into another time window," she said.

The research group plans to create a tool based on artificial intelligence for screening, or mass examination, of the population, Haraldsen explains in an interview with sciencenorway.no. Screening involves examining healthy people to detect disease or precursors to disease before symptoms appear.

"The dream is population-based screening of, for example, all 55-year-olds," she said.

If it turns out that you are at high risk, you will be followed up, and all risk factors contributing to dementia should be corrected, according to Haraldsen.

Ira Haraldsen is a psychiatrist and researcher at Oslo University Hospital. (Photo: Tone Herregården)

The study will include 1,000 participants from Norway, Finland, Italy, and Spain.

Participants from Norway and Italy have already been recruited. There are still some missing from Spain and Finland.

The participants are between 60 and 80 years old and have mild cognitive impairment.

"What is interesting is that among people with mild cognitive impairment, 50 per cent develop dementia and 50 per cent do not. Doctors today don't know which group you belong to," Haraldsen said.

Researchers in AI-Mind aim to separate these two groups. Who is on the way to developing dementia, and who can be reassured?

Karin Persson is a postdoctoral fellow at the Norwegian National Centre for Ageing and Health and is researching dementia.

She is not part of the project and writes in an email to sciencenorway.no that AI-Mind is one of several large projects now trying to find effective ways to diagnose cognitive impairment and dementia early on.

"Common to these new projects is the use of artificial intelligence and a focus on developing methods that can predict which people with early symptoms will develop dementia," she said.

The difference between the various projects is the variables they input into the models: whether it's cognitive tests, EEG, MRI images, genetic data, biomarkers from spinal fluid, blood, or other imaging diagnostics, Persson explained.

"I think artificial intelligence is here to stay. I believe that this is the way forward for effective diagnostics in this field," she writes.

Participants in AI-Mind will take part in four studies over two years.

An electroencephalography (EEG) examination will then be conducted. This is an examination where a cap with electrodes measures electrical activity in the brain.

Blood samples are taken, and participants take a test regarding their ability to think and remember.

Over the two years, researchers will see who gets worse and who stays the same or improves.

Two algorithms will be trained to predict this. One is trained on EEG examinations. It analyses how different areas of the brain communicate with each other.

It has been known for a long time that this changes when dementia develops, Haraldsen explains.

An example of EEG. (Photo: Svitlana Hulko / Shutterstock / NTB)

Haraldsen compares what happens in the brain to a football team.

"When you're very good at football, the ball is constantly passed from one player to another, back and forth. Then suddenly, Haaland storms towards the goal, and then manages to score," she said at the event. "That's how the brain is also constructed. It works all the time, whether we are asleep or awake. All areas are chaotically in contact all the time. Then a task comes along, and we do it."

You can see the difference between a football team that collaborates well and one that doesn't. In the latter, maybe only two players pass to each other and exclude the others.

"This is something that happens in the early stages of dementia and mild cognitive impairment. Some areas communicate too frequently with each other, and others are given lower priority," Haraldsen said.

Researchers are testing two types of artificial intelligence, which Haraldsen describes as classical machine learning and deep learning.

They are asked to divide the participants into two groups based on EEG examinations: those who will deteriorate and those who will not.

The classical machine learning algorithm is asked to look for characteristics that researchers know are indications of early-onset dementia.

The deep learning model has more freedom and finds its own patterns. Many experts believe that this is the future of artificial intelligence in healthcare, Haraldsen explained. But it is more challenging to understand what the machine is doing and how it arrives at the answer.
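
To make the contrast concrete, here is a purely illustrative sketch of the two approaches, assuming scikit-learn and invented stand-in data; it is not AI-Mind's actual pipeline, and the features and labels are fabricated for the example.

```python
# Illustrative only -- not AI-Mind's code. It contrasts a "classical" model over
# hand-crafted EEG connectivity features with a small neural network that is
# free to find its own combinations of those features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 participants, 64 numbers each summarising how
# strongly pairs of brain regions communicate (e.g. coherence between channels).
X = rng.normal(size=(200, 64))
# Roughly half the cohort labelled as later deteriorating, mirroring the 50/50 split above.
y = (X[:, :5].mean(axis=1) + 0.5 * rng.normal(size=200) > 0).astype(int)

# "Classical machine learning": a transparent model over known risk markers.
classical = LogisticRegression(max_iter=1000)

# "Deep learning" stand-in: a small neural net that learns its own patterns.
deep = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)

for name, model in [("classical", classical), ("deep", deep)]:
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name} model, 5-fold accuracy: {accuracy:.2f}")
```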

Researchers are comparing whether humans and machines come to the same result.

Eventually, a new artificial intelligence will use the EEG analysis, along with the results from blood tests and the mental test, to say who is at high risk of developing dementia.

It is known that changes in blood tests can be measured several years before a patient receives a dementia diagnosis. Blood tests that can reveal the beginning of the disease have already been successfully tested.

The researchers believe that the AI they are developing can uncover early-stage dementia two to three years before a diagnosis is usually made. Later, it may be possible to push the time window even a couple of years earlier, Haraldsen believes.

The artificial intelligence is planned to be ready for use in 2026. But Haraldsen points out that part two of the study will be necessary for the algorithm to be approved for the market.

The results from the artificial intelligence must be compared to the most reliable way to diagnose dementia, which is to take a spinal fluid sample and perform imaging diagnostics with MRI or PET, Haraldsen explains.

Today there is no treatment that can cure dementia and Alzheimer's disease. Is there then any advantage to being diagnosed earlier than today?

Karin Persson at the National Centre for Ageing and Health answers:

There is much ongoing research on the development of disease-modifying drugs, that is, medicines that do not only affect the symptoms of dementia but can actually stop the disease development.

This especially applies to Alzheimer's disease, the most common cause of dementia worldwide.

"If these types of medications are to be effective, it will be crucial to intervene early in the disease process, before the brain is too affected and damaged," she says.

There is a reason why there is a focus on early diagnosticsnow.

At the same time, there may be ethical challenges with giving an early diagnosis, especially in cases where the disease cannot be treated. People who are working with this are concerned about these ethical challenges, Persson says.

The diagnosis must come at the right time. However, there are treatments relevant to dementia other than medication, even if we currently have no cure to stop it, she says.

Patients who notice changes in their memory and thinking, i.e., their cognitive function, often want information about the cause, Persson continues.

But it is essential that we balance correctly and that ethical considerations are included in guidelines for diagnosis, she says.

There is a lot happening in the field when it comes to early diagnosis and medication.

Three new drugs have been approved in the USA (link in Norwegian). They are based on removing amyloid plaques, a protein that accumulates in the brain with Alzheimer's disease, Persson explains.

They are being assessed by the European Medicines Agency. However, the effects of the medications are relatively small, and the side effects are potentially serious, the researcher explains.

So far, the follow-up time in the studies has been relatively short, and it will be interesting to see how the patients fare over a longer period, she says.

Overall, these are not medications that will be given to all people with Alzheimer's disease. Disease stage in the patient, risk of side effects, expected effect, and price will be important factors, Persson explains.

Regarding early diagnosis, an essential development is methods for looking at markers in blood. There have been good results here, which can make early diagnosis easier.

Again, you have to have a clear thought about who will be tested when these methods become clinically available. Currently, they are used in research in Norway, with ethical principles in mind, Persson says.

Conflict of interest: Ira Haraldsen is chairman and co-founder of the company BrainSymph, a spin-off of the AI-Mind project.

Translated by Alette Bjordal Gjellesvik.

Read the Norwegian version of this article on forskning.no

1 Artificial Intelligence (AI) Stock to Buy Hand Over Fist Before It … – The Motley Fool

Nvidia's valuation has gone through the roof in 2023 thanks to a huge rally in the company's stock price. Shares of the chipmaker have shot up roughly 230% so far this year, driven primarily by the artificial intelligence (AI) arms race that has led to booming demand for its data center graphics cards that are being deployed for training AI models.

Nvidia now commands a price-to-sales (P/S) ratio of 36. Its price-to-earnings (P/E) ratio stands at a whopping 113. For comparison, Nvidia's average five-year sales and earnings multiples stand at 19 and 73, respectively, indicating just how big a premium the stock commands right now.

Those multiples may seem justified considering Nvidia's dominant position in the market for AI chips. After all, the semiconductor giant reportedly holds more than 80% of the AI chip market, according to third-party estimates. This massive market share is expected to drive the company's data center revenue to more than $31 billion in the current fiscal year. That would be more than double Nvidia's fiscal 2023 data center revenue of $15 billion.

Given that the company got 55% of its revenue from selling data center chips last year, the big surge in this segment that's anticipated this year could give a serious boost to Nvidia's business and help it justify its valuation. However, not everyone may be comfortable paying such a rich multiple for Nvidia's anticipated growth.

That's why now would be a good time to take a closer look at Advanced Micro Devices (AMD 0.44%), as potential developments in the semiconductor industry suggest that it could become a big beneficiary thanks to the growing demand for AI chips. Let's look at the reasons why.

Nvidia may have cornered a big share of the AI chip market, but Taiwan-based daily newspaper DigiTimes suggests that AMD could play a key role in this space. DigiTimes reports (via Tom's Hardware) that foundry giant Taiwan Semiconductor Manufacturing, popularly known as TSMC, has placed orders for additional tools that will be used in chip-on-wafer-on-substrate (CoWoS) packaging.

CoWoS is an advanced packaging process that allows TSMC to integrate high-bandwidth memory (HBM) -- which is used in AI servers to enable fast data transmission and greater storage capacity while keeping power consumption low -- with high-performance computing chips. The demand for this packaging solution is expected to grow between 30% and 40% in 2024, according to market research firm TrendForce.

This explains why TSMC is expected to increase its CoWoS capacity from 8,000 silicon wafers a month currently to 11,000 wafers a month by the end of the year. TSMC is expected to further increase its monthly CoWoS capacity to a range of 14,500 to 16,600 wafers a month by the end of 2024, which would be nearly double the current levels. These wafers are used to make integrated circuits such as graphics processing units (GPUs), central processing units (CPUs), and others.

Nvidia, for instance, reportedly produces around 60 of its A100 and H100 data center GPUs from each wafer packaged using CoWoS. So, if TSMC can double its CoWoS capacity by the end of next year to 16,000 wafers a month, companies like Nvidia and AMD may be able to manufacture around 960,000 AI GPUs a month. DigiTimes' sources indicate that TSMC's CoWoS shipments to AMD, following the launch of its MI300X accelerators later this year, could be half of those that it ships to Nvidia each quarter.

If that's indeed the case, AMD could be shipping around 320,000 data center GPUs a month compared to Nvidia's 640,000. In simpler words, this assumption suggests that AMD could corner a third of the data center GPU market. That could be huge for AMD given that it's currently a very small player compared to Nvidia in the market for AI chips.

AMD hasn't revealed any potential price at which it is going to launch its MI300X accelerators. The specs of the chip, however, indicate that it could give Nvidia stiff competition. The MI300X is reportedly going to be equipped with 192GB (gigabytes) of high-bandwidth memory, which is greater than the 120GB found on Nvidia's H100 data center GPU. This theoretically means that AMD's processor should be able to train bigger large-language models that power generative AI applications.

Now, each H100 GPU is reportedly priced at $30,000 or more. If AMD decides to undercut Nvidia and prices its AI accelerator at, let's say, even $20,000 a chip, it could generate $6 billion a month or $72 billion a year in revenue. There is no doubt that these numbers look very optimistic given that AMD is expected to generate just under $23 billion in revenue this year.
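
As a rough check, the chain of arithmetic behind those figures can be written out. Every input below is one of the article's own estimates (wafer capacity, GPUs per wafer, AMD's share of shipments, and the hypothetical $20,000 price), and the article rounds the result to roughly $6 billion a month and $72 billion a year.

```python
# Back-of-the-envelope arithmetic behind the figures above; all inputs are the
# article's estimates, not confirmed numbers.
wafers_per_month = 16_000      # projected TSMC CoWoS capacity by the end of 2024
gpus_per_wafer = 60            # reported data center GPUs per packaged wafer
amd_share = 1 / 3              # AMD assumed to ship half as many as Nvidia

total_gpus = wafers_per_month * gpus_per_wafer   # ~960,000 GPUs a month
amd_gpus = total_gpus * amd_share                # ~320,000 GPUs a month

price_per_chip = 20_000        # hypothetical MI300X price undercutting the H100
monthly_revenue = amd_gpus * price_per_chip      # ~$6.4 billion a month
annual_revenue = monthly_revenue * 12            # ~$77 billion a year

print(f"Total GPUs/month: {total_gpus:,.0f}, AMD share: {amd_gpus:,.0f}")
print(f"AMD revenue: ${monthly_revenue/1e9:.1f}B/month, ${annual_revenue/1e9:.0f}B/year")
```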

But at the same time, investors shouldn't forget that AI has helped give Nvidia's top line a massive boost, and the same is likely to continue in the coming years.

NVDA Revenue Estimates for Next Fiscal Year data by YCharts

So, it won't be surprising to see AMD getting a nice shot in the arm, and that would be true even if it corners a quarter of the revenue opportunity projected above. All this indicates that AMD stock could be set up for a big rally in the future thanks to AI, which is why investors should consider buying the stock right away.

AMD's 70% surge in 2023 has brought the stock's P/S ratio to 8, compared to just over 4 at the end of 2022. But that's still way cheaper than Nvidia's sales multiple. Also, AMD is trading at 39 times forward earnings, compared to Nvidia's multiple of 55. In all, investors are getting a relatively good deal on AMD right now considering how AI is likely to supercharge its business.

Use of Artificial Intelligence in Calling Activity Presents TCPA … – JD Supra

Artificial Intelligence (AI) is in the spotlight, and there are many eager to adopt such technology. For businesses that have incorporated or are seeking to incorporate AI into their processes, applicable legal restrictions and regulations are a consideration. Definitive AI-targeted laws are still developing, with government working groups evaluating how best to regulate such technology. In the meantime, the answer lies within existing statutory and regulatory frameworks.

When it comes to the use of AI in outbound calling, the Telephone Consumer Protection Act (TCPA) and its implementing regulations provide one such framework. The following are a few ways the TCPA may apply to the use of AI:

The foregoing is merely a nonexclusive list of the myriad ways AI could trigger additional compliance considerations and exposure in the context of telephone calls. Businesses seeking to adopt AI should consider the ways in which its use may trigger additional consumer protection obligations, impact the business's existing compliance program, and create additional exposure.

3 Artificial Intelligence (AI) Stocks With More Potential Than Any … – The Motley Fool

According to data from Grand View Research, the world's most popular cryptocurrency, Bitcoin, is projected to have a compound annual growth rate (CAGR) of 27% through 2030. Meanwhile, the artificial intelligence (AI) market is expected to have a CAGR of 36% in the same period.
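
For a sense of what those growth rates imply cumulatively, the compounding can be worked out directly. The seven-year window below (roughly 2023 through 2030) is an assumption, since the report's exact base year isn't given here.

```python
# Compound growth implied by the projected CAGRs, assuming ~7 years to 2030.
years = 7
for name, cagr in [("Bitcoin", 0.27), ("AI market", 0.36)]:
    multiple = (1 + cagr) ** years
    print(f"{name}: {cagr:.0%} CAGR -> roughly {multiple:.1f}x over {years} years")
# Bitcoin: ~5.3x, AI market: ~8.6x
```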

While other crypto options might offer more growth, it's hard to justify passing up the reliability of the tech market for the potential volatility that has become commonplace among cryptocurrencies.

The launch of OpenAI's ChatGPT last November kicked off a boom in AI and has attracted countless tech companies. It is expected to boost numerous sectors across tech, including cloud computing, healthcare, consumer products, education, and more. As a result, investing in the companies pushing the technology could be a way to enjoy significant gains over the long term.

Here are three artificial intelligence stocks with more potential than any cryptocurrency.

Advanced Micro Devices (AMD 0.44%) has a history of outperforming the crypto market. Its stock has climbed about 425% in the last five years, with Bitcoin's price rising 310% in the same period. Even since the start of 2023, AMD has offered more growth than the cryptocurrency.

The chipmaker is investing heavily in AI, pouring significant resources into developing hardware that will match market leader Nvidia's products.

In June, it debuted the MI300X, which the company calls its most powerful graphics processing unit (GPU). It remains to be seen which companies will use the MI300X, but AMD has lots of support since increased competition will bring down the cost of chips.

It also expanded its position in AI last year with the acquisition of Xilinx, which has developed software that uses AI accelerators. And that experience in the field led AMD to bring on Victor Peng, former president of Xilinx, as its head of AI strategy.

As a leading chipmaker, AMD has great possibilities in AI, with its growth history giving it more potential than any cryptocurrency, making its stock an attractive buy this month.

Amazon (AMZN 1.08%) has exciting prospects in artificial intelligence as the home of the world's largest cloud platform, Amazon Web Services (AWS). The company has made a major push into the technology this year by unveiling several new AI tools on that platform.

In June, the company debuted Bedrock, a service that helps clients build chatbots and image generators. Another new service, CodeWhisperer, is geared toward developers by producing code. And HealthScribe can transcribe meetings between doctors and patients.

These new services attracted thousands of customers to AWS, with Sony, Ryanair, and Sun Life Financial among the companies recently signing on to use Bedrock.

Amazon is further diversifying its position in AI by becoming one of the first cloud companies to venture into chip production. The company announced in June that it had developed two new chips, promising the best price-to-performance in the market. The tech giant's brand recognition and dominance in the cloud industry could attract many companies to its hardware.

Amazon's stock has increased by 61% since Jan. 1, outperforming Bitcoin and Ethereum. With expanding positions in multiple areas of AI, the company could offer far more gains over the long term.

Apple (AAPL 1.26%) holds leading market shares in smartphones, tablets, smartwatches, and headphones. This command over tech gives it the potential to become a leading driver in the public adoption of AI. Studies have proven consumers' preference for Apple's products, which will be the tools to get AI into the hands of millions of users worldwide.

In early August, Tim Cook said in an earnings call that an increase in research and development spending was driven primarily by work on generative AI. And Bloomberg reported in July that Apple has built its own framework for creating large language models and has used it to develop an AI chatbot that its engineers have dubbed Apple GPT.

The iPhone maker is gradually introducing AI-enabled features across its product lineup. In June, Apple announced a revamp to the iPhone's autocorrect that uses a language model to better predict users' texting styles. AirPods Pro will automatically turn off noise canceling once the wearer starts a conversation, a feature made possible by AI.

Apple is the world's most valuable company, becoming the first to achieve a market cap of $3 trillion earlier this year. The company's journey in AI isn't as far along as it is for companies like Microsoft or Amazon. However, that could mean its stock has more room to run. Meanwhile, its dominance in multiple areas of tech and vast resources could take it far with AI. And with that, Apple's stock is a far better bet than any cryptocurrency.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Dani Cook has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Amazon.com, Apple, Bitcoin, Ethereum, Microsoft, and Nvidia. The Motley Fool has a disclosure policy.

If I Had to Buy 1 Trillion-Dollar Artificial Intelligence Stock, This … – The Motley Fool

"A new computing era has begun."

That's a quote from Jensen Huang, the CEO of semiconductor giant Nvidia (NVDA -2.43%). His company has an estimated 90% market share in the data center chips designed to handle artificial intelligence (AI) workloads, an achievement that has driven its stock price to more than triple in 2023 (so far).

Nvidia just released its financial results for the fiscal 2024 second quarter (ended July 30), and it blew away all expectations -- including its own -- on the back of that dominant position in the AI chip space.

Earlier this year, the company joined tech giants Apple, Microsoft, Amazon, and Google parent Alphabet with a valuation of more than $1 trillion. While all of those companies are experimenting with AI in various capacities, none of them is as critical to the industry as Nvidia, so here's why it's my pick of the bunch.

The artificial intelligence boom of 2023 was ignited by OpenAI's online chatbot, ChatGPT. It was the first time investors and consumers could truly interact with AI at scale, after years spent listening to predictions about its ability to reshape the future. But OpenAI could never have reached this point without Nvidia; in 2016, Huang delivered the first AI supercomputer to the start-up, and it has been using Nvidia's semiconductor hardware ever since.

In fact, Huang has been nicknamed the "Godfather of AI" for his commitment to developing the technology when most other companies treated it like a futuristic dream. In a recent keynote address, he told the audience he "bet the company" on AI five years ago in 2018, by electing to reinvent the graphics chip and the software to go with it.

Fast forward to the here and now, and Nvidia's latest A100 and H100 data center chips are the gold standard for high-performance computing. They are sought-after by every major cloud services provider from Amazon Web Services to Microsoft Azure to Oracle Cloud Infrastructure, as these platforms seek to provide AI capabilities to their business customers.

Huang believes there is $1 trillion worth of existing data center infrastructure that needs to be upgraded to support accelerated computing and AI. The only real contender capable of competing with Nvidia in this space is Advanced Micro Devices, but its latest MI300 AI data center chips won't start shipping until the end of the year. Therefore, it's likely Nvidia will continue to dominate this market for the foreseeable future.

Three months ago, Nvidia delivered its financial results for the fiscal 2024 first quarter (ended April 30), and they shocked investors. The company issued a forecast for the second quarter suggesting it could generate $11 billion in revenue, which was $4 billion higher than Wall Street analysts had anticipated.

With the second quarter now in the books, it appears even that number was conservative, because Nvidia wound up delivering a whopping $13.5 billion in revenue. The figure marked an increase of 101% from the same quarter last year, and the majority of that growth came from the data center segment, which, of course, was driven by demand for AI chips.

Nvidia's data center segment brought in $10.3 billion in sales, up 171% year over year and far above analysts' estimates of $8 billion.

In the same quarter just two years ago, the company's data center segment generated just $2.3 billion in revenue and it was smaller than its gaming business -- that shows how rapidly the demand for AI chips is growing.

Now, Nvidia has issued another round of blockbuster guidance for the upcoming fiscal 2024 third quarter, telling investors it expects to deliver $16 billion in revenue. That's much higher than the $12.6 billion Wall Street was anticipating.

Plus, the strong Q2 top-line result led to an 843% year-over-year surge in Nvidia's net income (profit), to $6.2 billion, translating to $2.48 in earnings per share. As the company continues to scale its data center business, its gross profit margin should continue to improve, which will allow more cash to flow to its bottom line.

Make no mistake -- Nvidia stock is incredibly expensive by all traditional metrics. Based on the company's trailing-12-month non-GAAP (adjusted) earnings per share of $5.25, its stock trades at a price-to-earnings (P/E) ratio of 95.6. That's over 3 times more expensive than the Nasdaq-100 technology index, which features Nvidia's trillion-dollar peers, and trades at a P/E ratio of 30.1.

But the key here is Nvidia's growth. Investors will always pay a premium for a company that delivers triple-digit-percentage growth in revenue and earnings each quarter (year over year), because while its P/E ratio looks expensive today, it can appear very cheap when looking two, three, or even five years into the future.
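
A small worked example shows how that multiple compresses under sustained growth. The assumption of 100% annual earnings growth and a flat share price is purely illustrative, echoing the triple-digit growth rates mentioned above rather than any forecast.

```python
# How a trailing P/E compresses if earnings keep compounding while the share
# price stays flat. The 100% growth rate is an illustrative assumption only.
current_pe = 95.6
earnings_growth = 1.0   # hypothetical 100% per year
for years_ahead in (1, 2, 3):
    implied_pe = current_pe / (1 + earnings_growth) ** years_ahead
    print(f"P/E on earnings {years_ahead} year(s) out: {implied_pe:.1f}")
# ~47.8, ~23.9, ~12.0
```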

That brings me back to AI. This technological revolution is just beginning, and as I mentioned earlier, Jensen Huang is eyeing a data center opportunity worth $1 trillion, of which Nvidia currently has a 90% share. But that's just the hardware side.

On the software side, Cathie Wood's Ark Investment Management thinks $8 in revenue will be generated for every $1 in chips Nvidia sells, which could equate to $14 trillion in total revenue across the AI software industry by 2030. Nvidia has built advanced software platforms like Omniverse, which businesses can use to build digital twins of real-life assets for infrastructure investment planning. Then there's Drive, Nvidia's fully autonomous self-driving platform it's selling to leading automotive manufacturers like Mercedes-Benz.

While most other tech giants are using AI to accelerate their own businesses, Nvidia is the key facilitator. Without this company's hardware, most AI strategies are dead in the water. Let's be realistic -- Nvidia is unlikely to maintain a 90% market share forever, but I think it will remain the leading innovator in the AI space. Even if its market share halves over the next decade, there will likely be enough value creation across the industry for the company to continue delivering generous returns to investors.

For that reason, I think Nvidia has a much longer growth runway than any of its peers in the trillion-dollar club. Its stock has the potential to supercharge any portfolio over the long term, so investors should definitely consider owning it.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon.com, Apple, Microsoft, Nvidia, and Oracle. The Motley Fool has a disclosure policy.

In Reversal Because of A.I., Office Jobs Are Now More at Risk – The New York Times

The American workers who have had their careers upended by automation in recent decades have largely been less educated, especially men working in manufacturing.

But the new kind of automation, artificial intelligence systems called large language models like ChatGPT and Google's Bard, is changing that. These tools can rapidly process and synthesize information and generate new content. The jobs most exposed to automation now are office jobs, those that require more cognitive skills, creativity and high levels of education. The workers affected are likelier to be highly paid, and slightly likelier to be women, a variety of research has found.

"It's surprised most people, including me," said Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered A.I., who had predicted that creativity and tech skills would insulate people from the effects of automation. "To be brutally honest, we had a hierarchy of things that technology could do, and we felt comfortable saying things like creative work, professional work, emotional intelligence would be hard for machines to ever do. Now that's all been upended."

A range of new research has analyzed the tasks of American workers, using the Labor Department's O*Net database, and hypothesized which of them large language models could do. It has found these models could significantly help with tasks in one-fifth to one-quarter of occupations. In a majority of jobs, the models could do some of the tasks, found the analyses, including from Pew Research Center and Goldman Sachs.

For now, the models still sometimes produce incorrect information, and are more likely to assist workers than replace them, said Pamela Mishkin and Tyna Eloundou, researchers at OpenAI, the company and research lab behind ChatGPT. They did a similar study, analyzing the 19,265 tasks done in 923 occupations, and found that large language models could do some of the tasks that 80 percent of American workers do.

Yet they also found reason for some workers to fear that large language models could displace them, in line with what Sam Altman, OpenAI's chief executive, told The Atlantic last month: "Jobs are definitely going to go away, full stop."

The researchers asked an advanced model of ChatGPT to analyze the O*Net data and determine which tasks large language models could do. It found that 86 jobs were entirely exposed (meaning every task could be assisted by the tool). The human researchers said 15 jobs were. The job that both the humans and the A.I. agreed was most exposed was mathematician.
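
A toy sketch of how a task-level exposure table could be rolled up to flag "entirely exposed" occupations, in the spirit of the analysis described above; the miniature table is invented for illustration, while the real study worked from the O*Net database.

```python
# Illustrative only: roll task-level exposure flags up to the occupation level
# and list occupations where every task could be assisted by the tool.
import pandas as pd

tasks = pd.DataFrame({
    "occupation": ["mathematician", "mathematician", "dishwasher", "dishwasher"],
    "task":       ["prove theorems", "analyze data", "wash dishes", "stack plates"],
    "exposed":    [True, True, False, False],
})

share_exposed = tasks.groupby("occupation")["exposed"].mean()
fully_exposed = share_exposed[share_exposed == 1.0].index.tolist()
print(share_exposed)
print("Entirely exposed occupations:", fully_exposed)
```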

Just 4 percent of jobs had zero tasks that could be assisted by the technology, the analysis found. They included athletes, dishwashers and those assisting carpenters, roofers or painters. Yet even tradespeople could use A.I. for parts of their jobs like scheduling, customer service and route optimization, said Mike Bidwell, chief executive of Neighborly, a home services company.

While OpenAI has a business interest in promoting its technology as a boon to workers, other researchers said there were still uniquely human capabilities that were not (yet) able to be automated, like social skills, teamwork, care work and the skills of tradespeople. "We're not going to run out of things for humans to do anytime soon," Mr. Brynjolfsson said. "But the things are different: learning how to ask the right questions, really interacting with people, physical work requiring dexterity."

For now, large language models will probably help many workers be more productive in their existing jobs, researchers say, akin to giving office workers, even entry-level ones, a chief of staff or a research assistant (though that could signal trouble for human assistants).

Take writing code: A study of GitHub's Copilot, an A.I. program that helps programmers by suggesting code and functions, found that those using it were 56 percent faster than those doing the same task without it.

"There's a misconception that exposure is necessarily a bad thing," Ms. Mishkin said. After reading descriptions of every occupation for the study, she and her colleagues learned an important lesson, she said: "There's no way a model is ever going to do all of this."

Large language models could help write legislation, for instance, but could not pass laws. They could act as therapists: people could share their thoughts, and the models could respond with ideas based on proven regimens. But they do not have human empathy or the ability to read nuanced situations.

The version of ChatGPT open to the public has risks for workers: it often gets things wrong, can reflect human biases, and is not secure enough for businesses to trust with confidential information. Companies that use it get around these obstacles with tools that tap its technology in a so-called closed domain, meaning they train the model only on certain content and keep any inputs private.

Morgan Stanley uses a version of OpenAI's model made for its business that was fed about 100,000 internal documents, more than a million pages. Financial advisers use it to help them find information to answer client questions quickly, like whether to invest in a certain company. (Previously, this required finding and reading multiple reports.)

It leaves advisers more time to talk with clients, said Jeff McMillan, who leads data analytics and wealth management at the firm. The tool does not know about individual clients and any human touch that might be needed, like if they are going through a divorce or illness.

Aquent Talent, a staffing firm, is using a business version of Bard. Usually, humans read through workers' résumés and portfolios to find a match for a job opening; the tool can do it much more efficiently. Its work still requires a human audit, though, especially in hiring, because human biases are built in, said Rohshann Pilla, president of Aquent Talent.

Harvey, which is funded by OpenAI, is a start-up selling a tool like this to law firms. Senior partners use it for strategy, like coming up with 10 questions to ask in a deposition or summarizing how the firm has negotiated similar agreements.

"It's not, 'Here's the advice I'd give a client,'" said Winston Weinberg, a co-founder of Harvey. "It's, 'How can I filter this information quickly so I can reach the advice level?' You still need the decision maker."

He says it's especially helpful for paralegals or associates. They use it to learn, asking questions like "What is this type of contract for, and why was it written like this?", or to write first drafts, like summarizing a financial statement.

"Now all of a sudden they have an assistant," he said. "People will be able to do work that's at a higher level faster in their career."

Other people studying how workplaces use large language models have found a similar pattern: They help junior employees most. A study of customer support agents by Professor Brynjolfsson and colleagues found that using A.I. increased productivity 14 percent overall, and 35 percent for the lowest-skilled workers, who moved up the learning curve faster with its assistance.

"It closes gaps between entry-level workers and superstars," said Robert Seamans of N.Y.U.'s Stern School of Business, who co-wrote a paper finding that the occupations most exposed to large language models were telemarketers and certain teachers.

The last round of automation, affecting manufacturing jobs, increased income inequality by depriving workers without college educations of high-paying jobs, research has shown.

A.I. could perhaps do this again: for example, if senior managers called on large language models to do the work of junior staffers, potentially increasing the earnings of executives while displacing the jobs of those with less experience. But some scholars say large language models could do the opposite, decreasing inequality between the highest-paid workers and everyone else.

"My hope is it will actually allow people with less formal education to do more things," said David Autor, a labor economist at M.I.T., by lowering barriers to entry for more elite jobs that are well paid.

Impact and risks of artificial intelligence and its use by the board of … – JD Supra

Artificial Intelligence (AI) has become an inseparable part of our everyday lives. AI is used in Siri, facial recognition, navigation systems, search recommendations, and advertisement algorithms, to name just a few examples. We tend to take the foregoing uses of AI for granted as they have become commonplace. However, these AI systems are examples of Traditional AI, the uses for which are limited and rather simplistic compared to cutting-edge developments. New and more advanced versions of AI, such as Generative AI (e.g., ChatGPT), are at the forefront of development and innovation in most industries and corporations. For example, a report from McKinsey & Company indicates that one-third of businesses are reportedly using Generative AI in at least one function.[i] As is relevant to this article, AI is becoming more commonly used as part of the decision making process by corporate boards and their directors and officers. This article considers: (1) the different classifications and types of AI; (2) how corporations and their boards are using AI; (3) the implications of its use; (4) potential impacts and emerging risks for D&O insurers; and (5) suggested regulatory frameworks for AI.

Levels of AI: From Traditional to Generative and Assisted to Fully Autonomous

As previewed above, there are differing levels of AI. In terms of use in corporate decision making, commentators have generally characterized AI by level of autonomy,[ii] which includes three levels: (1) Assisted; (2) Augmented; and (3) Autonomous. This scale categorizes AI on relative decision making ability, with Assisted simply providing support with administrative tasks and no real decision making function or impact, Augmented providing data and support to human decision makers, which can impact (and hopefully improve) the human directors' judgment decisions, and Autonomous, where the decision making ability lies entirely with the AI system.[iii] At present, the technology appears to have moved beyond purely Assisted AI systems, but has not yet developed AI that would be capable or trusted to be fully Autonomous with corporate decision making. As such, and as discussed in the following section, it appears that Augmented AI is the most applicable level of AI automation to corporate decision makers at this time.

In addition to levels of autonomy, AI can also be characterized by the level of originality it can create. The foregoing examples of AI (Siri, navigation, etc.) are forms of Traditional AI. These systems are designed with the capability to learn from data and make decisions or predictions based on that data.[iv] Traditional AI is constrained by the rules it is programmed to know. In comparison, Generative AI, which is at the cutting edge of AI developments, has the ability to create new and original pieces. For example, ChatGPT and DALL-E are Generative AI systems that can create text or image outputs based on written requests or inputs, and create human-like content.[v] Applying these two types of AI to the levels of autonomy, it appears that Traditional AI may be used to analyze data and spot patterns and therefore be used in an Augmented way by corporate boards, whereas Generative AI has the potential to create new patterns and potentially become the autonomous decision maker.

Use of AI in the Board Room

Commentators and futurists seem assured that one day AI will join or replace human Directors and will be running corporations and making decisions autonomously. However, there appears to be general consensus that the current use of AI is best limited to a tool to assist Directors with decision making, rather than having AI as the ultimate decision maker. This may also be consistent with the current view of AI usage for other professional classes.[vi] While there are some well-known examples of purported AI Directors (e.g. VITAL, Tieto, and Salesforce),[vii] these certainly appear to be the exception, and in reality not truly autonomous.

Based on the current data, the most common use of AI still appears to be in data gathering, analysis, and reporting.[viii] One study involving executives from large US public corporations and private equity funds reported that 69% of the respondents are using AI tools as a part of due diligence.[ix] While not specified, it would appear that the respondents are using Traditional AI in an Augmented fashion. However, AI is being billed as a risk management tool, and is being used to monitor compliance and governance of the corporation by identifying vulnerabilities or patterns, detecting violations, and encouraging timely and accurate reporting.[x] There may be other uses of AI to make or assist in decision making that are not as publicized or that are in a truly experimental stage. These new uses will likely involve Generative AI.

Implications of Using AI in Decision Making: the Positive and Negative

From personal experience, AI often speeds up basic processes, such as paying for goods or predictive text and searches. The lure of the same time-saving benefits is also present in the corporate board room. As illustrated above, Directors are using AI to synthesize and evaluate large quantities of information in short periods of time. This can allow for decision making based on larger and more comprehensive data sets than ever before. Further, AI has the ability to identify patterns and trends that may not be immediately obvious, therefore informing corporate strategy. These perceived benefits and characteristics can hopefully lead to a more informed board that is able to pursue multiple goals. Some commentators also suggest AI will lead to a more independent board because decisions are based on the neutral output of information and may give a stronger dissenting voice to independent directors whose positions may be supported by AI.

However, it is acknowledged that the output from AI is only as good as the data that is input, or the rules that govern how the AI functions. Both of these factors can be decided and influenced by humans, resulting in built-in biases. For example, companies using AI and historical data to hire officers and managers found that AI favored certain applicants due to the built-in bias of the historical hiring data.[xi] Further, when considering Generative AI, which has the ability to create original work, the prompt or input the user applies can directly or inadvertently cause Generative AI to produce work that includes biases, errors, falsehoods, and even so-called AI hallucinations.[xii] AI hallucinations refer to made-up or false information that AI systems posit as true. Further, Generative AI can present a black box problem, which refers to the inability of AI systems or experienced data scientists to explain or understand how certain outputs were reached based on the data input.[xiii]

Further, at the current stage of AI development, human judgment is still required, and therefore, the purported benefits of AI are naturally diluted.[xiv] Other potential risks include security and privacy concerns stemming from the vast amounts of data that is fed into AI systems. For example, Samsung banned use of ChatGPT after employees loaded sensitive company data onto the platform that subsequently leaked.[xv] Further, legal and regulatory frameworks in the US do not currently recognize non-human directors. Therefore, significant questions regarding legal liability are likely to present where AI takes a greater role in corporate decision making.

D&O Insurance Risks

How may the foregoing impact D&O insurers? The immediate impact appears to be associated with unknown legal liability. Where AI is informing or assisting in decisions of directors with varying levels of oversight, there is a question of who is responsible for subsequent liability. AI may also be used by boards to assess historical risks or evaluate losses. Some suggest that typical derivative actions could transform into actions also brought against AI software programmers or providers. Securities actions could arise as a result of undisclosed usage of AI in corporate decision making or inadequate disclosure of the risks that it may present. However, at present it would appear most likely that both new (AI developers) and traditional targets (corporations and their Directors and Officers) would be pursued together, which risks exposing additional policy proceeds and inflating demands and potential settlements. There also appears to be risk in the unknown, including the uncertainty of how breach of fiduciary standards will be applied when considering AI involvement in decision making. Ultimately, additional risks and areas of exposure are likely to develop as AI systems are given more autonomy over decision making.

Potential Regulatory Frameworks

Because of AI's relatively new use in the corporate landscape, and the fact that much of its potential lies in now unknown future uses, including fully autonomous decision making, there is no clear regulatory or legal framework in place that will, or can, address issues presented by future AI developments. Some have suggested that AI could be regulated like the pharmaceutical industry and be subject to rigorous testing prior to approval for market uses.[xvi] Alternatively, there have been suggestions that lawmakers develop clear laws paired with incentives to produce legal AI products before such technologies advance further. However, both frameworks would appear to risk stifling innovation and investment. Alternatively, certain academics have suggested a disclosure-based regulatory approach, which seems similar to SEC regulations and disclosure obligations for US public companies. They suggest that such a framework would be most suitable because the cost would not unduly restrict innovation and investment in AI, yet the level of disclosure still provides the needed oversight in a developing industry.

Conclusion

Traditional AI is entrenched in everyday life, and the technology has evolved significantly with Generative AI. This evolution is notable for businesses, with Generative AI usage seemingly becoming widespread. It will present a number of ethical and legal issues on many fronts. For those in the D&O industry, developments in AI may also give rise to novel issues and increase potential risks. Inquiries to Insureds about the use of AI in their operations and in connection with the management of the entity will likely become more commonplace as such systems gain traction across a wide range of industries. Claims activity related to non-disclosure of AI risks, or claims arising from the reliance on such technologies in decision making, even with the involvement of human intelligence, may also be worth monitoring.

[i] Lynch, Sarah, 2 Common Mistakes CEOs Might Be Making With Generative A.I. (August 11, 2023), Inc., Available at: https://www.inc.com/sarah-lynch-/2-common-mistakes-ceos-might-be-making-with-generative-ai.html

[ii] See Petrin, Martin, Corporate Management in the Age of AI (March 4, 2019). Columbia Business Law Review, Forthcoming, UCL Working Paper Series, Corporate Management in the Age of AI (No. 3/2019), Faculty of Laws University College London Law Research Paper No. 3/2019, Available at SSRN: https://ssrn.com/abstract=3346722; See also Mertens, Floris, The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis (January 27, 2023). Financial Law Institute Working Paper Series 2023-01, Available at SSRN: https://ssrn.com/abstract=4339413.

[iii] Id.

[iv] Marr, Bernard, The Difference Between Generative AI And Traditional AI: An Easy Explanation For Anyone, (July 24, 2023) Forbes, Available at:

https://www.forbes.com/sites/bernardmarr/2023/07/24/the-difference-between-generative-ai-and-traditional-ai-an-easy-explanation-for-anyone/?sh=7f39606c508a

[v] Marr, Bernard, How To Stop Generative AI From Destroying The Internet, (August 14, 2023) Forbes, Available at:

https://www.forbes.com/sites/bernardmarr/2023/08/14/is-generative-ai-destroying-the-internet/?sh=4b56b4c597f6

[vi] LaCroix, Kevin, AI is Not Quite Ready to Replace the Lawyers (May 30, 2023), The D&O Diary, Available at: https://www.dandodiary.com/2023/05/articles/blogging/ai-is-not-quite-ready-to-replace-the-lawyers/

[vii] Petrin, Corporate Management in the Age of AI (Note ii).

[viii] Mertens, The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis (Note ii).

[ix] Id.

[x] Bruner, Christopher M., Artificially Intelligent Boards and the Future of Delaware Corporate Law (September 22, 2021). University of Georgia School of Law Legal Studies Research Paper No. 2021-23, Available at SSRN: https://ssrn.com/abstract=3928237.

[xi] Earley, Sen & Zivin, Sparky, The Rise of the AI CEO: A Revolution in Corporate Governance (March 8, 2023), Available at: https://www.teneo.com/the-rise-of-the-ai-ceo-a-revolution-in-corporate-governance/

[xii] Eliot, Lance, Prompt Engineering Deciphers Riddle Of Show-Me Versus Tell-Me As Choice Of Best-In-Class Prompting Technique For Generative AI, (August 12, 2023) Forbes, Available at:

https://www.forbes.com/sites/lanceeliot/2023/08/12/prompt-engineering-solves-riddle-of-show-me-versus-tell-me-as-choice-of-best-in-class-prompting-technique-for-generative-ai/?sh=661e70ed53d9

[xiii] Blackman, Reid, Generative AI-nxiety, (August 14, 2023) Harvard Business Review, Available at: https://hbr.org/2023/08/generative-ai-nxiety

[xiv] Ajay Agrawal et al., What to Expect From Artificial Intelligence, 58 MITSLOAN MANAGEMENT REVIEW (2017), at 26, http://ilp.mit.edu/media/news_articles/smr/2017/58311.pdf.

[xv] Blackman, Generative AI-nxiety (Note xii).

[xvi] Kamalnath, Akshaya and Varottil, Umakanth, A Disclosure-Based Approach to Regulating AI in Corporate Governance (January 7, 2022). NUS Law Working Paper No. 2022/001, Available at SSRN: https://ssrn.com/abstract=4002876.

Seeking to boost your career in A.I.? These 4 MBA programs offer … – Fortune

Artificial intelligence (A.I.) is slowly integrating into every aspect of society, from classrooms and hospitals to automobiles and farm fields. As its use grows beyond tools like ChatGPT, so does the need for experts with the skills to develop and harness A.I. and machine learning programs.

Recent job numbers show that growth in action. According to the World Economic Forum's 2023 Future of Jobs Report, the demand for A.I. and machine learning specialists is expected to increase at a rate of 40%, faster than any other job in the labor force. That translates to an estimated 1 million new jobs over the next five years.

As a result, jumping into a career in A.I. at a start-up or large company may be more possible than ever. Data from LinkedIn shows that A.I. skills are among the most sought-after at many Fortune 500 tech companies, such as Amazon, Apple, and IBM. And, among companies with more than 50,000 employees, A.I. and big data rank as the top priority for training strategies over the next five years, according to the WEF report.

This also means opportunities for sizable salaries. Estimates from Glassdoor put the average annual salary of an A.I. engineer at $120,000. Many companies have also created new executive-level roles for A.I. leaders, such as chief A.I. officers.

So how can you land a leadership role in A.I.? One pathway is obtaining an A.I.-focused MBA degree.

At many business schools, the intersection of leadership and A.I. is becoming much more common. That's because of A.I.'s continued integration across business, from research to customer service, says Manuel Nuñez, associate dean of graduate programs at the Villanova School of Business. It is among the few schools that offer specialized MBA programs in A.I.

"It's clear that A.I. is infused into almost every part of the business ecosystem," Nuñez tells Fortune.

Because of the in-demand skills, every MBA candidate, no matter their specialization, will be exposed to A.I. and machine learning at Villanova, he adds. This may include immersion in foundational coursework, like studying predictive and generative A.I., as well as applied coursework in the business problem-solving space. Students will also importantly weigh the social and ethical implications of A.I., Nuñez says.

Like Villanova, Northwestern University offers a combined business and A.I. program called an MBAi. Eric Anderson is the director of the program.

"While A.I. and analytics hold tremendous promise, most firms continue to struggle with delivering and scaling successful business outcomes," he says in the press release during the MBAi's launch.

As the world accepts the future of A.I., the real-world skills in machine learning, language processing, and data management obtained through an A.I.-concentrated MBA could help propel you as a leader in A.I. engineering, project managing, or consulting. Fortune has made it a little easier for you and compiled a list of some of the program opportunities.

Dentaly explores use of artificial intelligence in dentistry – Digital Health

A recent survey from Dentaly has shed light on how the use of artificial intelligence (AI) within the sector is transforming dentistry and having an impact on patient care.

The survey revealed that already 35% of the licensed dentists questioned have implemented the technology within their practice. And of those who have deployed AI, 77% agree that they have seen positive results from it.

Overall 81% of the US dentists had a positive attitude towards AI applications in dentistry, with 62% agreeing that some of the operational tasks in dental clinics could be carried out by AI. The respondents felt that AI held value for enhancing their clinical practice, streamlining workflow and improving patient outcomes.

The most significant application for AI for those dentists who had already implemented the technology was image analysis (51%), followed by diagnosis (43%) and treatment planning (38%).

The top-ranking benefits of AI in dentistry were named by the Dentaly report as: faster and more efficient workflows (76%), predictive analytics for patient outcomes (48%), improved accuracy in diagnosis (40%), enhanced treatment planning (35%) and personalised treatment options (20%).

Despite the potential AI holds to revolutionise the industry, there are still concerns, particularly over the privacy and security of both patient data and their own practice. Dental practitioners expressed concerns about the potential breach of relevant laws and how this could compromise their practice.

While data privacy and security top the list of concerns (67%), it jointly shares this position with worries over the reliability and accuracy of AI systems. Other concerns include ethical considerations (52%), the cost of implementing AI technologies (43%) and potential job displacement (33%).

Brendan Macdonald, CEO, Digital Smile Design, said: "Overall generative AI has the potential to transform the dental industry by improving the accuracy and efficiency of diagnosis and treatment planning, streamlining care and automating routine tasks. However, there are still limitations to the widespread application of AI in dentistry, and more research and development are needed to improve the accuracy and reliability of AI algorithms.

"AI cannot replace dental professionals; the solutions and systems are far from being able to do so. AI should be viewed as a complementary asset to assist dental professionals in their work."

When it comes to understanding how AI will impact the future of dentistry, over half (52%) felt that they would see incremental improvements in current practices. Close to a third (29%) felt there would be limited impact on dentistry, while 10% felt they would see revolutionary changes in the field.

Overall, amongst the respondents, there was a general openness to exploring and incorporating AI advancements within their practices. Ten percent said they were very likely to adopt AI technologies, while a further 47% said they were somewhat likely. None of the respondents perceived it was very unlikely they would implement the tech within the next five years.

One year ago Clyde Munro Dental Group became the first dental service in Scotland to trial AI from Manchester Imaging, to improve the accuracy of the prevention and diagnosis of early tooth decay.

31% of investors are OK with using artificial intelligence as their advisor – CNBC

Jakub Porzycki/NurPhoto via Getty Images

Nearly 1 in 3 investors would use artificial intelligence as their financial advisor, a new survey suggests, and that has the potential to lead to flawed advice, experts said.

Specifically, 31% of investors queried would be comfortable implementing financial advice from a generative AI program without first verifying those recommendations with another source, according to a poll by the Certified Financial Planner Board of Standards, the body that governs the CFP designation for financial advisors.

"It is a bit concerning," said Kevin Keller, CEO of the CFP Board.

In simple terms, AI is technology that aims to simulate human intelligence. Generative AI uses algorithms to create new content, like essays, song lyrics, art, photography and computer code, or, in this case, financial advice.

ChatGPT, a program that went viral after being debuted to the public late last year, is one example of generative AI.

Would-be financial-advice recipients can use such programs to ask financial questions or prompts.

Consider this sample prompt from Keller: "Create an asset allocation for a 62-year-old male investor who is moderately risk tolerant."
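
A minimal sketch of putting that prompt to a generative AI model, assuming the OpenAI Python client and an illustrative model name; as the survey's takeaway suggests, whatever comes back should be verified against another source before acting on it.

```python
# Illustrative sketch of sending Keller's sample prompt to a generative AI model
# via the OpenAI Python client. The model name is an assumption for illustration;
# the output is not financial advice and should be verified independently.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model
    messages=[{
        "role": "user",
        "content": ("Create an asset allocation for a 62-year-old male investor "
                    "who is moderately risk tolerant."),
    }],
)
print(response.choices[0].message.content)
```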

The algorithms that underpin generative AI compile data from sources like the internet to develop responses, and those data sources may not be reliable. The quality of the results depends on the quality of the model, according to McKinsey & Co.

"The outputs aren't always accurate or appropriate," the consulting firm wrote of generative AI.

"For its part, ChatGPT seems to have trouble counting, or solving basic algebra problems or, indeed, overcoming the sexist and racist bias that lurks in the undercurrents of the internet and society more broadly," it added.

In short, financial advice outputs won't necessarily be 100% trustworthy.

Of course, technology and algorithms aren't new for investors, nor is the skepticism surrounding that technology.

So-called robo-advisors, which use algorithms to automate asset allocations for investors, began popping up around the time of the 2008 financial crisis. They've grown in popularity, inspiring questions as to whether they can deliver advice on par with human financial advisors.

Investors, especially those with relatively complicated financial lives, face an additional hurdle with AI: Engaging with it becomes difficult if someone doesn't know what questions to ask in the first place, wrote Michael Kitces, a CFP and head of planning strategy at Buckingham Wealth Partners.

"Have you tried logging into ChatGPT to ask it questions only to find yourself sitting there wondering, 'What should I ask an AI chatbot?' Kitces said. "Now imagine that feeling again, but this time you have to ask it the right question because your financial life savings are on the line."

Perhaps counterintuitively, young investors seem more wary about AI outputs than older investors: 62% of investors age 45 and older said they were "very satisfied" with getting financial advice from generative AI, versus 38% of investors under 45, according to the CFP Board poll.

Yet older investors, who may be in or near retirement, are generally the ones with more complex finances and in need of more tailored advice, experts said.

Ultimately, there have always been do-it-yourself investors, and there will continue to be, Keller said. Those who leverage AI for financial advice should "trust but verify," he said.

"It's the Wild West out there," he added.
