The world’s largest AI fund has surged 23% this year, beating even the red-hot Nasdaq index – Yahoo Finance

The artificial intelligence sector has seen a boom in investor interest with the rise of ChatGPT. (Image: NanoStockk/Getty Images)

The Global X Robotics & Artificial Intelligence ETF, the largest AI fund in the world, is up 23% so far in 2023.

This has included $135 million of inflows so far in 2023, including $80 million in March, according to data compiled by Bloomberg.

More than half of professional investors plan to add the AI theme to their portfolios this year, a new survey by Brown Brothers Harriman found.

The rise of ChatGPT has spurred a renewed spike in investor interest in the artificial intelligence sector. That's led the world's largest AI fund, the Global X Robotics & Artificial Intelligence ETF (BOTZ), to a stronger start in 2023 than even the red-hot Nasdaq 100.

The $1.7 billion ETF has gained 23%, while the Nasdaq 100, coming off its second-strongest quarter in a decade, is up 19%.

The fund's top holding is Nvidia, which was the top-performing name in both the S&P 500 and the more tech-heavy Nasdaq 100 during the first quarter. The chipmaker, which makes up roughly 9% of the ETF's net assets, has climbed 88% in 2023. Further, lesser-weighted fund members like C3.ai and South Korea-based Rainbow Robotics have seen their stocks soar more than 200% this year.

Amid the strong fund returns, BOTZ has seen $135 million of inflows so far in 2023, including $80 million in March, according to data compiled by Bloomberg. A new survey from Brown Brothers Harriman suggests the trend toward AI will continue.

Among 325 professional investors, 56% plan to add AI- and robotics-themed exposure to their portfolios this year, the survey found. That compares to 46% in 2022, and the category beat out all others except internet and technology.

Jan Szilagyi, the CEO of AI-powered market analytics platform Toggle AI, said he's more bullish on the sector now than even before the banking turmoil rattled financial markets in March.

With top players in finance continuing to give tools like ChatGPT plenty of attention, he's encouraged by the rapid progress seen across large language models.

"For the moment, most of the technology's promise is still in the future," Szilagyi told Insider on Monday. "The leap between GPT 3.5 and GPT 4 shows that we are still early in the upgrade curve. This technology is going to see dramatic improvement in the coming years."

Read the original article on Business Insider

Excerpt from:
The world's largest AI fund has surged 23% this year, beating even the red-hot Nasdaq index - Yahoo Finance

Read More..

A freeze in training artificial intelligence won’t help, says professor – Tech Xplore

The development of artificial intelligence (AI) is out of control, in the opinion of the roughly 3,000 business leaders and scientists who have signed an open letter.

The signatories call for a temporary halt to training especially high-performance AI systems. Prof. Urs Gasser, expert on the governance of digital technologies, examines the important questions from which the letter deflects attention, talks about why an "AI technical inspection agency" would make good sense and looks at how far the EU has come compared to the U.S. in terms of regulation.

Artificial intelligence systems capable of competing with human intelligence may entail grave risks for society and humanity, say the authors of the open letter. Therefore, they continue, for at least six months no further development should be conducted on technologies which are more powerful than the recently introduced GPT-4, successor to the language model ChatGPT.

The authors call for the introduction of safety rules in collaboration with independent experts. If AI laboratories fail to implement a development pause voluntarily, governments should legally mandate the pause, say the signatories.

Unfortunately the open letter absorbs a lot of attention which would be better devoted to other questions in the AI debate. It is correct to say that today probably nobody knows how to train extremely powerful AI systems in such a way that they will always be reliable, helpful, honest and harmless.

Nonetheless, a pause in AI training will not help achieve this, primarily because it would be impossible to enforce such a moratorium on a global level, and because it would not be possible to implement the regulations called for within a period of only six months. I'm convinced that what's necessary is a stepwise further development of technologies in parallel with the application and adaptation of control mechanisms.

First of all, the open letter once again summons up the specter of what is referred to as an artificial general intelligence. That deflects attention from a balanced discussion of the risks and opportunities represented by the kind of technologies currently entering the market. Second, the paper refers to future successor models of GPT-4.

This draws attention away from the fact that GPT-4's predecessor, ChatGPT, already presents us with essential challenges that we urgently need to address, for example misinformation and prejudices that the machines replicate and scale. And third, the spectacular demands made in the letter distract us from the fact that we already have instruments we could use to regulate the development and use of AI.

Recent years have seen the intensive development of ethical principles which should guide the development and application of AI. These have been supplemented in important areas by technical standards and best practices. Specifically, the OECD Principles on Artificial Intelligence link ethical principles with more than 400 concrete tools.

And the US National Institute of Standards and Technology (NIST) has issued a 70-page guideline on how distortions in AI systems can be detected and handled. In the area of security in major AI models, we're seeing new methods like constitutional AI, in which an AI system "learns" principles of good conduct from humans and can then use the results to monitor another AI application. Substantial progress has been made in terms of security, transparency and data protection and there are even specialized inspection companies.
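
To illustrate the monitoring idea in the loosest possible terms, the sketch below shows one model call reviewing another's draft against written principles. This is not the actual constitutional AI training method, which involves fine-tuning on self-generated critiques; the query_model function, the principles, and the control flow are all invented stand-ins for illustration.

```python
# Minimal sketch of a critique-and-monitor loop in the spirit of
# constitutional AI: one model call reviews another's draft against
# written principles. query_model is a hypothetical stand-in for a
# real LLM API call, not part of any named library.

PRINCIPLES = [
    "Do not give instructions that could cause physical harm.",
    "Do not reveal personal data about private individuals.",
]

def query_model(prompt: str) -> str:
    """Hypothetical stand-in; a real system would call an LLM API here."""
    return "OK: no principle is violated."

def critique_and_revise(draft: str) -> str:
    principle_list = "\n".join(f"- {p}" for p in PRINCIPLES)
    critique = query_model(
        f"Principles:\n{principle_list}\n\n"
        f"Does the following draft violate any principle? Answer OK or VIOLATION.\n{draft}"
    )
    if critique.upper().startswith("VIOLATION"):
        # The monitoring call flagged a problem, so ask for a rewrite.
        return query_model(f"Rewrite this draft to comply with the principles:\n{draft}")
    return draft

print(critique_and_revise("A draft answer to a user question."))
```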

Now the essential question is whether or not to use such instruments, and if so how. Returning to the example of ChatGPT: Will the chat logs of the users be included in the model for iterative training? Are plug-ins allowed which can record user interactions, contacts and other personal data? The interim ban and the investigation of ChatGPT's developers initiated by the Italian data protection authorities are signs that much is still unclear here.

The history of technology has taught us that it is difficult to predict the "good" or "bad" use of technologies, that technologies often entail both aspects, and that negative impacts can often be unintentional. Instead of fixating on a certain point in a forecast, we have to do two things: First, we have to ask ourselves which applications we as a society do not want, even if they were possible. We need clear red lines and prohibitions.

Here I'm thinking of autonomous weapons systems as an example. Second, we need comprehensive risk management, spanning the range from development all the way to use. The demands placed here should increase as the magnitude of the potential risks a given application poses to people and the environment grows. The European legislature is correct in taking this approach.

This kind of independent inspection is a very important instrument, especially when it comes to applications that can have a considerable impact on human beings. And by the way, this is not a new idea: we already see inspection procedures and bodies like these at work in a wide variety of areas of life, ranging from automobile inspections to general technical equipment inspections and financial auditing.

However, the challenge is disproportionately greater with certain AI methods and applications, because certain systems develop themselves as they are used, i.e. they are dynamic in nature. And it's also important to see that experts alone won't be able to make a good assessment of all societal impacts. We also need innovative mechanisms which, for example, include disadvantaged people and underrepresented groups in the discussion on the consequences of AI. This is no easy job, and one I wish were attracting more attention.

We do indeed need clear legal rules for artificial intelligence. At the EU level, an act on AI is currently being finalized which is intended to ensure that AI technologies are safe and comply with fundamental rights. The draft bill provides for the classification of AI technologies according to the threat they pose to these principles, with the possible consequence of prohibition or transparency obligations.

For example, plans include prohibiting evaluation of private individuals in terms of their social behavior, as we are currently seeing in China. In the U.S. the political process in this field is blocked in Congress. It would be helpful if the prominent figures who wrote the letter would put pressure on US federal legislators to take action instead of calling for a temporary discontinuation of technological development.

The rest is here:
A freeze in training artificial intelligence won't help, says professor - Tech Xplore

Read More..

Artificial Intelligence Becomes a Business Tool – CBIA

The growth of artificial intelligence is impossible to ignore, and more businesses are making it part of their operations.

In a recent Marcum LLP-Hofstra University survey, 26% of CEOs responded that their companies have used AI tools.

CEOs said they use AI for everything from automation, to predictive analytics, financial analysis, supply chain management and logistics, risk mitigation, and optimizing customer service.

Another 47% of CEOs said they are exploring how AI tools can be used in their operations.

Only 10% said they don't envision utilizing AI tools, and 16% were uncertain whether AI would be relevant for their business.

The survey, conducted in February, polled 265 CEOs from companies with revenues ranging from $5 million to more than $1 billion.

58% of CEOs surveyed said that expectations and demands from their customers and clients increased in the last year.

CEOs said those expectations include more personalized service, immediate response times, more technology, and resistance to price increases.

"Now that the pandemic economy is behind us and companies have resumed full operation, CEOs are challenged to meet higher expectations from customers," said Jeffrey Weiner, Marcum's chair and CEO.

"This certainly includes figuring out how to deploy new tools such as artificial intelligence to effectively position their companies for the future."

When asked about business planning in the next 12 months, economic concerns (53%), availability of talent (48%), and rising material/operational costs (43%) were the top three most important influences for CEOs.

There is some growing optimism among CEOs, with 33% responding that they are very concerned that the economy will experience a recession in the coming year.

That number is down from 47% in Marcum's November 2022 survey.

54% of CEOs said they were somewhat concerned about a recession, compared with 43% in November.

84% said they had a positive overall outlook on the business environment.

"I think the uptick in CEO optimism is a reflection not only of their feelings about the economy," said Janet Lenaghan, dean of Hofstra University's Zarb School of Business, "but their confidence in their own ability to be flexible and meet the moment, something they had to learn to get through COVID-19."

The survey also asked CEOs about leadership succession, calling it an essential process for ensuring business continuity, retaining talent, and developing future leaders.

Most CEOs (79%) said their companies have a succession plan in place, but only 45% were very confident in that plan.

41% of CEOs at companies without a succession plan said it wasn't a priority for their companies.

The Marcum-Hofstra survey is conducted periodically by Hofstra MBA students as a way to gauge mid-market CEOs' outlook and priorities for the next 12 months.

Originally posted here:
Artificial Intelligence Becomes a Business Tool CBIA - CBIA

Read More..

C3.ai Stock: 3 Reasons to Avoid This Hot Artificial Intelligence … – The Motley Fool

Everyone is talking about artificial intelligence (AI) these days. Thanks to the breakthrough of ChatGPT, tech CEOs and pundits alike are convinced that artificial intelligence, in particular generative AI, will be the next major computing platform.

Unfortunately for investors, pure-play AI stocks are hard to come by on the stock market, making it difficult to know how to capitalize on this opportunity. That's a major reason why C3.ai (AI 0.89%) has attracted so much attention on Wall Street. It's one of the few AI stocks available to investors, with a software-as-a-service model that delivers enterprise AI solutions to customers.

As a result of that surge of interest in artificial intelligence, C3.ai stock nearly tripled through the first three months of the year. Before you jump on the bandwagon with the high-flying AI stock, you should be aware of the drawbacks it's facing. Here are three reasons to avoid the stock at the moment.

Image source: Getty Images.

The hype around AI and the attention on C3.ai, in particular, might make you think that this is a fast-growing software company, but its recent results show that's anything but the case.

C3.ai reported a decline in revenue in the fiscal third quarter, its most recent period, showing it's facing the same kind of challenges as most of the tech sector. In Q3, revenue fell 4.4% year over year to $66.7 million. This was partly due to the company's decision to change its business model from subscription-based to consumption-based, which has created some noise in the results.

Revenue is expected to decline slightly in the current quarter as well. But management said revenue growth would accelerate in fiscal 2024 due to drivers like the launch of its generative AI platform, increased interest in the consumption-based model, and new and expanded partnerships with businesses like Alphabet's Google Cloud.

C3.ai is also losing money. It's on track for an adjusted operating loss of $69 million to $73 million this year, but management expects the company to be cash flow positive and profitable on an adjusted basis by the end of 2024.

Those are big promises from a company that has struggled with execution, including the business model issue. And given the macroeconomic climate, investors shouldn't assume it will hit that guidance.

Most software companies tend to receive a range of interest across multiple industries, but C3.ai has struggled with diversifying its revenue sources.

In fiscal 2022, 31% of its revenue came from Baker Hughes, the oilfield services company with which it has a strategic partnership, and its top three customers last year accounted for 57% of accounts receivable, a proxy for revenue.

In its most recent quarter, 72% of its bookings came from the oil and gas sector. That makes it particularly vulnerable to a crash in oil prices, which is likely in a global recession as oil prices are highly cyclical.

The company has a "lighthouse" strategy of tapping into new industries by landing a flagship customer in that sector and then expanding to other customers in that industry from there. But while C3.ai also serves industries like banking, utilities, defense, and manufacturing, that revenue hasn't been sufficient to diversify the business away from oil and gas.

The company finished its most recent quarter with 236 customers, though it's hopeful the consumption-based model can bring in a larger number of smaller accounts.

The stock's tripling in the first quarter was based almost entirely on hype around artificial intelligence rather than any improvement in the fundamentals. Shares also got a boost at the end of January after C3.ai announced its new generative AI product suite, though it doesn't appear to be generally available yet.

However, after the current run-up in the price, the stock now trades at a price-to-sales ratio of 15. Through the first three quarters of the fiscal year, the company has lost $217 million on $174 million in revenue, indicating it's a long way from being profitable on a generally accepted accounting principles (GAAP) basis.
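
For readers unfamiliar with the metric, price-to-sales is simply market capitalization divided by trailing-twelve-month revenue. The sketch below uses rounded, assumed figures chosen to be consistent with a ratio of about 15, not exact market data:

```python
# Illustrative price-to-sales calculation. The inputs are rounded,
# assumed figures, not exact market data for C3.ai.
market_cap = 3.6e9      # assumed market capitalization, in dollars
ttm_revenue = 0.24e9    # assumed trailing-twelve-month revenue, in dollars

price_to_sales = market_cap / ttm_revenue
print(f"P/S ratio: {price_to_sales:.0f}")  # P/S ratio: 15
```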

Given those financials, investors seem to be bidding the stock higher on nothing more than the company's growth promises and vague notions about the transformative potential of AI.

At this point, a bet on C3.ai seems like more of a lottery ticket on artificial intelligence than a rational investment in a company whose future cash flows justify its current price.

After the collapse in tech stocks over the last year, investors should know better.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jeremy Bowman has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.

The rest is here:
C3.ai Stock: 3 Reasons to Avoid This Hot Artificial Intelligence ... - The Motley Fool

Read More..

Lincoln musician says artificial intelligence will not replace artists – KLKN

LINCOLN, Neb. (KLKN) – Artificial intelligence can create images, write essays and collect data.

But will it ever replace musicians?

Matt Waite, a professor at the University of Nebraska-Lincoln, said AI predicts what's coming next.

"With language models like ChatGPT, it's looking at enormous amounts of text," he said. "It's looking at how words are put together, and then essentially, it's making a prediction."

Waite said several companies pull data from across the web to assist AI in creating that prediction.
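
Waite's description of prediction can be made concrete with a toy sketch. The bigram counter below learns which word tends to follow which in a tiny, invented corpus; real language models like ChatGPT do this with neural networks over subword tokens and vastly more text, so this illustrates only the framing, not the method:

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then suggest the most frequent successor. Real LLMs learn this kind
# of pattern with neural networks over enormous amounts of text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # prints 'cat', the most frequent follower of 'the'
```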

But what happens when an artist's style is portrayed by AI?

Newly launched campaigns, such as the Human Artistry Campaign, have already banded together to address challenges presented by AI.

Local musician Darren Keen thinks AI-generated content will not be a replacement for artists.

"I think that eventually, these things will parse themselves out to be more like tools than full-on replacements for musicians and creative people," he said.

At this time, Waite says it's unclear how AI will impact the world of music, media and education.

"We're going to be making adjustments for years and years," he said. "This is a significant moment in society where we're going to remember the time before AI and the time after AI."

View post:
Lincoln musician says artificial intelligence will not replace artists - KLKN

Read More..

Why Does Artificial Intelligence Need Regulation? – Analytics Insight

The following is information regarding the need for regulation of artificial intelligence.

This is the world that Artificial Intelligence (AI) and tens of millions of video cameras installed in both public and private areas are making possible. AI-amplified surveillance can not only identify you and your friends, but it can also track you using other biometric characteristics, like your gait, and even find clues about how you feel.

Although advances in artificial intelligence (AI) promise to transform sectors like health care, transportation, logistics, energy production, environmental monitoring, and the media, serious concerns remain regarding how to prevent state actors from abusing these potent tools. Without enforceable rules, AI could contribute to human rights violations; regulation of artificial intelligence would help protect lives.

"Nowhere to Hide: Building Safe Cities with Technology Enablers and AI," a report by the Chinese infotech company Huawei, expressly celebrates this vision of pervasive government surveillance. Selling AI as its Safe City solution, the company boasts that by analyzing people's behavior in video footage and drawing on other government data like identity, financial status, and circle of acquaintances, AI could rapidly recognize signs of wrongdoing and anticipate potential crime.

To keep an eye on what its citizens are doing in public places, China has already installed more than 500 million surveillance cameras. A lot of them are facial recognition cameras that automatically identify drivers and pedestrians and compare them against national blacklists and photo ID and license plate registries. This kind of surveillance detects political demonstrations as well as crimes. People who took part in COVID-19 lockdown protests, for instance, were recently detained and questioned by Chinese police using this kind of data.

There are currently about 85 million video cameras in both public and private areas in the United States. An ordinance that allows police to request access to private live feeds was recently passed in San Francisco. American retail stores, sports arenas, and airports are increasingly employing real-time facial recognition technology.

Woodrow Hartzog, a professor at Boston University School of Law, and Evan Selinger, a philosopher at the Rochester Institute of Technology, contend that facial recognition is the ideal instrument for oppression. "The most uniquely dangerous surveillance mechanism ever invented," they write. Real-time facial recognition technologies would transform our faces into permanent identification cards displayed to the police. Advances in artificial intelligence, widespread video and photo surveillance, dwindling costs of storing big data sets in the cloud, and affordable access to sophisticated data analytics systems make possible the use of algorithms to identify people that are perfectly suited to authoritarian and repressive ends, they point out.

The 2019 Albania Declaration, which calls for a halt to the use of facial recognition for mass surveillance, has been signed by more than 110 non-governmental organizations. The Electronic Frontier Foundation, the Electronic Privacy Information Center, Fight for the Future, and Restore the Fourth are among the organizations from the United States that have signed a petition urging countries to suspend the further deployment of facial recognition technology for mass surveillance.

In 2021, the Office of the United Nations High Commissioner for Human Rights issued a report noting that the widespread use by states and organizations of artificial intelligence, including profiling, automated decision-making, and machine learning technologies, affects the enjoyment of the right to privacy and related rights. The report urged governments to place moratoriums on the use of potentially high-risk technology, such as remote real-time facial recognition, until it can be guaranteed that its use cannot violate human rights.

The European Digital Rights network published an analysis this year of the proposed AI Act, the European Union's regulation covering remote biometric identification. "Being tracked in a public space by a facial recognition system (or another biometric system) is, on a very basic level, incompatible with the essence of informed consent," the report points out. "You are required to consent to biometric processing if you wish or need to enter that public space. That is coercive and incompatible with the goals of the EU's human rights regime (particularly the rights to privacy and data protection, freedom of speech, freedom of assembly, and non-discrimination)."

We run the risk of accidentally sliding into turnkey despotism if we don't outlaw government agents' use of AI-enabled real-time facial recognition surveillance.

Crazy scenarios exist in which this moment is the last chance to forestall Armageddon. Still, now isn't the time to regulate AI within the realm of reason.

More:
Why does Artificial Intelligence Needs Regulation? - Analytics Insight

Read More..

Former Google CEO Eric Schmidt is worried about artificial intelligence. Here's why – Mint

Former Google CEO and Chairman Eric Schmidt has warned about the dangers of new-age artificial intelligence technology. Speaking to ABC's This Week, Schmidt said there is a need to 'make sure this stuff (Large Language Models) doesn't harm but just help'.

On being asked to explain the perils and promise of AI, Schmidt replied, "Well, imagine a world where you have an AI doctor that makes everyone healthier in the whole world. Imagine a world where you have an AI tutor that increases the educational capabilities of everyone in every language. These are remarkable. And these technologies, which are known as Large Language Models, are clearly going to do this."

However, the former Google CEO was quick to point out the threats that humanity faces from these language models.

"We face extraordinary new challenges from these things, whether it's deep fakes or people falling in love with their AI tutor," he added.

Elaborating on what makes him uneasy, Schmidt added that he is worried about the use of LLMs in biology, in cyber-attacks, and in manipulating the way politics works.

Schmidt also pointed out the speed at which these new artificial intelligence technologies are changing the world, noting that it took Gmail five years to reach 100 million daily active users, while ChatGPT reached the same milestone in about 2 months.

This is not the first time that Schmidt has raised such concerns. During an earlier interaction with author and journalist Walter Isaacson, he noted that large language models could be used for biological warfare and could change the dynamics of war.

Read the original post:
Former Google CEO Eric Schmidt is worried about artificial intelligence. Here's why | Mint - Mint

Read More..

Unrestricted Artificial Intelligence Growth Might Lead to Extinction of … – Transcontinental Times

UNITED STATES: The unchecked and rapid development of artificial intelligence (AI) is highly irresponsible and could result in a superhumanly intelligent AI wiping out all sentient life on Earth.

This is the warning issued by Machine Intelligence Research Institute decision theorist Eliezer Yudkowsky, who recently penned an alarming article for Time Magazine about the potentially catastrophic consequences of the current AI race among major tech players.

Yudkowsky is a prominent figure in the field of AI and is known for popularising the concept of friendly AI. However, his current outlook on the future of Artificial Intelligence is dystopian and echoes the worlds of science fiction films.

In a recent article, Yudkowsky highlighted the need to curb the development of Artificial Intelligence and ensure that it does not exceed human intelligence. He also emphasised the importance of ensuring that AI systems care for biological life and do not pose a threat to it.

The Centre for Artificial Intelligence and Digital Policy also recently issued a letter urging regulators to halt further commercial deployment of new generations of the GPT language model created by OpenAI.

The letter carried 1,000 signatures from technology experts and prominent figures, including Elon Musk. It called for a six-month pause on GPT-4's commercial activities, and the Centre plans to ask the United States Federal Trade Commission (FTC) to investigate whether the commercial release of GPT-4 violated US and global regulations.

Yudkowsky applauded the letter's request for a moratorium and expressed respect for the individuals who signed it, but he thinks it downplays the gravity of the problem.

He emphasised that the key issue is not human-competitive intelligence but what happens after AI surpasses human intelligence.

Yudkowsky pointed out that humanity is not prepared for AI's capabilities and is not on course to be prepared for them within any reasonable time window.

Progress in AI capabilities is far ahead of progress in AI alignment or even understanding what is going on inside these systems.

He cautioned that if we continue along this path, the most likely outcome of creating a superhumanly intelligent AI under conditions even somewhat similar to the ones we currently face is that virtually everyone on Earth will perish.

In his view, survival would require precision, preparation, fresh scientific understanding, and avoiding AI systems made up of huge, incomprehensible arrays of fractional numbers.

According to Yudkowsky, AI could potentially be built to care for humans or sentient life in general, but it is currently not understood how this could be achieved.

Without this caring factor, AI would not love or hate humans but would rather see them as consisting of atoms that could be used for something else.

The likely result of humanity facing down a superhuman intelligence would be a total loss.

The concerns raised by Yudkowsky and the Centre for Artificial Intelligence and Digital Policy are significant and should be taken seriously.

While AI has the potential to bring about many benefits, it is essential to ensure that its development is carefully monitored to avoid catastrophic consequences.

Continued here:
Unrestricted Artificial Intelligence Growth Might Lead to Extinction of ... - Transcontinental Times

Read More..

Can an Artificial Intelligence Model Be Built to Closely Mimic the … – NYU Langone Health

Collaboration and innovation are at the heart of research endeavors at NYU Langone. Biyu J. He, PhD, and Eric K. Oermann, MD, are exemplifying these qualities, merging their expertise to build an artificial intelligence (AI) model that imitates the human brain.

Richard Feynman, a Nobel Prize-winning theoretical physicist, once said, "What I can't make, I don't understand." The quote is one that aptly captures the spirit of the collaborative effort by Dr. He and Dr. Oermann to build an AI model that more closely mimics the human brain. They hope to use that computer algorithm as a more nimble and practical proxy for exploring the brain and plumbing the depths of its mysteries. "Building a computational model can help us understand what's going on in the brain in a more quantitative, detailed way than traditional neuroscience allows us to do," says Dr. He.

To fund their research, Dr. He and Dr. Oermann have been awarded $1.2 million from the W.M. Keck Foundation, a nonprofit that supports pioneering discoveries in science, engineering, and medical research.

The researchers will start by mapping the brains of volunteers as they complete a very specific, simple task, a process that neuroscientists call one-shot learning. For example, imagine seeing an abstract black-and-white drawing, and then seeing a photo of a recognizable object, say, a helicopter, that loosely resembles it. Once your brain recognizes the helicopter in the photo, it will forever see it in the abstract image as well. "Once you have that photograph in your head, it's imprinted in your mind and forever alters the way you process the abstract image," explains Dr. He.

The pair will use neuroimaging and electrodes to map the brain activity involved in one-shot learning, and then leverage advanced AI techniques to create a computer model. "One-shot learning is something the human brain does well, but algorithms do not," explains Dr. Oermann. "We plan to use our analysis of the process to unpack the differences and make AI models more brain-like."
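
One rough machine analogue of one-shot learning, offered here purely as illustration and not as the researchers' model, is nearest-neighbor matching against a single stored example per class. The feature vectors and labels below are invented:

```python
# Toy one-shot classifier: a single stored example per class, with new
# inputs assigned to the class of the nearest stored example. All
# vectors and labels are invented for illustration.
import math

# One labeled example per class: the "one shot".
prototypes = {
    "helicopter": [0.9, 0.1, 0.8],
    "face":       [0.2, 0.9, 0.3],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    return min(prototypes, key=lambda label: distance(features, prototypes[label]))

print(classify([0.8, 0.2, 0.7]))  # prints 'helicopter'
```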

Dr. He and Dr. Oermann are uniquely well suited to this venture. She is a neuroscientist who researches how the brain creates conscious awareness, while he is a neurosurgeon and machine learning expert. The seeds of their collaboration were planted in 2019 when Dr. He received an email from John G. Golfinos, MD, chair of the Department of Neurosurgery, announcing that Dr. Oermann was a candidate for a faculty role. In addition to his training in neurosurgery, Dr. Oermann had done a postdoctoral research fellowship in machine learning at the life sciences arm of Alphabet, Google's parent company. "I looked at his website and thought, 'It would be amazing if we recruited him,'" recalls Dr. He.

Likewise, Dr. Oermann was aware of Dr. He's research before his arrival at NYU Langone. "Biyu asks some of the biggest questions about how our brains make us human," he says. "As an engineer and a neurosurgeon, I was used to focusing on very specific problems. But understanding how the brain works is what drew me to AI."

Once here, Dr. Oermann wasted no time in reaching out to Dr. He. "We immediately sensed there was a long-term research agenda that could benefit from combining our expertise," he says. "We see this as the first step in a really ambitious collaborative process."

Read more:
Can an Artificial Intelligence Model Be Built to Closely Mimic the ... - NYU Langone Health

Read More..

Vinitaly pits classic art against artificial intelligence – The Drinks Business

This year's Vinitaly has shown that human emotion can still trump tech innovation in the wine world. Louis Thomas reports from the fair.

The Veronafiere has been a home away from home this week for two artworks from Florence's Uffizi gallery, both depicting the Roman wine god Bacchus: one by Guido Reni (c. 1620), and the other by Michelangelo Merisi da Caravaggio (c. 1598).

The inclusion of fine art at Vinitaly has not gone uncriticised, with some academics suggesting that it is unacceptable to display such works in a commercial, rather than an intellectual, setting.

However, there is something praiseworthy about an event that puts cultural heritage at the centre, and given the queue to view the paintings, there is clearly an appetite for art as well as wine.

Shifting from the Baroque to something rather more futuristic, a welcome dinner held by the Comitato Grandi Cru d'Italia at the Teatro Ristori was hailed by committee president Valentina Argiolas as a celebration of a renaissance after difficult years.

Renaissance was an interesting word to choose, as the evening was themed around whether artificial intelligence (AI) could end up displacing wine professionals and mark the death of wine writing and criticism as we know it.

The topic, which has become increasingly dominant in the news, was introduced by having a recording of an AI simulation compère the event.

The organisers noted that they were fortunate to have prepared the answers from Mr. AI earlier last week, before Italy became the first Western nation to ban chatbot sensation ChatGPT over privacy concerns.

A video of Monica Larner, Italy reviewer for Robert Parker Wine Advocate, was shown in which both she and Mr. AI offered advice in a duel of expertise.

While Mr. AI's answers to questions such as "What was the 2022 vintage like in Italy?" sounded accurate, if clearly an amalgamation of different sources, Larner's, crucially, had the colour of experience.

Gabriele Gorelli MW then took to the stage to share his thoughts, remarking that while Skynet from The Terminator films is a fantasy, there is still an element of risk.

As for whether it could have been of assistance during his Master of Wine examinations, as ChatGPT recently proved to be for the Master Sommelier theory papers, Gorelli said: "I would have been glad to be helped by a reliable AI... But [in the MW course] we're not tested on knowing things, it's more holistic: why is it happening, not what is happening."

Appearing over video call, New York-based wine critic Antonio Galloni remarked: "Ready or not, AI's already here." Possibly not a shock to an audience that was by that point familiar with the unsettling robotic tones of Mr. AI.

But he then reassured the audience of wine trade and media members that there was no way AI could become a substitute for wine writers, or winemakers: "AI may be brilliant if you want to make orange juice for a supermarket... but there are no shortcuts to making great wine."

Precisely how AI can taste wine, surely a requirement for winemaking, is a more complex issue.

It can predict how a wine might turn out based on weather and cellar factors, or be used in conjunction with chemical analysis (as was the case in a recent video from Konstantin Baum MW).
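
As a hedged sketch of that "predict from weather and cellar factors" idea, a minimal least-squares fit on invented data might look like the following; real vintage-prediction models use far more variables and domain-specific features:

```python
# Minimal sketch: fit a linear model mapping weather and cellar factors
# to a quality score. Every number here is invented for illustration.
import numpy as np

# Columns: growing-season temperature (C), rainfall (mm), cellar age (months)
X = np.array([
    [18.2, 340, 12],
    [19.1, 290, 18],
    [17.5, 410, 10],
    [20.0, 260, 24],
])
y = np.array([88.0, 91.0, 85.0, 93.0])  # invented quality scores

# Least-squares fit with an intercept column appended.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_vintage = np.array([19.5, 300.0, 20.0, 1.0])  # features plus intercept term
print(f"Predicted score: {new_vintage @ coef:.1f}")
```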

What AI offers is a sterile smoothie of information blended together.

Ask an AI to write about wine, and it can competently regurgitate what is already on the internet, but it cannot offer insight from lived experience.

Ask it to create an image of Bacchus in the style of Caravaggio, and, though it may be less temperamental than the artist himself, it will pale in comparison to Caravaggio every single time.

Both wine writing and art come from a context that AI cannot replicate. Simply put: it lacks that human touch.

There's nothing to worry about, at least for now.

Read the original:
Vinitaly pits classic art against artificial intelligence - The Drinks Business

Read More..