
Neuroscience, Artificial Intelligence, and Our Fears: A Journey of … – Neuroscience News

Summary: As artificial intelligence (AI) evolves, its intersection with neuroscience stirs both anticipation and apprehension. Fears related to AI, such as loss of control, loss of privacy, and loss of human value, stem from our neural responses to unfamiliar and potentially threatening situations.

We explore how neuroscience helps us understand these fears and suggests ways to address them responsibly. This involves dispelling misconceptions about AI consciousness, establishing ethical frameworks for data privacy, and promoting AI as a collaborator rather than a competitor.

Key Facts:

Source: Neuroscience News

Fear of the unknown is a universal human experience. With the rapid advancements in artificial intelligence (AI), our understanding and perceptions of this technology's potential and its threats are evolving.

The intersection of neuroscience and AI raises both excitement and fear, feeding our imagination with dystopian narratives about sentient machines or providing us hope for a future of enhanced human cognition and medical breakthroughs.

Here, we explore the reasons behind these fears, grounded in our understanding of neuroscience, and propose paths toward constructive dialogue and responsible AI development.

The Neuroscience of Fear

Fear, at its core, is a primal emotion rooted in our survival mechanism. It serves to protect us from potential harm, creating a heightened state of alertness.

The amygdala, a small almond-shaped region deep within the brain, is instrumental in our fear response. It processes emotional information, especially related to threats, and triggers fear responses by communicating with other brain regions.

Our understanding of AI, a complex and novel concept, creates uncertainty, a key element that can trigger fear.

AI and Neuroscience: A Dialectical Relationship

AI's development and its integration into our lives is a significant change, prompting valid fears. The uncanny similarity between AI and human cognition can induce fear, partly due to the human brain's tendency to anthropomorphize non-human entities.

This cognitive bias, deeply ingrained in our neural networks, can make us perceive AI as a potential competitor or threat.

Furthermore, recent progress in AI development has been fueled by insights from neuroscience. Machine learning algorithms, particularly artificial neural networks, are loosely inspired by the human brain's structure and function.
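
As a toy illustration of that loose inspiration, here is a minimal sketch of an artificial neural network: layers of simple units that weight and combine their inputs, very roughly echoing how biological neurons integrate signals. The layer sizes and input values below are invented for demonstration only.

```python
import numpy as np

# A tiny feedforward network: inputs pass through a hidden layer of
# artificial "neurons", each computing a weighted sum followed by a
# nonlinearity, loosely analogous to how biological neurons integrate signals.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialized weights for a 3-input, 4-hidden-unit, 1-output network.
w_hidden = rng.normal(size=(3, 4))
w_output = rng.normal(size=(4, 1))

def forward(inputs):
    """One forward pass: weighted sums, then nonlinear activations."""
    hidden = sigmoid(inputs @ w_hidden)   # hidden-layer activations
    return sigmoid(hidden @ w_output)     # output between 0 and 1

example_input = np.array([[0.2, 0.7, -1.0]])  # made-up feature values
print(forward(example_input))
```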

This bidirectional relationship between AI and neuroscience, where neuroscience inspires AI design and AI, in turn, offers computational models to understand brain processes, has led to fears about AI achieving consciousness or surpassing human intelligence.

The Fear of AI

The fear of AI often boils down to the fear of loss: loss of control, loss of privacy, and loss of human value. The perception of AI as a sentient being out of human control is terrifying, a fear perpetuated by popular media and science fiction.

Moreover, AI systems' capabilities for data analysis, coupled with their lack of transparency, raise valid fears about privacy and surveillance.

Another fear is the loss of human value due to AI outperforming humans in various tasks. The impact of AI on employment and societal structure has been a significant source of concern, considering recent advancements in robotics and automation.

The fear that AI might eventually replace humans in most areas of life challenges our sense of purpose and identity.

Addressing Fears and Building Responsible AI

While these fears are valid, it is crucial to remember that AI is a tool created by humans and for humans. AI does not possess consciousness or emotions; it only mimics cognitive processes based on its programming and available data. This understanding is vital in dispelling fears of a sentient AI.

Addressing privacy concerns requires establishing robust legal and ethical frameworks for data handling and algorithmic transparency.

Furthermore, interdisciplinary dialogue between neuroscientists, AI researchers, ethicists, and policymakers is crucial in navigating the societal impacts of AI and minimizing its risks.

Emphasizing the concept of human-in-the-loop AI, where AI assists rather than replaces humans, can alleviate fears of human obsolescence. Instead of viewing AI as a competitor, we can view it as a collaborator augmenting human capabilities.

The fear of AI, deeply rooted in our neural mechanisms, reflects our uncertainties about this rapidly evolving technology. However, understanding these fears and proactively addressing them is crucial for responsible AI development and integration.

By fostering constructive dialogue, establishing ethical guidelines, and promoting the vision of AI as a collaborator, we can mitigate these fears and harness AI's potential responsibly and effectively.

Author: Neuroscience News Communications
Source: Neuroscience News
Contact: Neuroscience News Communications, Neuroscience News
Image: The image is credited to Neuroscience News

Citations:

Patiency is not a virtue: the design of intelligent systems and systems of ethics by Joanna J. Bryson. Ethics and Information Technology

Hopes and fears for intelligent machines in fiction and reality by Stephen Cave et al. Nature Machine Intelligence

What AI can and can't do (yet) for your business by Chui, M et al. McKinsey Quarterly

What is consciousness, and could machines have it? by Dehaene, S et al. Science

On seeing human: a three-factor theory of anthropomorphism by Epley, N et al. Psychological Review

Neuroscience-inspired artificial intelligence by Hassabis, D et al. Neuron

Feelings: What are they & how does the brain make them? by Joseph E. LeDoux. Daedalus

Evidence that neural information flow is reversed between object perception and object reconstruction from memory by Juan Linde-Domingo et al. Nature Communications

On the origin of synthetic life: attribution of output to a particular algorithm by Roman V Yampolskiy. Physica Scripta


How AI could transform the legal industry for the better – Marketplace

We're at that point in the development of artificial intelligence where everything feels simultaneously amazing and terrifying.

On the amazing side, GPT-4, the large language model that powers ChatGPT, took the bar exam earlier this year. While lawyers in their 50s still have nightmares about passing the bar, it was a breeze for generative AI. GPT-4 scored in the 90th percentile, the top 10% of humans.

And on the terrifying side: The lawyer who filed a legal brief written by ChatGPT in federal court that ended up being full of fake cases and citations that ChatGPT made up.

For all the doom and gloom about AI, Michael Semanchik has a pretty compelling use case for the upside.

"Figure how we can take the 16.5-year average of a wrongfully convicted person sitting in prison and bring that something closer to maybe four years," Semanchik said.

Semanchik is managing attorney at the California Innocence Project, a nonprofit that has freed 40 wrongfully incarcerated people, some serving life sentences.

"We have six lawyers," he said. "A couple of our lawyers are actually sharing offices. And we are on a shoestring budget."

Semanchik said his office gets about 800 requests for help every year, along with more than 4,000 pieces of mail his interns have to sort through. Paralegals are costly.

Which is why when a company asked him if he wanted to pilot a new AI legal assistant, he agreed to test it out.

"Dear AI, read through this case file, several hundred pages of police reports and transcripts," Semanchik said. "Tell me, is this victim witness consistent in their identification of my client?"

He said a human attorney would take about a month to do all that.

The AI? Ten minutes, and it came back with accurate results.

Semanchik said a lot of the reason it takes so long to exonerate his clients isn't necessarily the appeals process; it's this type of grunt work.

"We're getting to the point where lawyers have a duty to use AI," Semanchik said.

There are already a bevy of legal AI products specifically tailored for law firms. Legal databases like LexisNexis and Westlaw have AI-branded offerings.

Jake Heller is CEO of Casetext. His company makes CoCounsel, the AI that the California Innocence Project is using. It costs $500 per user per month for for-profit firms.

With all these stories swirling around ChatGPT hallucinations, a lot of Heller's job is reassuring interested attorneys that his AI won't make stuff up.

"We ground everything it says in real documents, real databases, real information. So there aren't chances for it to hallucinate or make up answers to questions," Heller said.
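
Casetext hasn't published CoCounsel's internals, so the following is only a generic sketch of the "grounding" pattern Heller describes: retrieve passages from real source documents first, then instruct the model to answer strictly from them. The keyword-overlap retrieval and the commented-out `ask_llm` call are invented placeholders, not Casetext's method.

```python
# Generic sketch of "grounded" question answering: the model only sees text
# retrieved from real case documents and is told to answer from that text
# alone. Retrieval here is naive keyword overlap purely for illustration,
# and ask_llm() is a hypothetical stand-in, not a real API.

def retrieve(question, documents, k=3):
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(question, passages):
    context = "\n\n".join(passages)
    return ("Answer using ONLY the excerpts below. If they do not contain "
            "the answer, say so instead of guessing.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}")

case_file = [
    "Police report, p. 12: the witness described the suspect as tall with a beard.",
    "Trial transcript, p. 340: the witness stated the suspect was clean-shaven.",
    "Evidence log: one jacket, one set of keys.",
]
question = "Is the witness consistent in describing the suspect?"
prompt = grounded_prompt(question, retrieve(question, case_file))
# answer = ask_llm(prompt)  # hypothetical model call, shown for shape only
print(prompt)
```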

Legal AI tools can summarize thousands of emails or texts for document review, recommend questions for depositions and write technical legal research memos.

All of that should translate to fewer billable hours and cheaper legal services, emphasis on "should."

"The objective of particularly the larger law firms is to keep the money train going," said Sharon Nelson, an attorney and technology consultant for law firms. "And so how do you do that? They'll figure out how."

Nelson started her career in the 1970s and said she's seen parts of this movie before.

"You know, when we got computerized, everybody said it would reduce bills," Nelson said. "Didn't work. Did not work. And I'm pretty sure this one isn't going to work either."

But when Microsoft Word came on the scene, it couldn't write a letter to opposing counsel for you or obey commands like "make the letter more aggressive."

Evan Shenkman, chief innovation officer for Fisher Phillips, a national employment law firm that was an early adopter of legal AI, said lawyers and paralegals have so far given the AI rave reviews. They mostly use it as a starting point for legal research, and then the humans take over.

"[The AI has] convinced some folks who were Luddites," Shenkman said. "They told me, 'Listen, I don't do my own legal research anymore. Now I use CoCounsel and then I give the output to the associate or paralegal and say, check into that issue further.'"

No one has accused Shenkman of trying to replace them with a robot yet. And there are just some very human lawyer things that CoCounsel isn't great at, like coming up with AI lawyer jokes.

Shenkman didn't think it could come up with one. But sure enough, after a false start:

"Why did the AI lawyer refuse to take on a new case? Because it didn't compute."

Probably best to leave the lawyer jokes to ComedyGPT, whenever that comes out.


HIMSSCast: When AI is involved in decision making, how does man … – Healthcare IT News

A lot of people today are having trouble trusting artificial intelligence not to become sentient and take over the world a la "The Terminator." For the time being, in healthcare, one of the big questions for clinicians is: Can I trust AI to help me make decisions about patients?

Today's podcast seeks to provide some answers with a prominent member of the health IT community, Dr. Blackford Middleton.

Middleton is an independent consultant, currently working on AI issues at the University of Michigan department of learning health systems. Previously he served as chief medical information officer at Stanford Health Care, CIO at Vanderbilt University Medical Center, corporate director of clinical informatics at Partners HealthCare System, assistant professor at Harvard Medical School, and chairman of the board at HIMSS.

Like what you hear? Subscribe to the podcast on Apple Podcasts, Spotify or Google Play!

Talking points:

One of the biggest considerations the industry faces with AI is trust.

How can executives at healthcare provider organizations convince clinicians and others to trust an AI system they want to implement?

What must vendors of AI systems for healthcare do to foster trust?

It's extremely important to ensure all parties involved are comfortable with collaboration between man and machine for decision making.

How do healthcare organizations foster such comfort?

What must provider organization health IT leaders know about patient-facing AI tools?

What do the next five years look like with AI in healthcare? What must CIOs and other leaders brace for?

More about this episode:

Where AI is making a difference in healthcare now

UNC Health's CIO talks generative AI work with Epic and Microsoft

Penn Medicine uses AI chatbot 'Penny' to improve cancer care

Healthcare must set guardrails around AI for transparency and safety

How ChatGPT can boost patient engagement and communication

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.


Amazon Wants to Teach Its Cloud Customers About AI, and It’s Yet … – The Motley Fool

Amazon (AMZN -0.63%) has dropped out of the spotlight this year as its big tech peers, Microsoft and Google parent Alphabet, fight an intense battle over artificial intelligence (AI).

Microsoft recently acquired a large stake in OpenAI, and it has integrated the ChatGPT chatbot into its Bing search engine and Azure cloud platform. Google has fired back with a chatbot of its own, called Bard.

But investors shouldn't ignore Amazon as a major player in this emerging industry, because Amazon Web Services (AWS) is the world's largest cloud platform, and the cloud is where most AI applications are developed and deployed.

Now, AWS plans to open a new program to support businesses in crafting their AI strategies, and it could be a major growth driver for the company going forward. Here's why it's time for investors to buy in.

Image source: Getty Images.

Artificial intelligence comes in many forms, and it's often used to ingest mountains of data in order to make predictions about future events. But generative AI is the version many consumers have become familiar with this year, and it is capable of generating new content, whether it's text, sound, images, or videos. Platforms like ChatGPT and Bard fall into that category.

OpenAI CEO Sam Altman says he's already seeing many businesses double their productivity using generative AI, and it has the potential to eventually deliver an increase of 30 times. It's because the technology can be prompted to instantly write computer code, or even generate creative works. It can also rapidly analyze thousands of pages of information and deliver answers to complex questions, which saves the user from scrolling through search engine results.

On Thursday, June 23, Amazon announced it was launching the AWS Generative AI Innovation Center with $100 million in funding. The program will connect businesses with AWS strategists, data scientists, engineers, and solution architects to help them design generative AI strategies and deploy the technology effectively and responsibly.

The program will provide no-cost workshops, engagements, and training to teach businesses how to use some of the most powerful AI tools available on AWS, like CodeWhisperer, which serves as a copilot for computer programmers to help them write software significantly faster.

Amazon says a handful of companies were already signed up, including customer engagement platform Twilio, which is using generative AI to help businesses provide deeper value to the people they serve.

AWS is one of the oldest data center customers of Nvidia, which currently produces the most powerful chips in the industry designed for AI workloads. The two companies recently signed a new deal to power AWS' new EC2 P5 infrastructure, which will allow its cloud customers to scale from 10,000 Nvidia H100 GPUs to 20,000, giving them access to supercomputer-like performance.

Overall, this could enable them to train larger AI language models than ever before, with far more precision and speed.

Here's my point: The more customers using that data center infrastructure, the more money AWS makes. Therefore, Amazon's $100 million investment in the Generative AI Innovation Center could result in multitudes of that amount coming back as revenue, as more businesses learn how to train and deploy AI. Also, the free training and access to expert engineers could make AWS an attractive on-ramp into the world of AI for many organizations, prompting them to shun other providers like Microsoft Azure and Google Cloud initially.

Estimates about the future value of the AI industry are wide-ranging, but staggering even at the low end. Research firm McKinsey & Company thinks the technology will add $13 trillion to global economic output by 2030, whereas Cathie Wood's Ark Investment Management places that figure at a whopping $200 trillion.

Therefore, it's no surprise tech giants are jostling for AI leadership, but AWS will approach the opportunity from a position of strength since it already sits atop the cloud industry.

Amazon is a very diverse business beyond the cloud; it also dominates e-commerce globally, has a fast-growing digital advertising segment, and is a leader in streaming through its Prime and Twitch platforms. The company has generated $524 billion in revenue over the last four quarters, which is far more than Microsoft and Alphabet, yet Amazon stock trades at a cheaper price-to-sales (P/S) ratio than both of them.

Company              2022 Revenue (Billions)    Price-to-Sales Ratio
Amazon               $524                       2.5
Microsoft            $208                       12.0
Alphabet (Google)    $284                       5.5

Source: Company filings.

As a result, investors can buy Amazon stock now at a very attractive valuation relative to its peers, which theoretically means it could deliver more upside in the long run. In fact, I think Amazon is on its way to a $5 trillion valuation within the next decade, and AI could supercharge its progress to reach that target even more quickly.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Microsoft, Nvidia, and Twilio. The Motley Fool has a disclosure policy.


US to launch working group on generative AI, address its risks – Reuters.com

WASHINGTON, June 22 (Reuters) - A U.S. agency will launch a public working group on generative artificial intelligence (AI) to help address the new technology's opportunities while developing guidance to confront its risks, the Commerce Department said on Thursday.

The National Institute of Standards and Technology (NIST), a nonregulatory agency that is part of the Commerce Department, said the working group will draw on technical expert volunteers from the private and public sectors.

"This new group is especially timely considering the unprecedented speed, scale and potential impact of generative AI and its potential to revolutionize many industries and society more broadly," NIST Director Laurie Locascio said.

Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and images, and whose impact has been compared to that of the internet.

President Joe Biden said this week he believes the risks of artificial intelligence to national security and the economy need to be addressed, and that he would seek expert advice.

Reporting by Rami Ayyub; editing by Jonathan Oatis

Our Standards: The Thomson Reuters Trust Principles.


What is ‘ethical AI’ and how can companies achieve it? – The Ohio State University News

In the absence of legal guidelines, companies need to establish internal processes for responsible use of AI. Oscar Wong/Moment via Getty Images

The rush to deploy powerful new generative AI technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law's glacial response to such threats has prompted demands that the companies developing these technologies implement AI ethically.

But what, exactly, does that mean?

The straightforward answer would be to align a business's operations with one or more of the dozens of sets of AI ethics principles that governments, multistakeholder groups and academics have produced. But that is easier said than done.

We and our colleagues spent two years interviewing and surveying AI ethics professionals across a range of sectors to try to understand how they sought to achieve ethical AI and what they might be missing. We learned that pursuing AI ethics on the ground is less about mapping ethical principles onto corporate actions than it is about implementing management structures and processes that enable an organization to spot and mitigate threats.

This is likely to be disappointing news for organizations looking for unambiguous guidance that avoids gray areas, and for consumers hoping for clear and protective standards. But it points to a better understanding of how companies can pursue ethical AI.

Our study, which is the basis for a forthcoming book, centered on those responsible for managing AI ethics issues at major companies that use AI. From late 2017 to early 2019, we interviewed 23 such managers. Their titles ranged from privacy officer and privacy counsel to one that was new at the time but increasingly common today: data ethics officer. Our conversations with these AI ethics managers produced four main takeaways.

First, along with its many benefits, business use of AI poses substantial risks, and the companies know it. AI ethics managers expressed concerns about privacy, manipulation, bias, opacity, inequality and labor displacement. In one well-known example, Amazon developed an AI tool to sort résumés and trained it to find candidates similar to those it had hired in the past. Male dominance in the tech industry meant that most of Amazon's employees were men. The tool accordingly learned to reject female candidates. Unable to fix the problem, Amazon ultimately had to scrap the project.

Generative AI raises additional worries about misinformation and hate speech at large scale and misappropriation of intellectual property.

Second, companies that pursue ethical AI do so largely for strategic reasons. They want to sustain trust among customers, business partners and employees. And they want to preempt, or prepare for, emerging regulations. The Facebook-Cambridge Analytica scandal, in which Cambridge Analytica used Facebook user data, shared without consent, to infer the users' psychological types and target them with manipulative political ads, showed that the unethical use of advanced analytics can eviscerate a company's reputation or even, as in the case of Cambridge Analytica itself, bring it down. The companies we spoke to wanted instead to be viewed as responsible stewards of people's data.

The challenge that AI ethics managers faced was figuring out how best to achieve ethical AI. They looked first to AI ethics principles, particularly those rooted in bioethics or human rights principles, but found them insufficient. It was not just that there are many competing sets of principles. It was that justice, fairness, beneficence, autonomy and other such principles are contested and subject to interpretation and can conflict with one another.

This led to our third takeaway: Managers needed more than high-level AI principles to decide what to do in specific situations. One AI ethics manager described trying to translate human rights principles into a set of questions that developers could ask themselves to produce more ethical AI software systems. "We stopped after 34 pages of questions," the manager said.

Fourth, professionals grappling with ethical uncertainties turned to organizational structures and procedures to arrive at judgments about what to do. Some of these were clearly inadequate. But others, while still largely in development, were more helpful, such as:

The key idea that emerged from our study is this: Companies seeking to use AI ethically should not expect to discover a simple set of principles that delivers correct answers from an all-knowing, God's-eye perspective. Instead, they should focus on the very human task of trying to make responsible decisions in a world of finite understanding and changing circumstances, even if some decisions end up being imperfect.

In the absence of explicit legal requirements, companies, like individuals, can only do their best to make themselves aware of how AI affects people and the environment and to stay abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse set of stakeholders and seriously engage with high-level ethical principles.

This simple idea changes the conversation in important ways. It encourages AI ethics professionals to focus their energies less on identifying and applying AI principles (though they remain part of the story) and more on adopting decision-making structures and processes to ensure that they consider the impacts, viewpoints and public expectations that should inform their business decisions.

Ultimately, we believe laws and regulations will need to provide substantive benchmarks for organizations to aim for. But the structures and processes of responsible decision-making are a place to start and should, over time, help to build the knowledge needed to craft protective and workable substantive legal standards.

Indeed, the emerging law and policy of AI focuses on process. New York City passed a law requiring companies to audit their AI systems for harmful bias before using these systems to make hiring decisions. Members of Congress have introduced bills that would require businesses to conduct algorithmic impact assessments before using AI for lending, employment, insurance and other such consequential decisions. These laws emphasize processes that address AI's many threats in advance.
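
Neither the New York City law nor the congressional bills prescribe a single formula, but as one illustration of the kind of number a pre-use bias audit might report, here is a minimal sketch of a selection-rate comparison (the "impact ratio" behind the common four-fifths rule of thumb), computed on entirely made-up hiring data.

```python
# Made-up illustration of one statistic a pre-use bias audit might report:
# each group's selection rate compared with the highest group's rate.
# Ratios well below 1.0 (often flagged below 0.8, the "four-fifths rule")
# suggest possible disparate impact. The data is entirely invented.

decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def selection_rates(records):
    totals, selected = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(hired)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```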

Some of the developers of generative AI have taken a very different approach. Sam Altman, the CEO of OpenAI, initially explained that, in releasing ChatGPT to the public, the company sought to give the chatbot enough exposure to the real world that you find some of the misuse cases you wouldn't have thought of so that you can build better tools. To us, that is not responsible AI. It is treating human beings as guinea pigs in a risky experiment.

Altman's call at a May 2023 Senate hearing for government regulation of AI shows greater awareness of the problem. But we believe he goes too far in shifting to government the responsibilities that the developers of generative AI must also bear. Maintaining public trust, and avoiding harm to society, will require companies more fully to face up to their responsibilities.

Dennis Hirsch, Professor of Law and Computer Science; Director, Program on Data and Governance; core faculty TDAI, The Ohio State University and Piers Norris Turner, Associate Professor of Philosophy & PPE Coordinator; Director, Center for Ethics and Human Values, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AINsight: Now Everywhere, Can AI Improve Aviation Safety? – Aviation International News

Artificial intelligence (AI) applications have created a buzz on the internet and with investors, and have the potential to transform the aviation industry. From flight data analytics to optimized route and fuel planning applications, AI, in its infancy, is making an impact on aviation, at least operationally. But can it improve safety?

Natural language AI chatbots, such as ChatGPT, according to technology publication Digital Trends, continue to dazzle the internet with AI-generated content, morphing from a novel chatbot into a piece of technology that is driving the next era of innovation. In a mixed outlook, one article states, "No tech product in recent memory has sparked as much interest, controversy, fear, and excitement."

First launched as a prototype in November 2022, ChatGPT (Openai.com) quickly grew to more than 100 million users by January 2023. Last month, traffic grew by more than 54 percent and is closing in on one billion unique users every month.

ChatGPT is a chatbot built on what is called a Large Language Model (LLM). According to Digital Trends, "These neural networks are trained on huge quantities of information from the internet for deep learning, meaning they generate altogether new responses, rather than regurgitating specific canned responses."

While most adults in the U.S. have heard about ChatGPT, only 14 percent have used it. To learn more about what all the hype is about, I asked ChatGPT a few aviation-related questions. It's a bit tongue-in-cheek, but this was an exercise to satisfy my curiosity and to see if the responses were either accurate or innovative.

For fun, I submitted a prompt to ChatGPT with the following question: How can we improve aviation safety?
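
The column doesn't say how the prompt was submitted; typing it into the ChatGPT web interface works fine, but for readers who would rather script the same exercise, here is a minimal sketch using the OpenAI Python library as it existed in mid-2023. The model name and the environment-variable key handling are assumptions, not details from the article.

```python
import os

import openai  # the 0.27-era OpenAI Python library (mid-2023 interface)

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is set in the environment

# Send the same question the column posed to ChatGPT.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; the column does not specify one
    messages=[{"role": "user", "content": "How can we improve aviation safety?"}],
)

print(response["choices"][0]["message"]["content"])
```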

In a matter of seconds, the bot generated a tidy response with an opening statement and 10 key safety measures. It acknowledged that "Aviation safety is a critical concern, and there are several ways to improve it."

Of those 10 safety measures, there were seven categories that included: enhanced training and education (pilots, maintenance personnel, and air traffic controllers), strengthening of safety regulations, implementation of advanced technology (including AI), investment in research and development (including ATC and NAS modernization), improved data sharing, comprehensive safety audits, and the fostering of a safety culture that includes improved reporting and communications.

A brief description of each safety measure was provided. Most relied on overused jargon or buzzwords such as "safety culture," "data sharing," and "best practices." (For the college student experimenting with generative AI, here is a pro tip: provide a little context for each term.)

In general, a lot of these are "white cape" safety measures that are easier to talk about than to implement. As an example, improved safety regulations often fall victim to powerful lobbying groups in Washington, D.C. A great example of a regulation influenced by lobbyists is the more proactive, science-based Part 117 duty and rest rules, which applied to all Part 121 operators except cargo airlines.

Under implementation of advanced technologies, there was some serious self-promotion of AI by stating, "Utilize automation and artificial intelligence to enhance decision-making processes, reduce human error, and provide real-time safety information."

Agreed, these are the areas where AI will shine. Recently, the Notam Alliance, a team of notam end-users, pilots, dispatchers, airlines, and other operators, ran an exercise to create a "super notam" that helps solve issues with the readability and usability of notams. The group used ChatGPT to see if notams could be understood by a machine; the results were promising: during this demonstration, AI could understand a notam more than 98 percent of the time. (More on this in an upcoming AIN article.)

So, can AI improve aviation safety? The short answer is yes, but along the way, there will be appropriate applications of AI and it will continue to create interest, controversy, fear, and excitement. According to the ChatGPT response, "It's important to note that aviation safety is an ongoing process that requires continuous improvement, vigilance, and collaboration among all stakeholders in the aviation industry."

It's also important to note that AI is not the end of humanity. For those humans with critical thinking skills, and the ability to use prior experiences to perform complex tasks (pilots, safety professionals, and writers), the future is also bright.

The opinions expressed in this column are those of the author and are not necessarily endorsed by AIN Media Group.


YouTube integrates AI-powered dubbing tool – TechCrunch

Image Credits: Olly Curtis/Future / Getty Images

YouTube is currently testing a new tool that will help creators automatically dub their videos into other languages using AI, the company announced Thursday at VidCon. YouTube teamed up with AI-powered dubbing service Aloud, which is part of Google's in-house incubator Area 120.

Earlier this year, YouTube introduced support for multi-language audio tracks, which allows creators to add dubbing to their new and existing videos, letting them reach a wider international audience. As of June 2023, creators have dubbed more than 10,000 videos in over 70 languages, the company told TechCrunch.

Previously, creators had to partner directly with third-party dubbing providers to create their audio tracks, which can be a time-consuming and expensive process. Aloud lets them dub videos at no additional cost.

Google first introduced Aloud in 2022. The AI-powered dubbing product transcribes a video for the creator, then translates and produces a dubbed version. Creators can review and edit the transcription before Aloud generates the dub.
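
Google hasn't published Aloud's internals; purely as an outline of the transcribe, review, translate, and synthesize flow described above, here is a sketch in which every helper function is a trivial stand-in rather than a real Aloud, YouTube, or Google API.

```python
# High-level sketch of the transcribe -> review -> translate -> synthesize
# flow described above. Every helper below is a trivial stand-in, NOT a real
# Aloud/YouTube/Google API; the point is only the order of the steps.

def transcribe_audio(video_path):
    return f"[transcript of {video_path}]"               # stand-in for speech-to-text

def creator_review(text):
    return text                                           # stand-in for the creator's manual edits

def translate_text(text, target_language):
    return f"[{target_language} translation of {text}]"   # stand-in for machine translation

def synthesize_speech(text, target_language):
    return text.encode("utf-8")                           # stand-in for text-to-speech audio bytes

def dub_video(video_path, target_language):
    transcript = creator_review(transcribe_audio(video_path))
    translation = creator_review(translate_text(transcript, target_language))
    return synthesize_speech(translation, target_language)

print(dub_video("my_video.mp4", "es"))
```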

YouTube is testing the tool with hundreds of creators, YouTube's VP of Creator Products, Amjad Hanif, said to the crowd yesterday. Soon the company will open the tool to all creators. Aloud is currently available in English, Spanish and Portuguese. However, more languages will be offered in the future, such as Hindi and Bahasa Indonesia, among others.

Hanif added that YouTube is working to make translated audio tracks sound like the creators voice, with more expression and lip sync. YouTube confirmed to TechCrunch that, in the future, generative AI would allow Aloud to launch features like voice preservation, better emotion transfer and lip reanimation.


Why C3.ai Stock Crashed by 10% on Friday – The Motley Fool

What happened

Though C3.ai's (AI -10.82%) leadership team made multiple optimistic pronouncements during an investor day update Thursday evening, shares of the artificial intelligence company were trading down by 10.6% as of 12:45 p.m. ET Friday.

Citing "broad interest in AI" across industries, CFO Juho Parkinnen said the company's pipeline of new contracts in the works has "basically doubled" since the beginning of its fiscal year. At the same time, management said the "sales cycle is shortening" on these opportunities, with prospects turning into contracts at a faster rate, reports TheFly.com.

So far, so good, right? But if this is the case, then why did C3.ai stock slump Friday?

According to the CFO, at this time last fiscal year, C3.ai had successfully landed 10 deals with prospective clients, whereas so far this fiscal year, the company has landed 16 deals. Problem is, as investment bank JP Morgan points out, most of these deals are for pilot projects, and not full-fledged, long-term contracts. The bank wants to see how many of these pilot projects turn into longer contracts so it can determine how accurate "the assumptions around the consumption-based pricing model" are.

Similarly, investment bank DA Davidson wrote Friday that it's pretty sure C3.ai's success in landing pilot projects is already reflected in the stock's price -- which, after all, has roughly tripled since the start of this year.

DA Davidson isn't coming right out and saying that it thinks C3.ai stock is overpriced, mind you. However, it reiterated the $30 price target that it put on the stock early this month. Given that C3.ai is trading at around $33, that's kind of the same thing, and suggests a downgrade may be imminent -- though it did maintain its neutral rating.

Given that the company has no profits and trades at more than 15 times sales, there's a lot of hype built into C3.ai's valuation right now. Discretion may be the better part of valor on this one, folks. Until C3.ai proves that it can turn its pilot deals into long-term contracts, and its long-term contracts into sustainable profits, invest in it at your own risk.

JPMorgan Chase is an advertising partner of The Ascent, a Motley Fool company. Rich Smith has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends JPMorgan Chase. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.


70% of Companies Will Use AI by 2030 — These 2 Stocks Have a … – The Motley Fool

We all benefit from artificial intelligence (AI) right now, even if we're not fully aware of it. For instance, if you found this article through an internet search engine, there's a high probability AI recommended it to you.

AI is also used by entertainment platforms like Netflix to recommend content that users are most likely to enjoy, to keep them engaged longer. And whenever you send money to another person or business online, AI is working in the background to detect potential fraud.

Those are just a few examples of the current uses of AI. According to one estimate, the technology will soon be everywhere.

Image source: Getty Images.

According to research firm McKinsey, 70% of organizations will be using AI in some capacity by 2030. Thanks to the technology's ability to boost productivity, the firm projects it will add a whopping $13 trillion to global economic output by then.

For investors, the important part here is to note that early adoption will help separate the winners from the losers. McKinsey predicts companies that integrate AI right now -- and continue developing it until 2030 -- will see a 122% increase in their free cash flow by that time. On the other hand, businesses that adopt the technology closer to the end of the decade might only see a 10% boost, and those that don't use AI at all could see a 23% decrease in free cash flow!

Why the disparity? McKinsey analysts think the benefits of AI won't be linear, but will accelerate over time instead. Therefore, early adopters could experience exponential growth in their financial results, whereas those late to the party will be stuck playing catch-up.

Here are two companies getting a head start on their competitors. Investors might want to buy shares in them right now.

Palo Alto Networks (PANW -2.09%) recently claimed it's the largest AI-based cybersecurity company, and it might be right. It certainly has one of the largest product portfolios in the entire industry, and given that its stock currently trades at an all-time high, its market valuation of $73 billion dwarfs all of its competitors, too.

The company has three areas of specialization -- cloud security, network security, and security operations -- and it's working to integrate AI across them all. Data is king when it comes to training AI, so cybersecurity companies with a large customer base fending off attacks in their ecosystems tend to have the most potential to produce accurate models.

Palo Alto management says the company's network security tools analyze 750 million data points each day and detect 1.5 million unique attacks that have never been seen before through that process. Overall, its AI models block a whopping 8.6 billion attacks on behalf of customers every single day.

Using AI-powered tools to fight cyber threats is increasingly important because, as SentinelOne recently noted, malicious actors have started using AI as well to launch sophisticated attacks.

Thanks in part to its leadership in AI, Palo Alto is the cybersecurity provider of choice for large organizations. In the recent fiscal 2023 third quarter (ended April 30), the company saw deal volume soar for its top-spending customers. Bookings among customers spending at least $10 million per year jumped 136% year over year, making it the company's fastest-growing cohort. Bookings from customers spending at least $5 million rose a more modest 62% year over year.

Palo Alto's pipeline of work continues to expand as well. While its revenue increased 24% year over year to $1.7 billion in the third quarter, its remaining performance obligations (RPOs) rose by 35%, topping an all-time high of $9.2 billion. Considering RPOs are expected to convert to revenue over time, the result bodes well.

Despite the all-time price high in the stock, Wall Street is still incredibly bullish. Of the 42 analysts who follow the stock and are tracked by The Wall Street Journal, not a single one recommends selling, and 76% of them have given the stock the highest-possible buy rating. Investors might do well to follow their lead.

Duolingo (DUOL -2.08%) is the world's largest digital language education platform, with more than 500 million downloads. The company takes learning out of the classroom and drops it into the user's pocket, aiming to create a fun, engaging, and interactive experience in the process.

Behind the scenes, Duolingo has incorporated incrementally improved versions of AI into its products for 10 years, a process that accelerated recently thanks to a partnership with ChatGPT creator OpenAI.

The chatbot powers two revolutionary features on the Duolingo platform designed to speed up the learning process. The first is called Roleplay, which enables users to converse with an AI-generated partner to improve the user's speaking skills. The second is Explain My Answer, which uses AI to offer personalized advice to users based on their mistakes in each lesson.
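
Duolingo hasn't shared the prompts behind these features, but as an invented illustration of the general idea, here is a sketch of the kind of message an "Explain My Answer"-style feature might assemble from a learner's mistake before sending it to a chat model. The wording, fields, and example data are all assumptions, not Duolingo's implementation.

```python
# Invented illustration of the kind of prompt an "Explain My Answer"-style
# feature might assemble from a learner's mistake before sending it to a
# chat model. The wording and fields are assumptions, not Duolingo's.

def explain_my_answer_prompt(exercise, correct, learner_answer, language):
    return [
        {"role": "system",
         "content": f"You are a friendly {language} tutor. Explain mistakes briefly and encouragingly."},
        {"role": "user",
         "content": (f"Exercise: {exercise}\n"
                     f"Correct answer: {correct}\n"
                     f"My answer: {learner_answer}\n"
                     "Why is my answer wrong, and what rule should I remember?")},
    ]

messages = explain_my_answer_prompt(
    exercise="Translate: 'The cat drinks milk.'",
    correct="El gato bebe leche.",
    learner_answer="El gato beber leche.",
    language="Spanish",
)
print(messages)  # these messages would then be sent to a chat-completion endpoint
```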

OpenAI's new GPT-4 technology is having an even more profound impact on Duolingo. It's helping the company's developers write new lessons at a lightning-quick pace thanks to its ability to form sentences in several different languages. That gives Duolingo's employees more time to focus on building new experiences, rather than writing monotonous, repetitive lesson content.

These are exactly the sort of productivity gains that underpin McKinsey's estimate that AI will add trillions of dollars to the global economy in the long run.

Duolingo is monetizing at a growing rate. The platform is free to use, but 4.8 million of its 72.6 million monthly active users were unlocking additional features by paying a subscription fee during the first quarter of 2023. That was an all-time high, and it drove the company's revenue to $115.7 million in the quarter, up 43% year over year and well above its prior guidance. As a result, Duolingo raised its full-year revenue forecast to $509 million from $498 million previously.

And things might get even better in the future, because Roleplay and Explain My Answer were only recently released, and users need to buy the new Duolingo Max subscription tier to have access to them. It's more expensive than the traditional Super Duolingo tier, which could result in more revenue for the company.

Overall, when it comes to language education, Duolingo is solidifying its position at the top of the industry by taking the lead on AI to help users learn more effectively. That's likely to be a tailwind for its stock price over the long term.
