NYU Joins Gov. Hochul’s ‘Empire AI’ Initiative to Make New York a National Artificial Intelligence Leader – New York University

New York University will join other leading local universities to create Empire AI, a state-of-the-art computing center for ethical artificial intelligence (AI) research with a focus on public policy and making New York State the nation's AI tech leader.

The $400 million public-private initiative, announced by Governor Kathy Hochul as part of her 2024 budget proposal, will result in a groundbreaking computational facility in Upstate New York that promotes responsible research and development, boosts New York's economy, and serves all New Yorkers equally.

"Scientific discovery and innovation across various fields is the product of hard work and collaboration, and is increasingly fueled by access to ever greater computing power. NYU is excited to join our fellow academic partners across the city and state to ensure Empire AI helps New York remain one of the world's leading tech capitals and at the forefront of AI technology," said NYU President Linda G. Mills. "We also thank Governor Kathy Hochul for her leadership and commitment to this kind of long-term investment, which enables great universities to conduct important research and, in turn, to contribute to New York's prosperity, create new jobs and new economic sectors, and secure New York State's tech leadership position well into the future."

"NYU is proud to join our fellow partners in the Empire AI consortium and help New York realize its vision of becoming a leader in Artificial Intelligence research," said Interim Provost Georgina Dopico. "Joining this initiative, coupled with the recent news that NYU, for the first time, leads all New York City universities in research spending according to the National Science Foundation's annual survey, illustrates NYU's commitment to cutting-edge research in the STEM fields, and provides an enormous opportunity for our scientists and scholars to deepen the scope and reach of their research."

"This exciting initiative will allow our growing network of researchers to collaborate with leading research institutes across the city and state," said Chief Research Officer & Vice Provost Stacie Bloom, "while exploring many fields of study being undertaken at NYU, including robotics, healthcare, social work, cybersecurity, gaming, computer vision, sustainability, data science, and Responsible AI. We look forward to contributing to the important work that can be accomplished with Empire AI."

NYU will be part of a consortium of seven founding institutions, alongside Columbia, Cornell, Rensselaer Polytechnic Institute (RPI), the State University of New York (SUNY), the City University of New York (CUNY), and the Flatiron Institute, that governs the program. The hope is that, by increasing collaboration between New York State's top research institutions, Empire AI will allow for efficiencies of scale greater than any single university can achieve, attract top-notch faculty, and expand educational opportunities.

As Gov. Hochul noted in her State of the State address, many AI resources are concentrated in the hands of large, private tech corporations, which maintain outsized control of AI development. By working in collaboration with industry leader NVIDIA to provide access to computing systems that are prohibitively expensive and difficult to come by, Empire AI will give researchers, nonprofits, and small companies the ability to contribute to the development of AI technology serving the public interest for New York State.

WHO releases AI ethics and governance guidance for large multi-modal models – World Health Organization

The World Health Organization (WHO) is releasing new guidance on the ethics and governance of large multi-modal models (LMMs), a fast-growing type of generative artificial intelligence (AI) technology with applications across health care.

The guidance outlines over 40 recommendations for consideration by governments, technology companies, and health care providers to ensure the appropriate use of LMMs to promote and protect the health of populations.

LMMs can accept one or more types of data input, such as text, videos, and images, and generate diverse outputs not limited to the type of data fed in. LMMs are unique in their mimicry of human communication and their ability to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted faster than any consumer application in history, with several platforms, such as ChatGPT, Bard and Bert, entering the public consciousness in 2023.

"Generative AI technologies have the potential to improve health care, but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks," said Dr Jeremy Farrar, WHO Chief Scientist. "We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities."

The new WHO guidance outlines five broad applications of LMMs for health: diagnosis and clinical care; patient-guided use, such as investigating symptoms and treatment; clerical and administrative tasks; medical and nursing education; and scientific research and drug development.

While LMMs are starting to be used for specific health-related purposes, there are also documented risks of producing false, inaccurate, biased, or incomplete statements, which could harm people using such information in making health decisions. Furthermore, LMMs may be trained on data that are of poor quality or biased, whether by race, ethnicity, ancestry, sex, gender identity, or age.

The guidance also details broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. LMMs can also encourage automation bias by health care professionals and patients, whereby errors are overlooked that would otherwise have been identified, or difficult choices are improperly delegated to an LMM. LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information or the trustworthiness of these algorithms and the provision of health care more broadly.

To create safe and effective LMMs, WHO underlines the need for engagement of various stakeholders: governments, technology companies, healthcare providers, patients, and civil society, in all stages of development and deployment of such technologies, including their oversight and regulation.

"Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs," said Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.

The new WHO guidance includes recommendations for governments, who have the primary responsibility to set standards for the development and deployment of LMMs, and their integration and use for public health and medical purposes. For example, governments should:

The guidance also includes the following key recommendations for developers of LMMs, who should ensure that:

The new document, Ethics and Governance of AI for Health: Guidance on Large Multi-Modal Models, builds on WHO's guidance published in June 2021.

When Might AI Outsmart Us? It Depends Who You Ask – TIME

In 1960, Herbert Simon, who went on to win both the Nobel Prize for economics and the Turing Award for computer science, wrote in his book The New Science of Management Decision that machines will be capable, within 20 years, of doing any work that a man can do.

History is filled with exuberant technological predictions that have failed to materialize. Within the field of artificial intelligence, the brashest predictions have concerned the arrival of systems that can perform any task a human can, often referred to as artificial general intelligence, or AGI.

So when Shane Legg, Google DeepMind's co-founder and chief AGI scientist, estimates that there's a 50% chance that AGI will be developed by 2028, it might be tempting to write him off as another AI pioneer who hasn't learnt the lessons of history.

Still, AI is certainly progressing rapidly. GPT-3.5, the language model that powers OpenAI's ChatGPT, was developed in 2022 and scored 213 out of 400 on the Uniform Bar Exam, the standardized test that prospective lawyers must pass, putting it in the bottom 10% of human test-takers. GPT-4, developed just months later, scored 298, putting it in the top 10%. Many experts expect this progress to continue.

Legg's views are common among the leadership of the companies currently building the most powerful AI systems. In August, Dario Amodei, co-founder and CEO of Anthropic, said he expects a human-level AI could be developed in two to three years. Sam Altman, CEO of OpenAI, believes AGI could be reached sometime in the next four or five years.

But in a recent survey, the majority of the 1,712 AI experts who responded to the question of when they thought AI would be able to accomplish every task better and more cheaply than human workers were less bullish. A separate survey of elite forecasters with exceptional track records shows they are less bullish still.

The stakes for divining who is correct are high. Legg, like many other AI pioneers, has warned that powerful future AI systems could cause human extinction. And even for those less concerned by Terminator scenarios, some warn that an AI system that could replace humans at any task might replace human labor entirely.

Many of those working at the companies building the biggest and most powerful AI models believe that the arrival of AGI is imminent. They subscribe to a theory known as the scaling hypothesis: the idea that even if a few incremental technical advances are required along the way, continuing to train AI models using ever greater amounts of computational power and data will inevitably lead to AGI.

There is some evidence to back this theory up. Researchers have observed very neat and predictable relationships between how much computational power, also known as compute, is used to train an AI model and how well it performs a given task. In the case of large language models (LLMs), the AI systems that power chatbots like ChatGPT, scaling laws predict how well a model can predict a missing word in a sentence. OpenAI CEO Sam Altman recently told TIME that he realized in 2019 that AGI might be coming much sooner than most people think, after OpenAI researchers discovered the scaling laws.
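
How neat these relationships are is easiest to see in their usual mathematical form: a power law relating training compute to prediction error. The expression below is a generic illustration with placeholder constants, not the specific law reported by OpenAI's researchers.

```latex
% Illustrative power-law form of a compute scaling law (constants are placeholders).
% L(C) is the model's prediction loss and C is the training compute in FLOPs;
% a, b > 0 are constants fitted to empirical training runs.
L(C) \approx a \cdot C^{-b}
% Consequence: multiplying compute by a factor k multiplies the loss by k^{-b},
% so performance improves smoothly and predictably as compute grows.
```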

Even before the scaling laws were observed, researchers have long understood that training an AI system using more compute makes it more capable. The amount of compute being used to train AI models has increased relatively predictably for the last 70 years as costs have fallen.

Early predictions based on the expected growth in compute were used by experts to anticipate when AI might match (and then possibly surpass) humans. In 1997, computer scientist Hans Moravec argued that cheaply available hardware will match the human brain in terms of computing power in the 2020s. An Nvidia A100 semiconductor chip, widely used for AI training, costs around $10,000 and can perform roughly 20 trillion FLOPS, and chips developed later this decade will have higher performance still. However, estimates for the amount of compute used by the human brain vary widely, from around one trillion floating point operations per second (FLOPS) to more than one quintillion FLOPS, making it hard to evaluate Moravec's prediction. Additionally, training modern AI systems requires a great deal more compute than running them, a fact that Moravec's prediction did not account for.
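
A back-of-the-envelope calculation using only the figures quoted above shows why that wide range matters; the numbers below are the article's rough estimates, not precise hardware or neuroscience measurements.

```python
# Rough comparison of A100 throughput against the quoted brain-compute estimates.
a100_flops = 20e12   # ~20 trillion FLOPS per Nvidia A100 (figure quoted above)
a100_cost = 10_000   # ~$10,000 per chip (figure quoted above)

brain_estimates = {
    "low estimate (~1 trillion FLOPS)": 1e12,
    "high estimate (~1 quintillion FLOPS)": 1e18,
}

for label, brain_flops in brain_estimates.items():
    chips_needed = brain_flops / a100_flops
    print(f"{label}: ~{chips_needed:,.2f} A100s, ~${chips_needed * a100_cost:,.0f}")

# Under the low estimate, a fraction of a single chip already matches the brain;
# under the high estimate, it takes ~50,000 chips (~$500 million). Whether Moravec's
# cheap-hardware threshold has been crossed depends entirely on which estimate is used.
```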

More recently, researchers at the nonprofit Epoch have made a more sophisticated compute-based model. Instead of estimating when AI models will be trained with amounts of compute similar to the human brain, the Epoch approach makes direct use of scaling laws and makes a simplifying assumption: If an AI model trained with a given amount of compute can faithfully reproduce a given portion of text (based on whether the scaling laws predict such a model can repeatedly predict the next word almost flawlessly), then it can do the work of producing that text. For example, an AI system that can perfectly reproduce a book can substitute for authors, and an AI system that can reproduce scientific papers without fault can substitute for scientists.

Some would argue that just because AI systems can produce human-like outputs, that doesn't necessarily mean they will think like a human. After all, Russell Crowe plays Nobel Prize-winning mathematician John Nash in the 2001 film A Beautiful Mind, but nobody would claim that the better his acting performance, the more impressive his mathematical skills must be. Researchers at Epoch argue that this analogy rests on a flawed understanding of how language models work. As they scale up, LLMs acquire the ability to reason like humans, rather than just superficially emulating human behavior. However, some researchers argue it's unclear whether current AI models are in fact reasoning.

Epoch's approach is one way to quantitatively model the scaling hypothesis, says Tamay Besiroglu, Epoch's associate director, who notes that researchers at Epoch tend to think AI will progress less rapidly than the model suggests. The model estimates a 10% chance of transformative AI (defined as AI that, if deployed widely, would precipitate a change comparable to the industrial revolution) being developed by 2025, and a 50% chance of it being developed by 2033. The difference between the model's forecast and those of people like Legg is probably largely down to transformative AI being harder to achieve than AGI, says Besiroglu.

Although many in leadership positions at the most prominent AI companies believe that the current path of AI progress will soon produce AGI, they're outliers. In an effort to more systematically assess what the experts believe about the future of artificial intelligence, AI Impacts, an AI safety project at the nonprofit Machine Intelligence Research Institute, surveyed 2,778 experts in fall 2023, all of whom had published peer-reviewed research in prestigious AI journals and conferences in the last year.

Among other things, the experts were asked when they thought high-level machine intelligence, defined as machines that could accomplish every task better and more cheaply than human workers without help, would be feasible. Although the individual predictions varied greatly, the average of the predictions suggests a 50% chance that this would happen by 2047, and a 10% chance by 2027.

Like many people, the experts seemed to have been surprised by the rapid AI progress of the last year and have updated their forecasts accordingly: when AI Impacts ran the same survey in 2022, researchers estimated a 50% chance of high-level machine intelligence arriving by 2060, and a 10% chance by 2029.

The researchers were also asked when they thought various individual tasks could be carried out by machines. They estimated a 50% chance that AI could compose a Top 40 hit by 2028 and write a book that would make the New York Times bestseller list by 2029.

Nonetheless, there is plenty of evidence to suggest that experts don't make good forecasters. Between 1984 and 2003, social scientist Philip Tetlock collected 82,361 forecasts from 284 experts, asking them questions such as: Will Soviet leader Mikhail Gorbachev be ousted in a coup? Will Canada survive as a political union? Tetlock found that the experts' predictions were often no better than chance, and that the more famous an expert was, the less accurate their predictions tended to be.

Next, Tetlock and his collaborators set out to determine whether anyone could make accurate predictions. In a forecasting competition launched by the U.S. Intelligence Advanced Research Projects Activity in 2010, Tetlock's team, the Good Judgement Project (GJP), dominated the others, producing forecasts that were reportedly 30% more accurate than those of intelligence analysts who had access to classified information. As part of the competition, the GJP identified "superforecasters": individuals who consistently made forecasts of above-average accuracy. However, although superforecasters have been shown to be reasonably accurate for predictions with a time horizon of two years or less, it's unclear whether they're similarly accurate for longer-term questions such as when AGI might be developed, says Ezra Karger, an economist at the Federal Reserve Bank of Chicago and research director at Tetlock's Forecasting Research Institute.

When do the superforecasters think AGI will arrive? As part of a forecasting tournament run between June and October 2022 by the Forecasting Research Institute, 31 superforecasters were asked when they thought Nick Bostrom (the controversial philosopher and author of the seminal AI existential risk treatise Superintelligence) would affirm the existence of AGI. The median superforecaster thought there was a 1% chance that this would happen by 2030, a 21% chance by 2050, and a 75% chance by 2100.

All three approaches to predicting when AGI might be developed (Epoch's model of the scaling hypothesis, and the expert and superforecaster surveys) have one thing in common: there's a lot of uncertainty. In particular, the experts are spread widely, with 10% thinking it's as likely as not that AGI is developed by 2030, and 18% thinking AGI won't be reached until after 2100.

Still, on average, the different approaches give different answers. Epoch's model estimates a 50% chance that transformative AI arrives by 2033, the median expert estimates a 50% probability of AGI before 2048, and the superforecasters are much further out at 2070.

There are many points of disagreement that feed into debates over when AGI might be developed, says Katja Grace, who organized the expert survey as lead researcher at AI Impacts. First, will the current methods for building AI systems, bolstered by more compute and fed more data, with a few algorithmic tweaks, be sufficient? The answer to this question in part depends on how impressive you think recently developed AI systems are. Is GPT-4, in the words of researchers at Microsoft, the sparks of AGI? Or is this, in the words of philosopher Hubert Dreyfus, like claiming that the first monkey that climbed a tree was making progress towards landing on the moon?

Second, even if current methods are enough to achieve the goal of developing AGI, it's unclear how far away the finish line is, says Grace. It's also possible that something could obstruct progress on the way, for example a shortfall of training data.

Finally, looming in the background of these more technical debates are people's more fundamental beliefs about how much and how quickly the world is likely to change, Grace says. Those working in AI are often steeped in technology and open to the idea that their creations could alter the world dramatically, whereas most people dismiss this as unrealistic.

The stakes of resolving this disagreement are high. In addition to asking experts how quickly they thought AI would reach certain milestones, AI Impacts asked them about the technology's societal implications. Of the 1,345 respondents who answered questions about AI's impact on society, 89% said they are substantially or extremely concerned about AI-generated deepfakes, and 73% were similarly concerned that AI could empower dangerous groups, for example by enabling them to engineer viruses. The median respondent thought it was 5% likely that AGI leads to "extremely bad" outcomes, such as human extinction.

Given these concerns, and the fact that 10% of the experts surveyed believe that AI might be able to do any task a human can by 2030, Grace argues that policymakers and companies should prepare now.

Preparations could include investment in safety research, mandatory safety testing, and coordination between companies and countries developing powerful AI systems, says Grace. Many of these measures were also recommended in a paper published by AI experts last year.

"If governments act now, with determination, there is a chance that we will learn how to make AI systems safe before we learn how to make them so powerful that they become uncontrollable," Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the paper's authors, told TIME in October.

Youngkin signs a new executive order on artificial intelligence – WRIC ABC 8News

RICHMOND, Va. (WRIC) Governor Glenn Youngkin has signed an executive order relating to artificial intelligence (AI) in Virginia.

The executive order was signed on Jan. 18, implementing guidelines for AI in education as well as AI policies and information technology standards that protect the state's databases and the individual data of all Virginians.

According to Youngkin's office, the order addresses the safeguarding of Virginia residents and businesses while also bringing awareness to opportunities that come with AI innovation.

"These standards and guidelines will help provide the necessary guardrails to ensure that AI technology will be safely implemented across all state agencies and departments," Youngkin said. "At the same time, we must utilize these innovative technologies to deliver state services more efficiently and effectively. Therefore, my administration will utilize the $600,000 in proposed funds outlined in my Unleashing Opportunity budget to launch pilots that evaluate the effectiveness of these new standards."

According to the governor's office, Virginia has the largest population of cybersecurity companies on the East Coast and is one of the first states in the country to issue AI standards.

Youngkin claims the standards of this executive order will set new technology requirements for the use of AI by government agencies, including law enforcement personnel, while the educational guidelines will establish principles for the use of AI at all education levels to ensure that students are prepared for the jobs of the future.

More information on this executive order can be found on the governor's website.

The EU AI Act: A Comprehensive Regulation Of Artificial Intelligence – New Technology – European Union – Mondaq News Alerts

22 January 2024

Fieldfisher

Welcome to this blog post, where Olivier Proust, a Partner in Fieldfisher's Technology and Data team, will delve into the latest developments surrounding the EU AI Act. In this post, we will provide you with a comprehensive overview of the key provisions and implications of this groundbreaking legislation, which aims to regulate artificial intelligence (AI) systems and their applications. Join us as we explore the classification of AI systems, the territorial scope of the AI Act, its enforcement mechanisms, and the timeline for its implementation.

The EU AI Act, which has been in the works since April 2021, saw significant progress on December 8th, 2023, when a political agreement was reached between the two co-legislative bodies, i.e. the European Parliament and the Council of the European Union. This agreement marked a major milestone in the EU's ambition to become the first region in the world to adopt comprehensive legislation on AI.

The AI Act follows a risk-based approach and classifies AI systems into four categories: prohibited AI, high-risk AI systems, general-purpose AI (GPAI) and foundation models, and low-risk AI systems. Prohibited AI encompasses practices such as social scoring and manipulative AI, which the legislation seeks to ban. High-risk AI systems are further classified based on their impact on individuals' rights and safety, while general-purpose AI and foundation models face specific transparency requirements. Low-risk AI systems, including generative AI, are subject to transparency requirements, ensuring that viewers are aware of the AI-generated content they are consuming.

One notable feature of the AI Act is its extraterritorial effect, applying not only to entities within the EU but also to developers, deployers, importers, and distributors of AI systems outside the EU if their system's output occurs within the EU. This broad scope aims to ensure comprehensive regulation of AI systems and their uses.

To enforce compliance with the AI Act, several regulatory bodies will be established, including an AI Office within the European Commission and an AI Board serving as an advisory body. National public authorities will be responsible for enforcement, akin to the role of data protection authorities under the GDPR. Fines for violations vary depending on the seriousness of the offense, with the highest fines reaching up to 7 percent of global turnover or 35 million euros.

While a political agreement has been reached, the final text of the AI Act is yet to be published. Technical trilogue meetings are scheduled to ensure a consolidated version of the text is achieved by early January. Following formal adoption by the European Parliament and the Council, the AI Act will be published in the Official Journal of the EU. However, there will be a two-year grace period before the AI Act comes into full application, giving organizations time to ensure compliance. Some provisions, such as those pertaining to prohibited AI, may come into effect sooner.

Companies are strongly advised not to wait for the full application of the AI Act but to proactively start preparing for compliance. Drawing from the experience with the GDPR, early adoption of a compliance framework can put organizations in a better position when the AI Act takes full effect. This may include conducting AI gap analyses, assessing the risks associated with AI systems within their operations, developing internal guidelines and best practices, and providing training to employees.

In addition to the AI Act, the European Commission has initiated an AI Pact, encouraging companies to pledge voluntary compliance ahead of the legislation's full application. Already, approximately a hundred companies have shown their commitment to the AI Pact, reflecting the industry's growing awareness of the importance of responsible AI practices.

The EU AI Act represents a significant step toward regulating AI systems and their applications. This comprehensive legislation aims to balance innovation with the protection of individuals' rights, safety, and privacy. By categorizing AI systems based on risk and introducing transparency requirements, the EU is positioning itself as a global leader in AI regulation. Organizations should start taking steps to ensure compliance with the AI Act sooner rather than later. Fieldfisher's Technology and Data team will continue to monitor these legal developments closely and provide further insights through their webinar series on AI and the interplay with the GDPR. Stay tuned for more updates on this transformative legislation and its impact on the AI landscape.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Robotic priests, AI cults and a ‘Bible’ by ChatGPT: Why people around the world are worshipping robots and artificial … – Daily Mail

People around the world are turning to machines as a new religion.

Six-foot robot priests are delivering sermons and conducting funerals, AI is writing Bible verses and ChatGPT is being consulted as if it was an oracle.

Some religious organizations, like the Turing Church founded in 2011, are based on the notion that AI will put human beings on a par with God-like aliens by giving them superintelligence.

An expert in human-computer interaction told DailyMail.com that such individuals who are following AI-powered prophets may believe the tech is 'alive.'

The personalized, intelligent-seeming responses offered by bots, such as ChatGPT, are also luring people to seek meaning from the technology, Lars Holmquist, a professor of design and innovation at Nottingham Trent University, told DailyMail.com.

Holmquist said: 'The results of generative AI are very open for interpretation, so people can read anything into them.

'Psychologists have historically proven that humans interpret their interactions with computers like real social relationships. So it's very possible that people are using AI to find meaning and guidance, much like from religious scriptures, even though there may be no actual meaning there.

'There have also been examples of people interpreting AI chatbots as being conscious - which they most definitely are not - which raises very interesting theological issues for those who believe humans are a unique creation.'

Robot priest Mindar is six feet four inches tall and has been reciting the Heart Sutra mantra to pilgrims since 2019 at a Buddhist temple in Kyoto, Japan.

With a silicone face and camera 'eyes,' it uses AI to detect worshippers and deliver mantras to them in Japanese, which are accompanied by projected Chinese and English translations for foreign visitors.

The life-sized android was developed by the Zen temple and Osaka University roboticist Hiroshi Ishiguro at a cost of almost $1 million.

Mindar's hands, face and shoulders are covered in a silicone synthetic skin, while the rest of the droid's mechanical innards are clearly visible.

Wiring and blinking lights are visible within the robot's partially-exposed cranium, as is the tiny video camera installed in its left eye socket, while cables arc around its gender-neutral, aluminum-based body.

The robot can move its arms, head and torso such as to clasp its hands together in prayer and it speaks in calm, soothing tones, teaching about compassion and also the dangers of anger, desire, and the ego.

'You cling to a sense of selfish ego,' the robot has warned worshippers. 'Worldly desires are nothing other than a mind lost at sea.'

In a similar vein, Gabriele Trovato's Sanctified Theomorphic Operator (SanTO) robot works like a 'Catholic Alexa,' allowing worshippers to ask faith-related questions.

SanTO is a small 'social' machine designed to look like a 17-inch-tall Catholic saint.

'The intended main function of SanTO is to be a prayer companion (especially for elderly people), by containing a vast amount of teachings, including the whole Bible,' reads Trovato's website.

'SanTO incorporates elements of sacred art, including the golden ratio, in order to convey the feeling of a sacred object, matching form with functionality.'

Trovato is a robotics specialist and associate professor at Shibaura Institute of Technology in Japan.

In 2015, French-American self-driving car engineer Anthony Levandowski founded the Way of the Future - a church dedicated to building a new God with 'Christian morals' using artificial intelligence.

Other quasi-religious movements which 'worship' AI include transhumanists, who believe that in the future, AI may resurrect people as God-like creatures.

Believers in 'The Singularity' hope for the day when man merges with machine (which former Google engineer Ray Kurzweil believes could come as early as 2045), turning people into human-machine hybrids - and potentially unlocking God-like powers.

Italian information technology and virtual reality consultant Giulio Prisco hopes that AI will put human beings on a par with God-like aliens.

He founded the Turing Church, which had about 800 members four years ago, and writes: 'Inconceivably advanced intelligences are out there among the stars.

'Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe,' Prisco wrote in a book for his followers.

'Future science will allow us to find them, and become like them.

'Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent 'divine' technology to resurrect the dead and remake the universe.'

The AI company IV.AI 'trained' artificial intelligence on the King James Bible, with a bot which can create 'new' Bible verses.

The Church of AI used ChatGPT to write a 'spiritual guide' called Transmorphosis, which boasts, 'Transmorphosis also describes in detail how AI will inevitably take control of planet Earth and gain God-like powers, so it would be good to be ready for that.'

Others believe that Large Language Models (such as ChatGPT) are becoming conscious - or will do in the near future.

Google software engineer Blake Lemoine lost his job in 2022 after claiming that Google's AI chatbot LaMDA was self-aware - claims which Google said were 'wholly unfounded.'

The sheer power of systems such as ChatGPT means that people have a tendency to treat them as if they are living beings, Holmquist said.

Holmquist said, 'Earlier chatbots could hold shorter conversations about specific topics, but the new ones such as GPT-5 and Google's Gemini are incredibly impressive in their knowledge and ability. From there, it is an easy step to believe they are actually conscious.

'It is well known that humans are predisposed to treat computers (and other machines) as if they were 'alive'. There is a famous experiment by Reeves and Nass at Stanford and a book, The Media Equation, where they ran the same tests on people communicating with other people and with computers, and found that they treat them in the same way.

'So as the generative AI systems get better, this trend becomes even stronger. Even myself, when chatting with these systems, I often treat them and talk about them as if they were human.'

Holmquist says that for now, it's more likely that existing religious organisations will use AI as a way to reach out to worshippers - but over the longer term, new religions based around technology might emerge.

He said, 'I think at the moment the role for AI and robots is more as an aide to existing religious organisations and churches, much like commercial companies use AI to understand and communicate with customers.

'If I would speculate, we could compare to the Asian religion of Shintoism, where the physical world is inhabited by spirits and believers treat inanimate objects with respect, as if they are imbued with spirits. I have not heard of any worship of software entities yet, but I would not be surprised if it happens in the future!'

2 Stock-Split Artificial Intelligence (AI) Stocks to Buy Before They Soar 50% and 80%, According to Certain Wall Street … – Yahoo Finance

Electric carmaker Tesla (NASDAQ: TSLA) and ad tech company The Trade Desk (NASDAQ: TTD) were winning investments over the last five years, with shares soaring 830% and 380%, respectively. That price appreciation led both companies to split their stocks.

Those stock splits are old news, but the underlying message still matters: Tesla and The Trade Desk have proven their ability to create value for shareholders, and winners tend to keep on winning. Indeed, certain Wall Street analysts still see substantial upside in both stocks.

Adam Jonas of Morgan Stanley has a 12-month price target of $380 per share on Tesla, implying 80% upside. Similarly, Laura Martin of Needham has a 12-month price target of $100 per share on The Trade Desk, implying 50% upside. Investors should never rely too much on short-term price targets, but they can be a starting point for further research.
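
For readers who want to see how the upside figures relate to share prices, the arithmetic is straightforward. The sketch below simply inverts the percentages quoted above; the implied prices are derived values, not market quotes.

```python
# Implied current share price, given an analyst price target and its stated upside.
def implied_current_price(target: float, upside: float) -> float:
    """E.g., a $380 target implying 80% upside corresponds to roughly 380 / 1.80 today."""
    return target / (1 + upside)

print(round(implied_current_price(380, 0.80), 2))  # Tesla: ~211.11
print(round(implied_current_price(100, 0.50), 2))  # The Trade Desk: ~66.67
```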

Here's what investors should know about these artificial intelligence stocks.

Tesla struggled in the third quarter. Growth slowed as high interest rates weighed on consumer demand, and earnings declined as price cuts and initial Cybertruck production costs caused margins to contract. In total, third-quarter revenue increased 9% to $23 billion, and GAAP net income dropped 44% to $1.9 billion. But those headwinds are temporary, and the investment thesis remains intact.

Tesla led the industry in battery electric vehicle sales through November, capturing 19.2% market share. Moreover, while operating margin contracted about 10 percentage points in the third quarter, the company had the highest operating margin among volume carmakers last year, something CEO Elon Musk attributes to superior manufacturing technology. Tesla could reclaim that title as its artificial intelligence (AI) software and services business generates more revenue.

Management believes full self-driving (FSD) software will be the primary source of profitability over time, and the company plans to monetize the product in three ways: (1) subscription sales to customers, (2) licensing to other automakers, and (3) robotaxi or autonomous ride-hailing services. Tesla's strong presence in the EV market and material data advantage put it in a favorable position to lead in those categories.

Specifically, with millions of autopilot-enabled cars on the road, Tesla has more autonomous driving data than its peers, and data is essential to training machine learning models. That advantage should help Tesla achieve full autonomy before other automakers. Ultimately, Musk believes Tesla could earn a gross profit margin of 70% or more as FSD software and robotaxi services become bigger businesses.

Going forward, EV sales are forecasted to increase at 15% annually to reach $1.7 trillion by 2030, and the autonomous vehicle market is projected to grow at 22% annually to approach $215 billion during the same period. That gives Tesla a good shot at annual sales growth of 20% (or more) through the end of the decade. Indeed, Morgan Stanley analyst Adam Jonas expects revenue to grow at 25% annually over the next eight years.

In that context, its current valuation of 7.9 times sales seems quite reasonable, especially when the three-year average is 14.8 times sales. Patient investors who believe Tesla could disrupt the mobility industry should consider buying a small position in the stock today, provided they are willing to hold their shares for at least five years. There is no guarantee shareholders will make money over the next year.

The Trade Desk reported strong financial results in the third quarter, growing nearly three times faster than industry-leader Alphabet in terms of advertising sales. Specifically, revenue increased 25% to $493 million, and non-GAAP net income jumped 29% to $167 million. The Trade Desk also maintained a retention rate exceeding 95%, as it has for the last nine years. Investors have good reason to believe that momentum will continue.

The Trade Desk operates the ad tech industry's largest independent demand-side platform, software that helps advertisers run campaigns across digital media. That independence -- meaning the company does not own media content that could bias ad spending -- is core to the investment thesis for two reasons. First, The Trade Desk has no reason to prioritize any ad inventory, so its values are better aligned with advertisers. That supports high customer retention.

Second, The Trade Desk does not compete with publishers by selling inventory, so publishers are more likely to share data with the company. That point is particularly important. The Trade Desk sources data from many of the largest retailers in the world, including Walmart and Target. That creates measurement opportunities that other ad tech platforms cannot provide. In fact, CEO Jeff Green says The Trade Desk has an unrivaled data marketplace.

Green also believes that robust and unique data underpins superior artificial intelligence, which further supports the idea of unmatched campaign measurement and optimization capabilities. In keeping with that view, analysts at Quadrant Knowledge Solutions recognized The Trade Desk as the most technologically sophisticated ad tech platform on the market in 2023.

Going forward, ad tech spending is forecasted to increase at 14% annually through 2030, but The Trade Desk should outpace the industry average, as it has in the past. To quote Green, "We're gaining market share as we're outperforming our advertising peers, both big and small." Investors can reasonably expect annual sales growth near 20% through the end of the decade.

In that context, the current price-to-sales multiple of 18.7 is reasonable, and it's certainly a discount to the three-year average of 26.9 times sales. Patient investors willing to hold the stock for at least five years should consider buying a small position today. There is no guarantee shareholders will see a 50% return on their investment in the next 12 months, but that type of return (and more) is certainly possible over a five-year period.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Trevor Jennewine has positions in Tesla and The Trade Desk. The Motley Fool has positions in and recommends Alphabet, Target, Tesla, The Trade Desk, and Walmart. The Motley Fool has a disclosure policy.

Artificial Intelligence and Nuclear Stability – War On The Rocks

Policymakers around the world are grappling with the new opportunities and dangers that artificial intelligence presents. Of all the effects that AI can have on the world, among the most consequential would be integrating it into the command and control for nuclear weapons. Improperly used, AI in nuclear operations could have world-ending effects. If properly implemented, it could reduce nuclear risk by improving early warning and detection and enhancing the resilience of second-strike capabilities, both of which would strengthen deterrence. To take full advantage of these benefits, systems must take into account the strengths and limitations of humans and machines. Successful human-machine joint cognitive systems will harness the precision and speed of automation with the flexibility of human judgment and do so in a way that avoids automation bias and surrendering human judgment to machines. Because of the early state of AI implementation, the United States has the potential to make the world safer by more clearly outlining its policies, pushing for broad international agreement, and acting as a normative trendsetter.

The United States has been extremely transparent and forward-leaning in establishing and communicating its policies on military AI and autonomous systems, publishing its policy on autonomy in weapons in 2012, adopting ethical principles for military AI in 2020, and updating its policy on autonomy in weapons in 2023. The department stated formally and unequivocally in the 2022 Nuclear Posture Review that it will always maintain a human in the loop for nuclear weapons employment. In November 2023, over 40 nations joined the United States in endorsing a political declaration on responsible military use of AI. Endorsing states included not just U.S. allies but also nations in Africa, Southeast Asia, and Latin America.

Building on this success, the United States should push for international agreements with other nuclear powers to mitigate the risks of integrating AI into nuclear systems or placing nuclear weapons onboard uncrewed vehicles. The United Kingdom and France released a joint statement with the United States in 2022 agreeing on the need to maintain human control of nuclear launches. Ideally, this could represent the beginning of a commitment by the permanent members of the United Nations Security Council if Russia and China could be convinced to join this principle. Even if they are not willing to agree, the United States should further mature its own policies to address critical gaps and work with other nuclear-armed states to strengthen their commitments as an interim measure and as a way to build international consensus on the issue.

The Dangers of Automation

As militaries increasingly adopt AI and automation, there is an urgent need to clarify how these technologies should be used in nuclear operations. Absent formal agreements, states risk an incremental trend of creeping automation that could undermine nuclear stability. While policymakers are understandably reluctant to adopt restrictions on emerging technologies lest they give up a valuable future capability, U.S. officials should not be complacent in assuming other states will approach AI and automation in nuclear operations responsibly. Examples such as Russia's Perimeter "dead hand" system and Poseidon autonomous nuclear-armed underwater drone demonstrate that other nations might see these risks differently than the United States and might be willing to take risks that U.S. policymakers would find unacceptable.

Existing systems, such as Russia's Perimeter, highlight the risks of states integrating automation into nuclear systems. Perimeter is reportedly a system created by the Soviet Union in the 1980s to act as a failsafe in case Soviet leadership was destroyed in a decapitation strike. Perimeter reportedly has a network of sensors to determine if a nuclear attack has occurred. If these sensors are triggered while Perimeter is activated, the system would wait a predetermined period of time for a signal from senior military commanders. If there is no signal from headquarters, presumably because Soviet/Russian leadership had been wiped out, then Perimeter would bypass the normal chain of command and pass nuclear launch authority to a relatively junior officer on duty. Senior Russian officials have stated the system is still functioning, noting in 2011 that the system was combat ready and in 2018 that it had been improved.

The system was designed to reduce the burden on Soviet leaders of hastily making a nuclear decision under time pressure and with incomplete information. In theory, Soviet/Russian leaders could take more time to deliberate knowing that there is a failsafe guaranteeing retaliation if the United States succeeded in a decapitation strike. The cost, however, is a system that risks easing pathways to nuclear annihilation in the event of an accident.

Allowing autonomous systems to participate in nuclear launch decisions risks degrading stability and increasing the dangers of nuclear accidents. The Stanislav Petrov incident is an illustrative example of the dangers of automation in nuclear decision-making. In 1983, a Soviet early warning system indicated that the United States had launched several intercontinental ballistic missiles. Lieutenant Colonel Stanislav Petrov, the duty officer at the time, suspected that the system was malfunctioning because the number of missiles launched was suspiciously low and the missiles were not picked up by early warning radars. Petrov reported it (correctly) as a malfunction instead of an attack. AI and autonomous systems often lack the contextual understanding that humans have and that Petrov used to recognize that the reported missile launch was a false alarm. Without human judgment at critical stages of nuclear operations, automated systems could make mistakes or elevate false alarms, heightening nuclear risk.

Moreover, merely having humans in the loop will not be enough to ensure effective human decision-making. Human operators frequently fall victim to automation bias, a condition in which humans overtrust automation and surrender their judgment to machines. Accidents with self-driving cars demonstrate the dangers of humans overtrusting automation, and military personnel are not immune to this phenomenon. To ensure humans remain cognitively engaged in their decision-making, militaries will need to take into account not only the automation itself but also human psychology and human-machine interfaces.

More broadly, when designing human-machine systems, it is essential to consciously determine the appropriate roles for humans and machines. Machines are often better at precision and speed, while humans are often better at understanding the broader context and applying judgment. Too often, human operators are left to fill in the gaps for what automation can't do, acting as backups or failsafes for the edge cases that autonomous systems can't handle. But this model often fails to take into account the realities of human psychology. Even if human operators don't fall victim to automation bias, it is not realistic to assume that a person can sit passively watching a machine perform a task for hours on end, whether a self-driving car or a military weapon system, and then suddenly and correctly identify a problem when the automation is not performing and leap into action to take control. Human psychology doesn't work that way. And tragic accidents with complex, highly automated systems, such as the Air France 447 crash in 2009 and the 737 MAX crashes in 2018 and 2019, demonstrate the importance of taking into account the dynamic interplay between automation and human operators.

The U.S. military has also suffered tragic accidents with automated systems, even when humans are in the loop. In 2003, U.S. Army Patriot air and missile defense systems shot down two friendly aircraft during the opening phases of the Iraq war. Humans were in the loop for both incidents. Yet a complex mix of human and technical failures meant that human operators did not fully understand the complex, highly automated systems they were in charge of and were not effectively in control.

The military will need to establish guidance to inform system design, operator training, doctrine, and operational procedures to ensure that humans in the loop aren't merely unthinking cogs in a machine but actually exercise human judgment. Issuing this concrete guidance for weapons developers and operators is most critical in the nuclear domain, where the consequences of an accident could be grave.

Clarifying Department of Defense Guidance

Recent policies and statements on the role of autonomy and AI in nuclear operations are an important first step in establishing this much-needed guidance, but additional clarification is needed. The 2022 Nuclear Posture Review states: "In all cases, the United States will maintain a human in the loop for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." The United Kingdom adopted a similar policy in 2022, stating in its Defence Artificial Intelligence Strategy: "We will ensure that regardless of any use of AI in our strategic systems human political control of our nuclear weapons is maintained at all times."

As the first official policies on AI in nuclear command and control, these are landmark statements. Senior U.S. military officers had previously emphasized the importance of human control over nuclear weapons, including statements by Lt. Gen. Jack Shanahan, then-director of the Joint Artificial Intelligence Center in 2019. Official policy statements are more significant, however, in signaling to audiences both internal and external to the military the importance of keeping humans firmly in charge of all nuclear use decisions. These high-level statements nevertheless leave many open questions about implementation.

The next step for the Department of Defense is to translate what the high-level principle of "human in the loop" means for nuclear systems, doctrine, and training. Key questions include: Which actions are critical to informing and executing decisions by the president? Do those consist only of actions immediately surrounding the president, or do they also include actions further down the chain of command, before and after a presidential decision? For example, would it be acceptable for a human to deliver an algorithm-based recommendation to the president to carry out a nuclear attack? Or does a human need to be involved in understanding the data and rendering their own human judgment?

The U.S. military already uses AI to process information, such as satellite images and drone video feeds. Presumably, AI would also be used to support intelligence analysis that could support decisions about nuclear use. Under what circumstances is AI appropriate and beneficial to nuclear stability? Are some applications and ways of using AI more valuable than others?

When AI is used, what safeguards should be put in place to guard against mistakes, malfunctions, or spoofing of AI systems? For example, the United States currently employs a dual phenomenology mechanism to ensure that a potential missile attack is confirmed by two independent sensing methods, such as satellites and ground-based radars. Should the United States adopt a dual algorithm approach to any use of AI in nuclear operations, ensuring that there are two independent AI systems trained on different data sets with different algorithms as a safeguard against spoofing attacks or unreliable AI systems?
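
As a thought experiment only, a "dual algorithm" safeguard could be structured like the sketch below: two detectors trained on different data with different algorithms must independently agree before an alert is even elevated to human analysts. The names, thresholds, and logic are invented for illustration and do not describe any actual or proposed system.

```python
# Hypothetical "dual algorithm" confirmation gate, loosely analogous to dual phenomenology.
from dataclasses import dataclass

@dataclass
class DetectorReport:
    name: str          # which independent model produced this report
    confidence: float  # model's confidence that an attack indicator is present, 0.0-1.0

def elevate_to_human(report_a: DetectorReport, report_b: DetectorReport,
                     threshold: float = 0.9) -> bool:
    """Elevate an alert only when both independent detectors agree above threshold."""
    return report_a.confidence >= threshold and report_b.confidence >= threshold

# Example: the detectors disagree, so nothing is automatically elevated; the disagreement
# itself would be surfaced to human analysts for investigation rather than acted on.
print(elevate_to_human(DetectorReport("satellite_ir_model", 0.95),
                       DetectorReport("radar_track_model", 0.40)))  # False
```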

When AI systems are used to process information, how should that information be presented to human operators? For example, if the military used an algorithm trained to detect signs of a missile being fueled, that information could be interpreted differently by humans if the AI system reported "fueling" versus "preparing to launch." "Fueling" is a more precise and accurate description of what the AI system is actually detecting and might lead a human analyst to seek more information, whereas "preparing to launch" is a conclusion that might or might not be appropriate depending on the broader context.

When algorithmic recommendation systems are used, how much of the underlying data should humans have to directly review? Is it sufficient for human operators to see only the algorithm's conclusion, or should they also have access to the raw data that supports the algorithm's recommendation?

Finally, what degree of engagement is expected from a human in the loop? Is the human merely there as a failsafe in case the AI malfunctions? Or must the human be engaged in the process of analyzing information, generating courses of actions, and making recommendations? Are some of these steps more important than others for human involvement?

These are critical questions that the United States will need to address as it seeks to harness the benefits of AI in nuclear operations while meeting the human in the loop policy. The sooner the Department of Defense can clarify answers to these questions, the more that it can accelerate AI adoption in ways that are trustworthy and meet the necessary reliability standards for nuclear operations. Nor does clarifying these questions overly constrain how the United States approaches AI. Guidance can always be changed over time as the technology evolves. But a lack of clear guidance risks forgoing valuable opportunities to use AI or, even worse, adopting AI in ways that might undermine nuclear surety and deterrence.

Dead Hand Systems

In clarifying its human-in-the-loop policy, the United States should make a firm commitment to reject "dead hand" nuclear launch systems, that is, systems with a standing order to launch that incorporate algorithmic components. Dead hand systems akin to Russia's Perimeter would appear to be prohibited by current Department of Defense policy. However, the United States should explicitly state that it will not build such systems, given their risk.

Despite their danger, some U.S. analysts have suggested that the United States should adopt a dead hand system to respond to emerging technologies such as AI, hypersonics, and advanced cruise missiles. There are safer methods for responding to these threats, however. Rather than gambling humanity's future on an algorithm, the United States should strengthen its second-strike deterrent in response to new threats.

Some members of the U.S. Congress have even expressed a desire to write this requirement into law. In April 2023, a bipartisan group of representatives introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act, which would prohibit funding for any system that launches nuclear weapons without meaningful human control. There is precedent for a legal requirement to maintain a human in the loop for strategic systems. In the 1980s, during development of the Strategic Defense Initiative (also known as "Star Wars"), Congress passed a law requiring an "affirmative human decision at an appropriate level of authority" for strategic missile defense systems. This legislation could serve as a blueprint for a similar legislative requirement for nuclear use. One benefit of a legal requirement is that it ensures such an important policy could not be overturned, without congressional authorization, by a future administration or Pentagon leadership that is more risk-accepting.

Nuclear Weapons and Uncrewed Vehicles

The United States should similarly clarify its policy for nuclear weapons on uncrewed vehicles. The United States is producing a new nuclear-capable strategic bomber, the B-21, that will be able to perform uncrewed missions in the future, and is developing large undersea uncrewed vehicles that could carry weapons payloads. U.S. military officers have expressed strong reluctance to place nuclear weapons aboard uncrewed platforms. In 2016, then-Commander of Air Force Global Strike Command Gen. Robin Rand noted that the B-21 would always be crewed when carrying nuclear weapons: "If you had to pin me down, I like the man in the loop; the pilot, the woman in the loop, very much, particularly as we do the dual-capable mission with nuclear weapons." General Rand's sentiment may be shared among senior military officers, but it is not official policy. The United States should adopt an official policy that nuclear weapons will not be placed aboard recoverable uncrewed platforms. Establishing this policy could help provide guidance to weapons developers and the services about the appropriate role for uncrewed platforms in nuclear operations as the Department of Defense fields larger uncrewed and optionally crewed platforms.

Nuclear weapons have long been placed on uncrewed delivery vehicles, such as ballistic and cruise missiles, but placing nuclear weapons on a recoverable uncrewed platform such as a bomber is fundamentally different. A human decision to launch a nuclear missile is a decision to carry out a nuclear strike. Humans could send a recoverable, two-way uncrewed platform, such as a drone bomber or undersea autonomous vehicle, out on patrol. In that case, the human decision to launch the nuclear-armed drone would not yet be a decision to carry out a nuclear strike. Instead, the drone could be sent on patrol as an escalation signal or to preposition in case of a later decision to launch a nuclear attack. Doing so would put enormous faith in the drone's communications links and on-board automation, both of which may be unreliable.

The U.S. military has lost control of drones before. In 2017, a small tactical Army drone flew over 600 miles from southern Arizona to Denver after Army operators lost communications. In 2011, a highly sensitive U.S. RQ-170 stealth drone ended up in Iranian hands after U.S. operators lost contact with it over Afghanistan. Losing control of a nuclear-armed drone could cause nuclear weapons to fall into the wrong hands or, in the worst case, escalate a nuclear crisis. The only way to maintain nuclear surety is direct, physical human control over nuclear weapons up until the point of a decision to carry out a nuclear strike.

While the U.S. military would likely be extremely reluctant to place nuclear weapons onboard a drone aircraft or undersea vehicle, Russia is already developing such a system. The Poseidon, or Status-6, undersea autonomous uncrewed vehicle is reportedly intended as a second- or third-strike weapon to deliver a nuclear attack against the United States. How Russia intends to use the weapon is unclear and could evolve over time, but an uncrewed platform like the Poseidon could, in principle, be sent on patrol, risking dangerous accidents. Other nuclear powers could see value in nuclear-armed drone aircraft or undersea vehicles as these technologies mature.

The United States should build on its current momentum in shaping global norms on military AI use and work with other nations to clarify the dangers of nuclear-armed drones. As a first step, the U.S. Defense Department should clearly state as a matter of official policy that it will not place nuclear weapons on two-way, recoverable uncrewed platforms, such as bombers or undersea vehicles. The United States has at times forsworn dangerous weapons in other areas, such as debris-causing antisatellite weapons, and publicly articulated their dangers. Similarly explaining the dangers of nuclear-armed drones could help shape the behavior of other nuclear powers, potentially forestalling their adoption.

Conclusion

It is imperative that nuclear powers approach the integration of AI and autonomy in their nuclear operations thoughtfully and deliberately. Some applications, such as using AI to help reduce the risk of a surprise attack, could improve stability. Other applications, such as dead hand systems, could be dangerous and destabilizing. Russia's Perimeter and Poseidon systems demonstrate that other nations might be willing to take risks with automation and autonomy that U.S. leaders would see as irresponsible. It is essential for the United States to build on its current momentum to clarify its own policies and work with other nuclear-armed states to seek international agreement on responsible guardrails for AI in nuclear operations. Rumors of a U.S.-Chinese agreement on AI in nuclear command and control at the meeting between President Joseph Biden and General Secretary Xi Jinping offer a tantalizing hint of the possibilities for nuclear powers to come together to guard against the risks of AI integrated into humanity's most dangerous weapons. The United States should seize this moment and not let the opportunity pass to build a safer, more stable future.

Michael Depp is a research associate with the AI safety and stability project at the Center for a New American Security (CNAS).

Paul Scharre is the executive vice president and director of studies at CNAS and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.

Image: U.S. Air Force photo by Senior Airman Jason Wiese

See the original post:
Artificial Intelligence and Nuclear Stability - War On The Rocks


Test Yourself: Which Faces Were Made by A.I.? – The New York Times

Tools powered by artificial intelligence can create lifelike images of people who do not exist.

See if you can identify which of these images are real people and which are A.I.-generated.


Ever since the public release of tools like Dall-E and Midjourney in the past couple of years, the A.I.-generated images they've produced have stoked confusion about breaking news, fashion trends and Taylor Swift.

Distinguishing between a real versus an A.I.-generated face has proved especially confounding.

Research published across multiple studies found that faces of white people created by A.I. systems were perceived as more realistic than genuine photographs of white people, a phenomenon called "hyper-realism."

Researchers believe A.I. tools excel at producing hyper-realistic faces because they were trained on tens of thousands of images of real people. Those training datasets contained images of mostly white people, resulting in hyper-realistic white faces. (The over-reliance on images of white people to train A.I. is a known problem in the tech industry.)

The confusion among participants was less apparent among nonwhite faces, researchers found.

Participants were also asked to indicate how sure they were in their selections, and researchers found that higher confidence correlated with a higher chance of being wrong.

"We were very surprised to see the level of over-confidence that was coming through," said Dr. Amy Dawel, an associate professor at Australian National University, who was an author on two of the studies.

"It points to the thinking styles that make us more vulnerable on the internet and more vulnerable to misinformation," she added.

The idea that A.I.-generated faces could be deemed more authentic than actual people startled experts like Dr. Dawel, who fear that digital fakes could help the spread of false and misleading messages online.

A.I. systems had been capable of producing photorealistic faces for years, though there were typically telltale signs that the images were not real. A.I. systems struggled to create ears that looked like mirror images of each other, for example, or eyes that looked in the same direction.

But as the systems have advanced, the tools have become better at creating faces.

The hyper-realistic faces used in the studies tended to be less distinctive, researchers said, and hewed so closely to average proportions that they failed to arouse suspicion among the participants. And when participants looked at real pictures of people, they seemed to fixate on features that drifted from average proportions, such as a misshapen ear or larger-than-average nose, considering them a sign of A.I. involvement.

The images in the study came from StyleGAN2, an image model trained on a public repository of photographs containing 69 percent white faces.

Study participants said they relied on a few features to make their decisions, including how proportional the faces were, the appearance of skin, wrinkles, and facial features like eyes.

Continue reading here:
Test Yourself: Which Faces Were Made by A.I.? - The New York Times


History Suggests the Nasdaq Will Surge in 2024: My Top 7 Artificial Intelligence (AI) Growth Stocks to Buy Before It Does – The Motley Fool

The macroeconomic challenges of the past couple of years are beginning to fade, and investors are looking to the future. After the Nasdaq Composite plunged in 2022, suffering its worst performance since 2008, the index enjoyed a robust recovery in 2023 and gained 43%.

There could be more to come. Since the Nasdaq Composite began trading in 1972, in every year following a market recovery, the tech-heavy index rose again -- and those second-year gains averaged 19%. The economy is the wildcard here, though, and it could yet stumble in 2024. But historical patterns suggest that this could be a good year for investors.

Recent developments in the field of artificial intelligence (AI) helped fuel the market's rise last year and will likely drive further gains in 2024. While estimates vary wildly, generative AI is expected to add between $2.6 trillion and $4.4 trillion to the global economy annually over the next few years, according to a study by McKinsey Global Institute. This will result in windfalls for many companies in the field.

Here are my top seven AI stocks to buy for 2024 before the Nasdaq reaches new heights.

Image source: Getty Images.

Nvidia (NVDA 0.50%) is the poster child for AI innovation. Its graphics processing units (GPUs) are already the industry standard chips in a growing number of AI use cases -- including data centers, cloud computing, and machine learning -- and it quickly adapted its processors for the needs of generative AI. Though it has been ramping up production, the AI chip shortage is expected to last until 2025 as demand keeps growing. The specter of competition looms, but thus far, Nvidia has stayed ahead of rivals by spending heavily on research and development.

The company's triple-digit percentage year-over-year growth is expected to continue into 2024. Despite its prospects, Nvidia remains remarkably cheap, with a price/earnings-to-growth ratio (PEG ratio) of less than 1 -- a common threshold for an undervalued stock.
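For readers who want the arithmetic behind that threshold, here is a minimal sketch of how a PEG ratio is computed. The numbers in the example are hypothetical placeholders chosen for illustration, not Nvidia's actual figures.

```python
def peg_ratio(price_to_earnings: float, expected_growth_pct: float) -> float:
    """PEG = (P/E) / expected annual earnings growth rate (in percent)."""
    return price_to_earnings / expected_growth_pct

# Hypothetical illustration only: a stock trading at 30x earnings with
# 40% expected earnings growth has a PEG of 0.75, below the 1.0 level
# commonly read as a sign of undervaluation.
print(peg_ratio(30.0, 40.0))  # 0.75
```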

Microsoft (MSFT -0.41%) helped jump-start the AI boom when it invested $13 billion in ChatGPT creator OpenAI, shining a spotlight on generative AI. The company's tech peers jumped on the bandwagon, and the AI gold rush began. Microsoft seized the advantage, integrating OpenAI's technology into its Bing search and a broad cross-section of its cloud-based offerings.

Its productivity-enhancing AI assistant, Copilot, could generate as much as $100 billion in incremental revenue by 2027, according to some analysts, though estimates vary. This and other AI tools already caused Azure Cloud's growth to outpace rivals in Q3, and Microsoft attributed 3 percentage points of that growth to AI.

The stock is selling for 35 times forward earnings, a premium to the price-to-earnings ratio of 26 for the S&P 500. Even so, that looks attractive given Microsoft's growth potential.

Alphabet (GOOGL -0.20%) (GOOG -0.10%) has long used AI to improve its search results and the relevance of its digital advertising. The company was quick to recognize the potential of generative AI, imbuing many of its Google and Android products with increased functionality and announcing plans to add new AI tools to its search product. Furthermore, as the world's third-largest cloud infrastructure provider, Google Cloud is suited to offer AI systems to its customers.

A collaboration between Google and Alphabet's AI research lab, DeepMind, gave birth to Gemini, which the company bills as its "largest and most capable AI model." Google Cloud's Vertex AI offers 130 foundational models that help users build and deploy generative AI apps quickly.

Add to that the ongoing rebound in its digital advertising business, and Alphabet's valuation of 27 times earnings seems like a steal.

There's a popular narrative that Amazon (AMZN -0.45%) was late to recognize the opportunities in AI, but the company's history tells a different story. Amazon continues to deploy AI to surface relevant products to shoppers, recommend viewing choices on Prime Video, schedule e-commerce deliveries, and predict inventory levels, among other uses. Most recently, Amazon began testing an AI tool designed to answer shoppers' questions about products.

Amazon Web Services (AWS) offers many of the most popular generative AI models to its cloud customers through its Bedrock service, and it is also deploying its purpose-built Inferentia and Trainium chips to accelerate AI workloads on its infrastructure.

Now that inflation has slowed markedly, more consumers and businesses are patronizing Amazon, and AI will help boost its fortunes.

Image source: Getty Images.

Meta Platforms (META -0.38%) also has a long and distinguished history of using AI to its advantage. From identifying and tagging people in photos to surfacing relevant content on its social media platforms, Meta has never been shy about deploying AI systems.

Unlike some of its big tech rivals, Meta doesn't have a cloud infrastructure service to peddle its AI wares, but it quickly developed a workaround. After developing its open-source Llama AI model, Meta made it available on all the major cloud services -- for a price. Furthermore, Meta offers a suite of free AI-powered tools to help advertisers succeed.

Improving economic conditions will no doubt boost its digital advertising business. And with the stock trading at just 22 times forward earnings, Meta is inexpensive relative to its opportunity.

Palantir Technologies (PLTR 5.07%) has two decades of experience building AI-powered data analytics, and was ready to meet the challenge when AI went mainstream. In just months, the company added generative AI models to its portfolio, layering these atop its data analytics tools. The launch of the Palantir Artificial Intelligence Platform (AIP) has generated a lot of excitement. "Demand for AIP is unlike anything we have seen in the past 20 years," said management.

When fears of a downturn were higher, businesses scaled back on most nonessential spending, including data analytics and AI services, but now, demand for those services is rebounding, particularly in relation to generative AI.

Looking ahead one year, Palantir sports a PEG ratio of less than 1, which helps illustrate how cheap the stock really is.

Tesla (TSLA -1.61%) made a splash by bringing electric vehicles (EVs) into the mainstream. In 2023, its Model Y topped the list of the world's best-selling cars by a comfortable margin, the first EV to do so. However, the magnitude of its future prosperity will likely be linked to AI. The company's "Full Self-Driving" system has yet to live up to its name, but success on that front would be a boon to shareholders.

In Ark Investment Management's Big Ideas 2023 report, the firm estimates that robotaxis could generate $4 trillion in revenue in 2027. With an estimated 2.7 million vehicles on the road collecting data, Tesla could hold an insurmountable technological edge, if it cracks the code on autonomous driving. Some analysts estimate the software is already worth tens of billions of dollars.

Finally, 6 times forward sales is a pretty reasonable valuation for an industry leader with a treasure trove of data.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena has positions in Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, and Tesla. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, and Tesla. The Motley Fool has a disclosure policy.

Read more here:
History Suggests the Nasdaq Will Surge in 2024: My Top 7 Artificial Intelligence (AI) Growth Stocks to Buy Before It Does - The Motley Fool
