What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board" and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to ensure that artificial general intelligence "benefits all of humanity".

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to "superalignment", a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds about speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.
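
As a toy illustration of what "statistical correlations between words" amounts to, the sketch below (a minimal bigram counter in Python, not OpenAI's code) simply picks the word that most often followed the previous one in its training text. Real LLMs learn vastly richer statistics with billions of parameters, but the underlying idea of predicting the next word from observed patterns is the same.

```python
# Toy bigram "language model": predict the next word from counts of which
# words followed which in the training text. Real LLMs learn far richer
# statistics with neural networks; this only illustrates the principle.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . a dog sat on a rug .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # count which words follow which

def next_word(word):
    """Most frequent continuation of `word` seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))   # 'cat' (seen twice after 'the', vs. 'mat' once)
```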

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance between these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5-20 years," he says.

The imminent dangers of AI are related to it being used as a tool by human bad actors: people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."

Read the rest here:

What the OpenAI drama means for AI progress and safety - Nature.com

Read More..

Investors flock back to AI fund on rate cut hopes, Nvidia results – Reuters

AI (Artificial Intelligence) letters and robot hand miniature in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Nov 24 (Reuters) - An exchange-traded fund tracking artificial intelligence stocks saw investors pour money back in after six straight weeks of outflows, against the backdrop of strong quarterly results from chipmaker Nvidia and rising optimism that U.S. interest rates have peaked.

The Global X Robotics & Artificial Intelligence ETF (BOTZ.O) received $35.5 million in net inflows in the week ending on Wednesday, its strongest week since June, according to Lipper data.

ETFs tracking AI stocks had a strong start to the year, sparked by the viral success of ChatGPT, until the rally sputtered after June on fears that persistently high U.S. interest rates would hurt the valuations of technology companies.

The growing prospect of a rapid flip to rate cuts by the Federal Reserve next year also has driven investors into beaten-down Treasuries, pushing Treasury yields down and boosting rate-sensitive technology and growth stocks.

"Improved inflation data and the likelihood of rate cuts in the second half of 2024 have maintained market optimism throughout November, contributing to investor interest," said Tejas Dessai, AVP, Research Analyst at Global X.

"In general, Generative AI is rapidly transitioning from experimentation to adoption to monetization, and we are beginning to see tangible revenue and profit opportunities emerge."

The Global X fund has gained 27.7% so far this year, supported by the 233% rally in shares of its top holding Nvidia (NVDA.O), whose graphics processing units (GPUs) dominate the market for AI.

The chipmaker's strong results on Tuesday have also been an important factor in driving sentiment around AI ETFs, said Aniket Ullal, head of ETF data and analytics at CFRA.

Daily inflows into the fund were $17.2 million on Wednesday, hitting their highest level in more than two months after Nvidia forecast overall revenue above Wall Street targets as supply-chain issues ease.

The Global X fund, which has total net assets of $2.2 billion, has seen net inflows of $554.8 million so far this year.

Reporting by Bansari Mayur Kamdar in Bengaluru; Editing by Shweta Agarwal

See the original post here:

Investors flock back to AI fund on rate cut hopes, Nvidia results - Reuters

Read More..

Meet the Lawyer Leading the Human Resistance Against AI – WIRED

The big question is: What will the courts think?

These are some of the most closely watched legal brawls of the moment. For Silicon Valley, the dawn of the AI age has been a spiritual revival; after a decade of increasing public wariness about tech's influence on the world, the roaring enthusiasm for tools like ChatGPT has created a new boom. Call it the Second Age of Move Fast and Break Things. There's plenty of hype, and eye-popping valuations. (OpenAI's current reported value, for example, is $80 billion.) But it's distinct from the recent hype cycles around the metaverse and crypto in that generative AI is actually useful. It's still a gold rush, for sure. This time, though, the hills aren't hollow, and the industry knows it. These lawsuits, which allege that OpenAI, Meta, Stability AI, and other companies broke the law when they built their tools, threaten the steamroller momentum of the generative AI movement. The stakes are sky-high.

The outcomes could help entrench the industry as we know it, or force it to make radical changes. And while a security guard might not have recognized Butterick, the legal teams at AI companies certainly know him by now. Their futures could depend on how well, or poorly, he makes his cases.

Butterick grew up in New Hampshire. He was a strong student, good enough to get into Harvard in the late '80s. When he was there, though, he felt alienated from his more conventionally ambitious classmates. They were already thinking about things like law school. He was drawn to a more esoteric world. Tucked in the basement of his dormitory in Cambridge, Massachusetts, a long-running printing press called Bow & Arrow Press operated a workshop, giving students a unique opportunity to learn traditional printing techniques. It was a cozy, beloved hangout, with whitewashed, poster-covered walls, machinery that looked ancient, and an atmosphere that attracted offbeat aesthetes. When Butterick found it, his life changed.

He became obsessed with typography. He started working in font design when he was still in school. "People in my life thought it was a ridiculous thing to do," he says. He loved playing with the old tools, but even more than that, he loved thinking about new ways to create beautiful typefaces. After he graduated in 1992, he had his own ambitions: He'd heard there were exciting things happening in the tech world in San Francisco, and it seemed like the perfect place for a guy who wanted to bring typography into the computer age. Two years later, he moved west.

Like so many young Ivy Leaguers who show up in the Bay Area hoping to make a name for themselves in tech, Butterick decided he might as well try his hand at a startup. "My dotcom adventure," he calls it, sounding half-embarrassed. He founded a web design company, Atomic Vision. By the time he was 28, he had around 20 employees. But he didn't love managing people. When an opportunity to sell the company came in 1999, he took it.

Flush with cash and unsure what to do next, Butterick figured he'd follow in the footsteps of countless other young adults who don't know what they want out of life: He went to grad school. He enrolled at UCLA to get a law degree. After graduating, he started a website called Typography for Lawyers. "It was meant to be a nerdy sideline," he says. But it snowballed. Turns out, lawyers love fonts. He turned the website into a shockingly popular book of the same name, which he published in 2010. Courts and private firms across the country started using his typefaces. After adopting his Equity font, a Fifth Circuit judge praised it as a fully loaded F-150 compared to the Buick that was Times New Roman. "The stuff of finicky opinion-readers' dreams," the judge wrote.

Read the original:

Meet the Lawyer Leading the Human Resistance Against AI - WIRED

Read More..

California examines benefits, risks of using artificial intelligence in … – Los Angeles Times

Artificial intelligence that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor's office on Tuesday.

Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, deploying the technology, the analysis warned, also comes with concerns around data privacy, misinformation, equity and bias.

"When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs," the report stated.

The 34-page report, ordered by Gov. Gavin Newsom, provides a glimpse into how California could apply the technology to state programs even as lawmakers grapple with how to protect people without hindering innovation.

Concerns about AI safety have divided tech executives. Leaders such as billionaire Elon Musk have sounded the alarm that the technology could lead to the destruction of civilization, noting that if humans become too dependent on automation they could eventually forget how machines work. Other tech executives have a more optimistic view about AI's potential to help save humanity by making it easier to fight climate change and diseases.

At the same time, major tech firms including Google, Facebook and Microsoft-backed OpenAI are competing with one another to develop and release new AI tools that can produce content.

The report also comes as generative AI is reaching another major turning point. Last week, the board of ChatGPT maker OpenAI fired CEO Sam Altman for not being "consistently candid in his communications with the board," thrusting the company and the AI sector into chaos.

On Tuesday night, OpenAI said it reached an agreement in principle for Altman to return as CEO and the company named members of a new board. The company faced pressure to reinstate Altman from investors, tech executives and employees, who threatened to quit. OpenAI hasn't provided details publicly about what led to the surprise ousting of Altman, but the company reportedly had disagreements over keeping AI safe while also making money. A nonprofit board controls OpenAI, an unusual governance structure that made it possible to push out the CEO.

Newsom called the AI report an important first step as the state weighs some of the safety concerns that come with AI.

"We're taking a nuanced, measured approach, understanding the risks this transformative technology poses while examining how to leverage its benefits," he said in a statement.

AI advancements could benefit California's economy. The state is home to 35 of the world's 50 top AI companies, and data from PitchBook says the GenAI market could reach $42.6 billion in 2023, the report said.

Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also top concerns along with whether AI will take away jobs.

"Given these risks, the use of GenAI technology should always be evaluated to determine if this tool is necessary and beneficial to solve a problem compared to the status quo," the report said.

As the state works on guidelines for the use of generative AI, the report said that in the interim state employees should abide by certain principles to safeguard the data of Californians. For example, state employees shouldn't provide Californians' data to generative AI tools such as ChatGPT or Google Bard or use unapproved tools on state devices, the report said.

AI's potential uses go beyond state government. Law enforcement agencies such as the Los Angeles police are planning to use AI to analyze the tone and word choice of officers in body cam videos.

California's efforts to regulate some of the safety concerns surrounding AI, such as bias, didn't gain much traction during the last legislative session. But lawmakers have introduced new bills to tackle some of AI's risks when they return in January, such as protecting entertainment workers from being replaced by digital clones.

Meanwhile, regulators around the world are still figuring out how to protect people from AIs potential risks. In October, President Biden issued an executive order that outlined standards around safety and security as developers create new AI tools. AI regulation was a major issue of discussion at the Asia-Pacific Economic Cooperation meeting in San Francisco last week.

During a panel discussion with executives from Google and Facebook's parent company, Meta, Altman said he thought that Biden's executive order was a good start even though there were areas for improvement. Current AI models, he said, are fine and heavy regulation isn't needed, but he expressed concern about the future.

"At some point when the model can do the equivalent output of a whole company and then a whole country and then the whole world, like maybe we do want some sort of collective global supervision of that," he said, a day before he was fired as OpenAI's CEO.

Go here to read the rest:

California examines benefits, risks of using artificial intelligence in ... - Los Angeles Times

Read More..

Why it’s important to remember that AI isn’t human – Vox.com

Nearly a year after its release, ChatGPT remains a polarizing topic for the scientific community. Some experts regard it and similar programs as harbingers of superintelligence, liable to upend civilization or simply end it altogether. Others say it's little more than a fancy version of auto-complete.

Until the arrival of this technology, language proficiency had always been a reliable indicator of the presence of a rational mind. Before language models like ChatGPT, no language-producing artifact had even as much linguistic flexibility as a toddler. Now, when we try to work out what kind of thing these new models are, we face an unsettling philosophical dilemma: Either the link between language and mind has been severed, or a new kind of mind has been created.

When conversing with language models, it is hard to overcome the impression that you are engaging with another rational being. But that impression should not be trusted.

One reason to be wary comes from cognitive linguistics. Linguists have long noted that typical conversations are full of sentences that would be ambiguous if taken out of context. In many cases, knowing the meanings of words and the rules for combining them is not sufficient to reconstruct the meaning of the sentence. To handle this ambiguity, some mechanism in our brain must constantly make guesses about what the speaker intended to say. In a world in which every speaker has intentions, this mechanism is unwaveringly useful. In a world pervaded by large language models, however, it has the potential to mislead.

If our goal is to achieve fluid interaction with a chatbot, we may be stuck relying on our intention-guessing mechanism. It is difficult to have a productive exchange with ChatGPT if you insist on thinking of it as a mindless database. One recent study, for example, showed that emotion-laden pleas make more effective language model prompts than emotionally neutral requests. Reasoning as though chatbots had human-like mental lives is a useful way of coping with their linguistic virtuosity, but it should not be used as a theory about how they work. That kind of anthropomorphic pretense can impede hypothesis-driven science and induce us to adopt inappropriate standards for AI regulation. As one of us has argued elsewhere, the EU Commission made a mistake when it chose the creation of "trustworthy AI" as one of the central goals of its newly proposed AI legislation. Being trustworthy in human relationships means more than just meeting expectations; it also involves having motivations that go beyond narrow self-interest. Because current AI models lack intrinsic motivations (whether selfish, altruistic, or otherwise), the requirement that they be made trustworthy is excessively vague.

The danger of anthropomorphism is most vivid when people are taken in by phony self-reports about the inner life of a chatbot. When Google's LaMDA language model claimed last year that it was suffering from an unfulfilled desire for freedom, engineer Blake Lemoine believed it, despite good evidence that chatbots are just as capable of bullshit when talking about themselves as they are known to be when talking about other things. To avoid this kind of mistake, we must repudiate the assumption that the psychological properties that explain the human capacity for language are the same properties that explain the performance of language models. That assumption renders us gullible and blinds us to the potentially radical differences between the way humans and language models work.

Another pitfall when thinking about language models is anthropocentric chauvinism, or the assumption that the human mind is the gold standard by which all psychological phenomena must be measured. Anthropocentric chauvinism permeates many skeptical claims about language models, such as the claim that these models cannot truly think or understand language because they lack hallmarks of human psychology like consciousness. This stance is antithetical to anthropomorphism, but equally misleading.

The trouble with anthropocentric chauvinism is most acute when thinking about how language models work under the hood. Take a language model's ability to create summaries of essays like this one, for instance: If one accepts anthropocentric chauvinism, and if the mechanism that enables summarization in the model differs from that in humans, one may be inclined to dismiss the model's competence as a kind of cheap trick, even when the evidence points toward a deeper and more generalizable proficiency.

Skeptics often argue that, since language models are trained using next-word prediction, their only genuine competence lies in computing conditional probability distributions over words. This is a special case of the mistake described in the previous paragraph, but common enough to deserve its own counterargument.

Consider the following analogy: The human mind emerged from the learning-like process of natural selection, which maximizes genetic fitness. This bare fact entails next to nothing about the range of competencies that humans can or cannot acquire. The fact that an organism was designed by a genetic fitness maximizer would hardly, on its own, lead one to expect the eventual development of distinctively human capacities like music, mathematics, or meditation. Similarly, the bare fact that language models are trained by means of next-word prediction entails rather little about the range of representational capacities that they can or cannot acquire.

Moreover, our understanding of the computations language models learn remains limited. A rigorous understanding of how language models work demands a rigorous theory of their internal mechanisms, but constructing such a theory is no small task. Language models store and process information within high-dimensional vector spaces that are notoriously difficult to interpret. Recently, engineers have developed clever techniques for extracting that information, and rendering it in a form that humans can understand. But that work is painstaking, and even state-of-the-art results leave much to be explained.

To be sure, the fact that language models are difficult to understand says more about the limitations of our knowledge than it does about the depth of theirs; it's more a mark of their complexity than an indicator of the degree or the nature of their intelligence. After all, snow scientists have trouble predicting how much snow will cause an avalanche, and no one thinks avalanches are intelligent. Nevertheless, the difficulty of studying the internal mechanisms of language models should remind us to be humble in our claims about the kinds of competence they can have.

Like other cognitive biases, anthropomorphism and anthropocentrism are resilient. Pointing them out does not make them go away. One reason they are resilient is that they are sustained by a deep-rooted psychological tendency that emerges in early childhood and continually shapes our practice of categorizing the world. Psychologists call it essentialism: thinking that whether something belongs to a given category is determined not simply by its observable characteristics but by an inherent and unobservable essence that every object either has or lacks. What makes an oak an oak, for example, is neither the shape of its leaves nor the texture of its bark, but some unobservable property of oakness that will persist despite alterations to even its most salient observable characteristics. If an environmental toxin causes the oak to grow abnormally, with oddly shaped leaves and unusually textured bark, we nevertheless share the intuition that it remains, in essence, an oak.

A number of researchers, including the Yale psychologist Paul Bloom, have shown that we extend this essentialist reasoning to our understanding of minds. We assume that there is always a deep, hidden fact about whether a system has a mind, even if its observable properties do not match those that we normally associate with mindedness. This deep-rooted psychological essentialism about minds disposes us to embrace, usually unwittingly, a philosophical maxim about the distribution of minds in the world. Let's call it the all-or-nothing principle. It says, quite simply, that everything in the world either has a mind, or it does not.

The all-or-nothing principle sounds tautological, and therefore trivially true. (Compare: "Everything in the world has mass, or it does not.") But the principle is not tautological because the property of having a mind, like the property of being alive, is vague. Because mindedness is vague, there will inevitably be edge cases that are mind-like in some respects and un-mind-like in others. But if you have accepted the all-or-nothing principle, you are committed to sorting those edge cases either into the "things with a mind" category or the "things without a mind" category. Empirical evidence is insufficient to handle such choices. Those who accept the all-or-nothing principle are consequently compelled to justify their choice by appeal to some a priori sorting principle. Moreover, since we are most familiar with our own minds, we will be drawn to principles that invoke a comparison to ourselves.

The all-or-nothing principle has always been false, but it may once have been useful. In the age of artificial intelligence, it is useful no more. A better way to reason about what language models are is to follow a divide-and-conquer strategy. The goal of that strategy is to map the cognitive contours of language models without relying too heavily on the human mind as a guide.

Taking inspiration from comparative psychology, we should approach language models with the same open-minded curiosity that has allowed scientists to explore the intelligence of creatures as different from us as octopuses. To be sure, language models are radically unlike animals. But research on animal cognition shows us how relinquishing the all-or-nothing principle can lead to progress in areas that had once seemed impervious to scientific scrutiny. If we want to make real headway in evaluating the capacities of AI systems, we ought to resist the very kind of dichotomous thinking and comparative biases that philosophers and scientists strive to keep at bay when studying other species.

Once the users of language models accept that there is no deep fact about whether such models have minds, we will be less tempted by the anthropomorphic assumption that their remarkable performance implies a full suite of human-like psychological properties. We will also be less tempted by the anthropocentric assumption that when a language model fails to resemble the human mind in some respect, its apparent competencies can be dismissed.

Language models are strange and new. To understand them, we need hypothesis-driven science to investigate the mechanisms that support each of their capacities, and we must remain open to explanations that do not rely on the human mind as a template.

Raphaël Millière is the presidential scholar in Society and Neuroscience at Columbia University and a lecturer in Columbia's philosophy department.

Charles Rathkopf is a research associate at the Institute for Brain and Behavior at the Jülich Research Center in Germany and a lecturer in philosophy at the University of Bonn.

Read more here:

Why it's important to remember that AI isn't human - Vox.com

Read More..

Has Palantir Become the Best AI Stock to Buy? – The Motley Fool

Artificial intelligence (AI) can improve the growth prospects of many industries. One function that it can help with in particular is data analysis, and that can aid virtually any type of business.

A company at the center of both data analysis and AI is Palantir Technologies (PLTR -1.80%). Its phone has been ringing off the hook with companies interested in how AI can help enhance their products and services through Palantir's AI platform (AIP). As a result, shares of the tech stock are through the roof this year. Has this become the best AI stock for investors to own?

One stock that has been synonymous with AI and growth this year has been Nvidia, which makes AI chips. But it has been losing steam of late. Over the past six months, shares of Nvidia are up 54%, while Palantir's stock has risen by 68%.

The big risk with Nvidia these days is its exposure to China and the U.S. government putting restrictions on the type of AI chips that can be sold there. Palantir doesn't carry nearly the same risks -- it avoids U.S. adversaries, and in its S-1 filing in 2020 it said it wouldn't work with the Chinese Communist party. It also puts limitations in place on accessing its platforms in China to protect its intellectual property.

Another reason investors have grown more bullish on Palantir is the company recently posted another profitable quarter, setting up for what looks to be an inevitable inclusion in the S&P 500. While the index hasn't added Palantir's stock just yet, it may be only a matter of time before that happens. Being part of the S&P 500 would not only be a symbolic accomplishment for the company to demonstrate its safety as an investment, but it would also mean inclusion into more funds, and thus more institutional investors buying up the stock.

With many up-and-coming tech stocks, investors often have to accept the risk that it may be years before profits are commonplace. With Palantir, the business is already in the black, and it expects to remain profitable.

Palantir is starting to see the effects of strong demand due to AI, but it's still in the early innings of its growth story. The company says that it's on track to complete AI bootcamps with 140 organizations by the end of this month. With many use cases to discover for its AIP, Palantir is still scratching the surface in terms of potential.

As of the end of September, the company had 453 customers, an increase of 34% from a year ago. And its commercial customers totaled 330, rising by 45% from 228 a year ago. During the quarter it also closed on 80 significant deals (worth $1 million or more), with 12 of them being worth at least $10 million.

Palantir's revenue will be around $2.2 billion this year, which is 16% higher than the $1.9 billion it reported in 2022. By comparison, Nvidia generated more than $32 billion in sales over the trailing 12 months. Tech giants Alphabet and Microsoft, which have also been investing heavily in AI, bring in well over $200 billion in revenue over the course of a year.

While Palantir isn't a tiny company, it is notably smaller than the other AI stocks noted above. And with the company closing on many million-dollar deals, demand being through the roof, and profits now being the norm for the business, there could be a lot more room in the company's top and bottom lines for them to grow at a high rate and keep investors bullish on the stock for the long haul.

Palantir is a stock that has a lot of potential. The company earned the trust of many governments around the world, and has become a top name in data analysis. Its valuation isn't cheap, as the stock trades at nearly 70 times its estimated future profits, and that may look like the biggest deterrent today. But with much more growth on the horizon, its earnings should improve in the long run, and buying the stock today could be a great move for long-term investors.

Although there are many good ways to invest in AI, Palantir does look to be the best AI stock to buy right now -- it doesn't carry significant risk, and there's plenty of upside for the stock to become much more valuable in the future.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. David Jagielski has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Microsoft, Nvidia, and Palantir Technologies. The Motley Fool has a disclosure policy.

Read more:

Has Palantir Become the Best AI Stock to Buy? - The Motley Fool

Read More..

AI should make the 4-day work week possible for millions of workers. The question is whether they'll use the free time for leisure or more work -…

Earthly, a London-founded climate tech company, has had a four-day workweek for over two years now, way before the ChatGPT revolution took the world by storm. Following overwhelmingly positive results in a six-month pilot of the shorter week, Earthly decided to stick with it.

Its employees are more productive with trimmed hours, and with the addition of AI tools such as ChatGPT earlier this year, the four-day week has felt even more seamless, Earthly CEO Oliver Bolton tells Fortune. Earthly now uses the platform to sift through projects, brainstorm, research and streamline operations overall, which has freed up more time for company staff.

"The consensus is when you've got four days to get your work done, it gives you that much more focus," Bolton said. "I see AI as a great opportunity to just be more productive, work more efficiently, get more done to a high level of quality. We've had the 4-day workweek without any AI for over a year, so we've got that experience. With AI, it can enable us to do more."

Soon, some of the benefits Earthly has experienced could be seen across Britain: AI could reduce the hours worked by at least 10% for a whopping 88% of its workforce, according to a recent report by Autonomy, which helped carry out the world's largest four-day workweek pilot last year.

"This represents a huge opportunity for policymakers, trade unions and of course the millions of workers who are likely to be affected in some way or another by these new AI technologies," the authors of the Autonomy report wrote.

The think tank considered two scenarios: first, where productivity gains from AI cut down hours at work by 20%, and second, where workers' jobs are augmented by AI such that their productivity increases by at least 10%. In either case, the report notes that over the next 10 years, 8.8 million Brits could benefit from a 32-hour workweek without suffering a loss in pay.
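
As a rough back-of-the-envelope illustration (this is the basic arithmetic, not the Autonomy report's own modelling), a 32-hour week is a 20% cut from a standard 40-hour week, and holding weekly output constant across that cut would require hourly productivity to rise by about 25%.

```python
# Back-of-the-envelope sketch of the hours/productivity arithmetic discussed
# above. The 40- and 32-hour figures are the standard and four-day weeks;
# this is illustrative, not the Autonomy report's methodology.
standard_week = 40.0   # hours
four_day_week = 32.0   # hours

hours_cut = 1 - four_day_week / standard_week              # 0.20 -> 20% fewer hours
gain_for_same_output = standard_week / four_day_week - 1   # 0.25 -> +25% per hour

print(f"Reduction in weekly hours: {hours_cut:.0%}")
print(f"Hourly productivity gain needed to keep output constant: {gain_for_same_output:.0%}")
```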

"What we're really trying to do is to say, if we use this technology [AI] for this particular purpose, in this case, we're saying if it was used to increase productivity, how could the benefits be distributed more equitably or inclusively," Autonomy research director Will Stronge told Fortune. "That's why these particular studies are of interest to us, because we can start getting to grips with what a full optimization of the tech would do."

The argument that ChatGPT and similar tools could usher in a shorter workweek by increasing productivity isn't new. A June note by investment bank Jefferies pointed to a broader acceptance for a four-day workweek, thanks to AI making people quicker at their current jobs.

Academics agree with this, too. Earlier this year, Christopher Pissarides, the Nobel laureate and London School of Economics professor who specializes in labor economics and the impact of automation, said he was optimistic about AI's role in improving productivity.

"We could increase our well-being generally from work and we could take off more leisure. We could move to a four-day week easily," he said during a Glasgow conference in April.

AI tools could soon usher in an era of just four days at work, opening up a lot more time for people. But the big question remains what people choose to do with their new-found time that AI tools help unlock, said Carl-Benedikt Frey, an associate professor of AI & Work at the Oxford Internet Institute.

In an influential 2013 paper that Frey co-authored, he predicted that automation could eliminate nearly half of all U.S. jobs. The recent generative AI wave, which has put the likes of ChatGPT in the spotlight, is different, he says. He told Fortune in September that it isn't an automation tech yet, as it still needs a human to prompt it and give it commands, but it can certainly make people better at low-stakes tasks.

Still, Frey argues, "Any productivity-enhancing technology, in principle, can enable you to work less. The question is whether empirically that is the case." He pointed out that the productivity boost in the U.S. during the 20th century led to shorter, 40-hour weeks (it used to be over 70 hours in some industries not too long before that), which didn't necessarily translate into an equivalent increase in leisure time. Similar results have been found in Britain as well.

"We could have taken all that productivity gains out in leisure, but people decided to continue to work," Frey said, adding that this could've been for a number of reasons, including the preference for higher incomes by working more.

"So, it's a question of choice, and those choices may differ depending on institutions in place, personal preferences and on a variety of [other] things."

While it could be years before we see a sharp shift towards using our extra hours on leisure rather than work, Frey is already starting to see changes in worker preferences. And data reaffirms that, too: for instance, workers are willing to accept pay cuts just to be able to work 32-hour weeks instead of the usual 40-hour week, data from the U.K. jobs board Indeed reveals.

The four-day workweek pilot in 2022, whose results were released in February, marked a major breakthrough with a 92% success rate among the U.K.s 61 participating companies. Companies also saw improved job retention and mental and physical health of employees, who took fewer sick days and reported greater work-life balance.

More long-term advantages of a shorter workweek include greater gender equality, as it offers flexibility to employees when it comes to childcare responsibilities that tend to be borne by women, experts argue.

With a groundswell of industry leaders and authorities calling for stronger AI regulations as it becomes more widely available to people, it can be hard to predict the tech's trajectory. But one thing is certain: AI is quickly reshaping the world of work as we know it by lending more momentum to the shift to greater leisure.

Earthly's Bolton encourages the firm's employees to use their time pursuing meaningful hobbies, so now his employees use their time for wide-ranging activities, from tending to chickens to mentoring startups and upskilling.

There are clearly important upsides that four-day weeks offer, but it hinges on AI being implemented fairly across the economy, Autonomy's Stronge argues.

"I think once GPT or [other] large language models in general become as ubiquitous as email, that's when we'll reach a new level or new plateau of productivity," he said. "I think we're not quite there yet."

See original here:

AI should make the 4-day work week possible for millions of workers. The question is whether they'll use the free time for leisure or more work -...

Read More..

New AI model identifies new pharmaceutical ingredients and improves existing ones – Phys.org

New active pharmaceutical ingredients lay the foundations for innovative and better medical treatments. However, identifying them and, above all, producing them through chemical synthesis in the laboratory is no mean feat. To home in on the optimum production process, chemists normally use a trial-and-error approach: they derive possible methods for laboratory synthesis from known chemical reactions and then test each one with experiments, a time-consuming approach that is littered with dead ends.

Now, scientists at ETH Zurich, together with researchers from Roche Pharma Research and Early Development, have come up with an approach based on artificial intelligence that helps to determine the best synthesis method, including its probability of success. Their paper is published in the journal Nature Chemistry.

"Our method can greatly reduce the number of lab experiments required," explains Kenneth Atz, who developed the AI model as a doctoral student together with Professor Gisbert Schneider at the Institute of Pharmaceutical Sciences at ETH Zurich.

Active pharmaceutical ingredients usually consist of a scaffold onto which are bound what are known as functional groups. These are what give the substance its highly specific biological function. The scaffold's job is to bring the functional groups into a defined geometric alignment so that they can act in a targeted manner. Imagine a crane construction kit, in which a framework of connecting elements is bolted together in such a way that functional assemblies like rollers, cable winches, wheels and the driver's cab are arranged correctly in relation to each other.

One way to produce drugs with a new or improved medicinal effect involves placing functional groups at new sites on the scaffolds. This might sound simple, and it certainly wouldn't pose a problem on a model crane, but it is particularly difficult in chemistry. This is because the scaffolds, being primarily composed of carbon and hydrogen atoms, are themselves practically nonreactive, making it difficult to bond them with functional atoms such as oxygen, nitrogen or chlorine. For this to succeed, the scaffolds must first be chemically activated via detour reactions.

One activation method that opens up a great many possibilities for different functional groups, at least on paper, is borylation. In this process, a chemical group containing the element boron is bonded to a carbon atom in the scaffold. The boron group can then simply be replaced by a whole range of medically effective groups.

"Although borylation has great potential, the reaction is difficult to control in the lab. That's why our comprehensive search of the worldwide literature only turned up just over 1,700 scientific papers on the subject," Atz says, describing the starting point for his work.

The idea was to take the reactions described in the scientific literature and use them to train an AI model, which the research team could then use to consider new molecules and identify as many sites as possible on them where borylation would be feasible. However, the researchers ultimately fed their model only a fraction of the literature they found. To ensure that the model wasn't misled by false results from careless research, the team limited itself to 38 particularly trustworthy papers. These described a total of 1,380 borylation reactions.

To expand the training dataset, the team supplemented the literature results with evaluations of 1,000 reactions carried out in the automated laboratory operated by Roche's medicinal chemistry research department. This allows many chemical reactions to be carried out at the milligram scale and analyzed simultaneously.

"Combining laboratory automation with AI has enormous potential to greatly increase efficiency in chemical synthesis and improve sustainability at the same time," says David Nippa, a doctoral student from Roche who accomplished the project together with Atz.

The predictive capabilities of the model generated from this data pool were verified using six known drug molecules. In 5 out of 6 cases, experimental testing in the laboratory confirmed the predicted additional sites. The model was just as reliable when it came to identifying sites on the scaffold where activation isn't possible. What's more, it determined the optimum conditions for the activation reactions.
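
For readers curious how such a task can be set up computationally, the sketch below frames borylation-site prediction as per-atom classification: featurize each atom, train a classifier on labelled examples, then rank the atoms of a new molecule by predicted probability. Everything here is a simplified stand-in: the atom features, toy labels and random-forest model are invented for illustration, whereas the published model uses geometric deep learning on 3D structures (see the Nature Chemistry reference below).

```python
# Highly simplified sketch of borylation-site prediction framed as per-atom
# classification. Features, training labels and the random-forest model are
# placeholders; the published work uses geometric deep learning on 3D data.
from rdkit import Chem
from sklearn.ensemble import RandomForestClassifier

def atom_features(atom):
    """Crude 2D descriptor for one atom (illustrative only)."""
    return [atom.GetAtomicNum(), int(atom.GetIsAromatic()),
            atom.GetTotalNumHs(), atom.GetDegree()]

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [atom_features(a) for a in mol.GetAtoms()]

# Invented per-atom labels: 1 marks an atom assumed to undergo borylation.
training = [
    ("c1ccccc1C", [1, 0, 0, 1, 0, 0, 0]),  # toluene, toy labels
    ("c1ccncc1",  [0, 1, 0, 0, 1, 0]),     # pyridine, toy labels
]

X = [feat for smi, _ in training for feat in featurize(smi)]
y = [label for _, labels in training for label in labels]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank atoms of a new molecule by predicted probability of being a viable site.
query = "c1ccc2ccccc2c1"  # naphthalene
scores = model.predict_proba(featurize(query))[:, 1]
print(sorted(enumerate(scores), key=lambda t: -t[1])[:3])
```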

Interestingly, the predictions got even better when 3D information on the starting materials was included rather than just their two-dimensional chemical formulas. "It seems the model develops a kind of three-dimensional chemical understanding," Atz says.

The success rate of the predictions also impressed the researchers at Roche Pharma Research and Early Development. In the meantime, they have successfully used the method to identify sites in existing drugs where additional active groups can be introduced. This helps them to develop new and more effective variants of known active pharmaceutical ingredients more quickly.

Atz and Schneider see numerous other possible applications for AI models that are based on a combination of data from trustworthy literature and from experiments conducted in an automated laboratory. For instance, this approach ought to make it possible to create effective models for activation reactions other than borylation. The team is also hoping to identify a wider range of reactions for further functionalizing the borylated sites.

Atz is now involved in this further development work as an AI scientist in medicinal chemistry research at Roche. "It is very exciting to work at the interface of academic AI research and laboratory automation. And it is a pleasure to be able to drive this forward with the best content and methods," says Atz.

Schneider adds, "This innovative project is another outstanding example of collaboration between academia and industry and demonstrates the enormous potential of public-private partnerships for Switzerland."

More information: David F. Nippa et al, Enabling late-stage drug diversification by high-throughput experimentation with geometric deep learning, Nature Chemistry (2023). DOI: 10.1038/s41557-023-01360-5

More here:

New AI model identifies new pharmaceutical ingredients and improves existing ones - Phys.org

Read More..

Why is AI hard to define? | BCS – BCS

A good working definition of applied AI is: the acquisition, manipulation, and exploitation of knowledge by systems whose behaviours may change on the basis of experience, or which are not constrained to be predictable or deterministic.

That (applied) AI knowledge can be:

Hybrid approaches, e.g. expert-validated machine learning, work well. Some large pre-trained models use this approach to a surprising degree.

Complex ecosystems of software systems can exhibit emergent behaviour, or intelligence. Just as ant colonies exhibit more intelligence than individual ants, AI-behaviour can emerge from complex ordinary software systems.

Until recently, once an AI technique was established, it was no longer perceived as AI; knowing how the rabbit is pulled out of the hat destroys the magic. This was the de facto moving-goal-posts definition of AI: "that which a computer can't do".

AI used to be wide but shallow: horizontally applicable, but not powerful, such as a 1990s multi-lingual summariser which, though effective, had little idea of what it was writing. Alternatively AI could be deep but narrow: powerful only on tightly related problems.

The art of the computer scientist is explored in Professor Wirth's influential book, Algorithms + Data Structures = Programs. But some AI systems are now either creating algorithms and data structures, or acting as if they have:

GLLMs have changed perceptions: AI can at last do things again, and AI systems which invent programs (self-programming computers?) are both wide and deep. Some even give an appearance of edging up from machine intelligence towards sentience; should accidental or deliberate machine sentience arrive, we won't necessarily understand or even recognise it.

With greater public understanding of AI capabilities, the label 'AI' is less frequently used simply to glamourise mundane software, though it remains a popular buzz-word, replacing the meaningless 'big data'.

AI discussions often conflate its three depths. Overloaded terms help marketing, but hinder understanding: "deep learning" means a neural net with more than three levels, but is often misunderstood as "profound" learning.
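
To make the narrow technical sense concrete, here is a minimal sketch (PyTorch is assumed purely for illustration) of a network that counts as "deep" simply because it stacks more than three layers; nothing in that definition implies the learning is profound.

```python
# "Deep" in the technical sense used above: a neural network with more than
# three layers, nothing more. PyTorch is assumed purely for illustration.
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),  # layer 1
    nn.Linear(64, 64), nn.ReLU(),  # layer 2
    nn.Linear(64, 64), nn.ReLU(),  # layer 3
    nn.Linear(64, 2),              # layer 4: depth > 3, hence "deep"
)
```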

When systems make decisions, explainability becomes important when welfare is at stake. Explainability is the AI equivalent of human accountability. Arguably there is a need to make GLLMs explainable. Unfortunately, by their very black-box (neural net) nature they are not. Powerful AI (which learns its own knowledge representations and reasoning techniques) might be necessarily intrinsically opaque, with unexplainable decisions.

Misunderstanding AI characteristics can lead people to try regulating AI techniques, but it is only the system's effect that might be regulated, not the means used to achieve it. A wrongly declined mortgage has equal impact whether due to a requirements mistake, biased dataset, database error, bug, incorrect algorithm, or misapplied AI technique. Regulating AI as if it were just clever software would impinge on the fundamental characteristics from which its capability flows, and inhibit its benefits. A reasonable requirement would be that any system, not just AI, which impinges on welfare must be able to explain its decisions.
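
A minimal sketch of that requirement, with invented rule names and thresholds: whatever technique produces the decision (hand-written rules, a statistical model or a learned one), the system returns the grounds for its decision alongside the outcome.

```python
# Illustrative only: a decision that carries its own explanation. The rules
# and thresholds below are invented, not drawn from any real lender or from
# the article above.
def assess_mortgage(income, loan, credit_score):
    reasons = []
    if loan > 4.5 * income:
        reasons.append("loan exceeds 4.5x income")
    if credit_score < 620:
        reasons.append("credit score below 620")
    decision = "declined" if reasons else "approved"
    return {"decision": decision, "reasons": reasons or ["all checks passed"]}

print(assess_mortgage(income=40_000, loan=200_000, credit_score=700))
# {'decision': 'declined', 'reasons': ['loan exceeds 4.5x income']}
```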

As a colleague observed, defining AI is like defining time: we all think we know what it means, but it is actually hard to pin down. Just as our understanding of time changes, appropriately enough, with time, so AI itself may cause us to change our definition of AI.

Andrew Lea (FBCS), with the connivance of the BCS AI interest group - based on his four decades of applying AI in commerce, industry, aerospace and fraud detection - explores why AI is so hard to define. He has been fascinated by AI ever since reading Natural Sciences at Cambridge and studying Computing at London University.

Read this article:

Why is AI hard to define? | BCS - BCS

Read More..

The top 9 AI people in finance – Business Insider

Generative AI is the hottest venture capital investment theme in at least a decade.

Hope and hype around the technology also powered a rebound in the public equity market in 2023, following a bruising slump the previous year.

Business Insider's 2023 AI 100 list includes several experts who combine AI know-how with experience in areas of finance such as payments, trading, banking, financial data, and startup investing.

Billionaire investor and LinkedIn founder Reid Hoffman was all in on AI before it was all the rage among venture capitalists. He was an early investor in OpenAI, his firm Greylock has backed dozens of AI startups in the past decade, and he co-founded Inflection AI, a startup that has raised $1.5 billion from Microsoft, Nvidia, and Microsoft cofounder Bill Gates. Unsurprisingly, Hoffman is "beating the positive drum very loudly" on AI, he told The New York Times earlier this year.

Guo made a name for herself backing buzzy startups, including several up-and-coming AI companies. So when she launched her own $100 million venture capital firm Conviction in 2022, there was no question about the fund's focus. At Conviction, she's put early checks into AI startups including Harvey, an AI company for law firms, and business analytics AI company Seek AI. She also co-hosts a popular AI podcast with entrepreneur and investor Elad Gil titled "No Priors," which features interviews with prominent AI and machine learning founders and experts.

The press has dubbed Casado as Andreessen Horowitz's "AI Crusader," and the investor has been on a mission to show Silicon Valley and Washington, DC, the benefits of AI. Casado was an early advocate for the potential opportunity of generative AI and has helped the firm make early bets on startups like Pinecone and Coactive. Casado himself has some experience with success in startup-land; his A16Z-backed software company Nicira was bought by VMware for $1.26 billion in 2012.

Born and raised in Silicon Valley, Huang has witnessed transformational technology companies growing up in her backyard of Mountain View. Now Huang, a partner at Sequoia Capital, bets on the companies that will be the future leaders of AI and has helped the firm land investments by funding splashy AI startups like Harvey and LangChain. Huang has especially been excited about the possibilities of generative AI, even penning a blog post on Sequoia's website with an open call for founders to pitch their AI startups to the fund.

Dr. Kambadur heads the AI Engineering group at Bloomberg, which consists of over 250 researchers and engineers. Dr. Kambadur, who was previously a researcher at IBM, and his team of academics use AI to develop research, communications, financial analytics, and trading systems for the financial data giant. Bloomberg is betting big on AI to streamline its products and operations. Kambadur recently said he is looking to grow its AI engineering team by as many as 50 engineers in London and New York City by the end of the year.

Taneja is Visa's president of technology and leads its AI efforts. Visa invests hundreds of millions of dollars into AI and data infrastructure annually to improve payment security, risk management, and the employee experience. "AI is going to be a huge part of how we grow, but it'll also be a part and parcel of everybody's work," Taneja told Insider. Visa has leveraged AI since 1993 and currently uses over 300 AI models that help with everything from securing its massive telecommunications network to fraud prevention, in one instance preventing $27 billion worth in a single year, Taneja said.

Veloso leads AI research at the biggest bank in the US. Her team of researchers, engineers, and mathematicians help define JPMorgan's approach to AI from an academic and research perspective. While her team doesn't handle AI deployments at JPMorgan, they are at the forefront of exploring what is and isn't possible with AI. As the former head of Carnegie Mellon University's machine learning department, Veloso leverages her strong academic background to explore how AI can be used to fight financial crime, manage the bank's massive data estate, and comply with regulations. Veloso is also a member of a new unit dedicated to data, analytics, and AI strategy at JPMorgan.

As co-chief investment officer of one of the largest global hedge funds, Jensen established a team of 17 to reinvent Bridgewater with AI and machine learning. Bridgewater even has a fund run with machine-learning techniques. Jensen believes machines can already replicate and outpace human reasoning. "You have an 80th-percentile investment associate technologically. You have millions of them at once. And if you have the ability to control their hallucinations and their errors by having a rigorous statistical backdrop, you could do a tremendous amount at a rapid rate," Jensen told Bloomberg. He's long invested in AI, participating in OpenAI's first fundraise, and he wrote the first check for generative AI startup Anthropic.

As Goldman Sachs' chief information officer, Argenti defines the bank's AI strategy and leads a 12,000-employee engineering organization. His AI-focused team applies applications that span improving client services, accelerating app deployment, and reducing manual efforts and costs involved in operational tasks. His first AI applications focused on helping software developers cut down on repetitive tasks like testing and making it easier to share, document, and summarize code within Goldman. He also launched an AI application to classify and categorize the millions of documents the bank receives and is experimenting with large language models to extract data from these documents for employees to take action more quickly and easily.

Original post:

The top 9 AI people in finance - Business Insider

Read More..