Why generative AI is 'alchemy,' not science

A New York Times article this morning, titled "How to Tell if Your AI Is Conscious," says that in a new report, scientists offer a list of measurable qualities based on a brand-new science of consciousness.

The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert, a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, called The Retort, along with Hugging Face researcher Nathan Lambert, with an inaugural episode that pushes back on the idea of today's AI as a truly scientific endeavor.

Gilbert maintains that much of today's AI research cannot reasonably be called science at all. Instead, it can be viewed as a new form of alchemy: the medieval forerunner of chemistry, which can also be defined as a seemingly magical process of transformation.

"Many critics of deep learning and of large language models, including those who built them, sometimes refer to AI as a form of alchemy," Gilbert told me on a video call. What they mean by that, he explained, is that it's not scientific, "in the sense that it's not rigorous or experimental." But he added that he actually means something more literal when he says that AI is alchemy.

"The people building it actually think that what they're doing is magical," he said. "And that's rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and superintelligence." The prevailing idea, he explained, is that intelligence itself is scalar, depending only on the amount of data thrown at a model and the computational limits of the model itself.
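That "scalar" picture does have a concrete form in the empirical scaling laws that labs fit to their models. As one illustration (the functional form below is from DeepMind's Chinchilla paper, not anything Gilbert cited), a model's loss L is modeled purely as a function of its parameter count N and its training-token count D:

L(N, D) = E + A/N^α + B/D^β

Here E is an irreducible loss floor and A, B, α and β are fitted constants. On the scalar view, lower loss simply reads as "more intelligence," so capability becomes a matter of pushing N and D ever higher.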

But, he emphasized, like alchemy, much of today's AI research is not necessarily trying to be what we know as science, either. The practice of alchemy historically had no peer review or public sharing of results, for example; much of today's closed AI research has neither.

"It was very secretive, and frankly, that's how AI works right now," he said. "It's largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet, and then building computation and structuring it such that you can distill that web of knowledge that we've all been building for decades now, and then seeing what comes out."

I was particularly interested in Gilbert's thoughts on alchemy given the current AI discourse, which seems to me to include some doozies of cognitive dissonance: There was the Senate's closed-door AI Insight Forum, where Elon Musk called for AI regulators to serve as a "referee" to keep AI safe, even as he actively works on using AI to put microchips in human brains and make humans a multiplanetary species. There was the EU parliament saying that AI extinction risk should be a global priority, while at the same time OpenAI CEO Sam Altman said hallucinations can be seen as a positive part of the "magic" of generative AI, and that superintelligence is simply an "engineering problem."

And there was DeepMind co-founder Mustafa Suleyman, who would not explain to MIT Technology Review how his company Inflection's Pi manages to refrain from toxic output ("I'm not going to go into too many details because it's sensitive," he said) while calling on governments to regulate AI and appoint cabinet-level tech ministers.

It's enough to make my head spin, but Gilbert's take on AI as alchemy put these seemingly opposing ideas into perspective.

Gilbert clarified that he isn't saying that the notion of AI as alchemy is wrong, but that its lack of scientific rigor needs to be called what it really is.

"They're building systems that are arbitrarily intelligent, not intelligent in the way that humans are, whatever that means, but just arbitrarily intelligent," he explained. "That's not a well-framed problem, because it's assuming something about intelligence that we have very little or no evidence of; that is an inherently mystical or supernatural claim."

AI builders, he continued, don't need to know what the mechanisms are that make the technology work, but they are "interested enough and motivated enough and, frankly, also have the resources enough to just play with it."

The magic of generative AI, he added, "doesn't come from the model. The magic comes from the way the model is matched to the interface. The magic people like so much is that I feel like I'm talking to a machine when I play with ChatGPT. That's not a property of the model; that's a property of ChatGPT, of the interface."
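Gilbert's distinction is easy to demonstrate. The toy sketch below (my illustration, assuming the Hugging Face transformers library and the small gpt2 model, which is nothing like ChatGPT's actual stack) wraps a raw next-token predictor in a chat-shaped loop; the conversational feel lives entirely in the transcript formatting around the model:

```python
# A toy illustration of Gilbert's point: the "chat" experience is an
# interface wrapped around a raw text-continuation model.
# Assumptions: the Hugging Face transformers library and the small
# gpt2 model; this is not how ChatGPT itself is built.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

transcript = "The following is a conversation with a helpful assistant.\n"

def chat(user_message: str) -> str:
    """One 'chat' turn. The model only ever continues text; the
    illusion of dialogue comes from the User:/Assistant: template."""
    global transcript
    transcript += f"User: {user_message}\nAssistant:"
    continuation = generator(transcript, max_new_tokens=60)[0]["generated_text"]
    # Keep only the newly generated reply, cut off at the next turn.
    reply = continuation[len(transcript):].split("User:")[0].strip()
    transcript += f" {reply}\n"
    return reply

print(chat("Why does talking to you feel like magic?"))
```

Strip away the template and the same model simply rambles onward from whatever text it is handed, which is rather the point.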

In support of this idea, researchers at Alphabet's AI division DeepMind recently published work showing that AI can optimize its own prompts, and that models perform better when prompted to "take a deep breath and work on this problem step-by-step," though the researchers are unsure exactly why this incantation works as well as it does (especially given that an AI model does not actually breathe at all).
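For what it's worth, the incantation is literally just text prepended to the task. Here is a minimal sketch of trying it yourself (the openai client usage and model name are my assumptions for illustration; the phrase itself is the one DeepMind's optimizer discovered):

```python
# Trying DeepMind's discovered prompt by hand.
# Assumptions: the openai Python package (v1 client) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

task = "A train travels 120 miles in 2 hours. At that speed, how far does it travel in 5 hours?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do here
    messages=[{
        "role": "user",
        # The incantation: measurably better benchmark scores,
        # for reasons the researchers themselves can't fully explain.
        "content": "Take a deep breath and work on this problem step-by-step.\n\n" + task,
    }],
)
print(response.choices[0].message.content)
```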

One of the major consequences of the alchemy of AI comes when it intersects with politics, as it is doing now with discussions around AI regulation in the US and the EU, Gilbert said.

"In politics, what we're trying to do is articulate a notion of what is good to do, to establish the grounds for consensus; that is fundamentally what's at stake in the hearings right now," he said. "We have a very rarefied world of AI builders and engineers, who are engaged in the stance of articulating what they're doing and why it matters to the people that we have elected to represent our political interests."

The problem is that we can only guess at the work of Big Tech AI builders, he said. We're living in a "weird moment," he explained, where the metaphors that compare AI to human intelligence are still being used, but the mechanisms are not remotely well understood.

"In AI, we don't really know what the mechanisms are for these models, but we still talk about them like they're intelligent. We still talk about them like there's some kind of anthropological ground that is being uncovered, and there's truly no basis for that."

But while there is no rigorous scientific evidence backing many of the claims of existential risk from AI, that doesn't mean they aren't worthy of investigation, he cautioned. "In fact, I would argue that they're highly worthy of investigation scientifically, [but] when those things start to be framed as a political project or a political priority, that's a different realm of significance."

Meanwhile, the open-source generative AI movement, led by the likes of Meta Platforms with its Llama models along with smaller startups such as Anyscale and Deci, is offering researchers, technologists, policymakers and prospective customers a clearer window onto the inner workings of the technology. But translating the research into non-technical terminology that laypeople, including lawmakers, can understand remains a significant challenge.

That, Gilbert explained, is the key problem with the fact that AI, as alchemy and not science, has become a political project.

"It's a laxity of public rigor, combined with a certain kind of willingness to keep your cards close to your chest, but then say whatever you want about your cards in public, with no robust interface for interrelating the two," he said.

Ultimately, he said, the current alchemy of AI can be seen as tragic.

"There is a kind of brilliance in the prognostication, but it's not clearly matched to a regime of accountability," he said. "And without accountability, you get neither good politics nor good science."
