AI is muddying the truth. We've known how to fix it for centuries. – The Boston Globe

Not only does our concept of truth feel more slippery today, but the long-established ways we arrive at insights and decisions are being compromised. Having worked in the data and information sector for a combined five decades, we are very concerned that, left unchecked, the rapid rollout of generative AI could erode the epistemological foundations of society: the ways in which we construct knowledge. As the cognitive scientist Douglas Hofstadter wrote in The Atlantic, it "could well undermine the very nature of truth on which our society, and I mean all of human society, is based."

The White House's recent announcement that it has secured voluntary commitments from a handful of companies to improve the safety of their AI technology is a start, but it does not address the fundamental risk humanity faces: the end of our ability to discern truth. As our society confronts existential crises, from climate change and pandemic preparedness to systemic racism and the fragility of democracy, we urgently need to protect trust in evidence-based decision making.

Among the White House's proposals to regulate generative AI is a watermarking system, a step in the right direction, but one that falls far short of enforcing transparency and verifiability. Should it actually be adopted, some people will see the AI watermark and reflexively discount the content as fake news; some won't see the watermark at all; and others, scrolling through their social media feeds or otherwise trying to digest massive amounts of information, will trust the output purely out of convenience.

More fundamentally, the question of whether a news story or journal article is AI-generated is distinct from whether that content is fact-based or credible. To truly enhance trust in, and support for, evidence-based decisions, the public (and our regulatory agencies) needs an audit trail back to the underlying data sources, methodologies, and prompts. We need to be able to answer questions like: How was this conclusion reached? How was this diagnosis made?

Despite its well-known flaws, the centuries-old scientific method, along with its counterparts across law, medicine, and journalism, is the best approach humanity has found for arriving at testable, reliable, and revisable conclusions and predictions about the world. We observe, hypothesize, test, analyze, report, and repeat our way to a truer understanding of the world and to more effective ways of improving it.

Decision making in modern, democratic society is underpinned by this method. Tools such as peer review in scientific journals and fact-checking ensure meritocracy, reliability, and self-correction. Randomized controlled trials ensure effectiveness; jurisprudence takes legal precedents into account. Also built into the scientific method is humility about the limitations of what is knowable by a given means at a given point in time, and honesty about the confidence we can place in any conclusion based on how it was arrived at.

An answer generated by an AI chatbot that is trained to sound authoritative, but that has no observed, experienced, or measured model of the world to align with and cannot cite its sources or explain how it used them, violates these principles and standards. If you haven't yet experienced an AI hallucination, just ask a chatbot to create a bio of you. It is likely to attribute work to you that you had no hand in and to name cities of residence where you never lived.

There is also an important historical relationship between how we know and how we govern. It can be argued that the reason and logic that defined the Scientific Revolution of the 16th and 17th centuries were also the foundation for democratic thought in Europe and, later, the Declaration of Independence. At this already-perilous moment for democracy around the world, we should at least ponder this link.

Some might argue that letting generative AI technologies run unchecked is the right course in the name of technological progress; the path to artificial general intelligence may produce breakthroughs that reveal deeper truths about the universe or better solutions to the world's challenges. But that assessment should be society's to make, not a handful of corporations', before these technologies are more widely deployed.

We must build trust and transparency into any AI system that is intended to support decision making. We could train AI systems on source material that adheres to society's highest standards of trust, such as peer-reviewed scientific literature corrected for retractions. We could design them to extract facts and findings about the world from reliable source material and to use those findings exclusively when generating answers. We could require that they cite their sources, show their work, and be honest about their limitations and biases, reflecting uncertainty back to the user. Efforts are already underway to build these mechanisms into AI, with the hope that they can actually level up society's expectations for transparency and accountability.

Evidence-based decision making should immediately become a principle of nascent international AI governance efforts, especially as countries with diverse models of governance introduce AI regulations. Appropriate governance need not compromise scientific and technological progress.

We should also keep in mind that the methods and legitimacy of science have been, and continue to be, appropriated for scientific racism. As we consider how decisions are made in both the private and public sectors, from hiring and college admissions to government policy, we must consider the sources on which we base them. Modern society is full of historical bias, discrimination, and subjugation. AI should be used to shine a light on these inequities, not to calcify them further into the training data of automated and impenetrable decisions for decades to come.

We have a once-in-a-century opportunity to define, collectively, a more rational, explainable, systemic, inclusive, and equitable basis for decision making powered by AI. Perhaps we can even chart a future in which AI helps inoculate humanity against our own fallibility, gullibility, and bias in the interest of a fairer society and healthier public sphere.

Let's not waste this moment.

Adam Bly is the founder and CEO of System. He was formerly vice president of data at Spotify and a visiting fellow in science, technology, and society at Harvard Kennedy School. Amy Brand is director and publisher of the MIT Press. Send comments to magazine@globe.com.
