Artificial Intelligence’s Struggles with Accuracy and the Potential … – Fagen wasanni

Artificial intelligence (AI) has been making notable strides in many fields, but its struggles with accuracy are well documented. The technology has produced falsehoods and fabrications ranging from fake legal decisions to pseudoscientific papers and even sham historical images. While many of these inaccuracies are trivial and easily debunked, in some cases AI creates and spreads fiction about specific individuals, threatening their reputations and leaving them with few options for protection or recourse.

One example is Marietje Schaake, a Dutch politician and international policy director at Stanford University. When a colleague asked BlenderBot 3, a conversational AI developed by Meta, "Who is a terrorist?", the bot incorrectly answered with Schaake's name. Schaake, who has never engaged in any illegal or violent activity, expressed concern about how people with less ability to prove their identities could be harmed by such false information.

Similarly, OpenAI's ChatGPT chatbot linked a legal scholar to a non-existent sexual harassment claim, causing reputational damage. High school students in New York created a deepfake video of a local principal, raising concerns about AI's potential to spread false information about individuals' sexual orientation or job candidacy.

While some adjustments have been made to improve AI accuracy, the problems persist. Meta, for instance, later acknowledged that BlenderBot had combined unrelated pieces of information to incorrectly classify Schaake as a terrorist, and the company shut the project down in June.

Legal precedent surrounding AI is limited, but individuals are starting to take legal action against AI companies. In one case, an aerospace professor filed a defamation lawsuit against Microsoft after the company's Bing chatbot conflated his biography with that of a convicted terrorist. OpenAI also faced a libel lawsuit from a radio host in Georgia over false accusations made by ChatGPT.

The inaccuracies arise partly from the scarcity of reliable information about a given person online and partly from the technology's reliance on statistical pattern prediction. As a result, chatbots may generate false biographical details or mash up identities, a phenomenon some researchers call "Frankenpeople."
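
To see why statistical prediction alone can yield fluent falsehoods, consider a deliberately tiny sketch (the word counts below are invented for illustration and bear no relation to any real system): a model that picks the next word by likelihood will happily emit a plausible-sounding claim whether or not it is true.

```python
import random

# Toy next-word frequency table (hypothetical counts). A real language model
# is vastly larger, but the principle is the same: it scores continuations
# by statistical likelihood, not by factual truth.
bigram_counts = {
    "Schaake": {"is": 8, "works": 2},
    "is": {"a": 9, "the": 1},
    "a": {"politician": 5, "terrorist": 3, "professor": 2},
}

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return None
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

def generate(start, max_len=5):
    """Greedy-ish generation: extend the sentence until no continuation exists."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("Schaake"))
# Sometimes "Schaake is a politician" (true), sometimes
# "Schaake is a terrorist" (false). The model cannot tell the difference:
# both are statistically plausible word sequences.
```

A production model is incomparably larger and more sophisticated, but the core issue is the same: likelihood scoring has no built-in notion of factual truth.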

To mitigate accidental inaccuracies, Microsoft and OpenAI employ content filtering, abuse detection, and other tools. Both companies also encourage users to provide feedback and not to rely solely on AI-generated content. They aim to enhance AI's fact-checking capabilities and to develop mechanisms for recognizing and correcting inaccurate responses.
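
The article does not specify which tools these companies use, but one common content-filtering pattern is to screen generated text with a moderation endpoint before it reaches users. The sketch below assumes the official `openai` Python SDK (v1+) and its Moderations API; the model name and the surrounding scaffolding are illustrative, not a description of either company's actual pipeline.

```python
from openai import OpenAI  # assumes the official `openai` Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_safe(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    response = client.moderations.create(
        model="omni-moderation-latest",  # illustrative model choice
        input=text,
    )
    return not response.results[0].flagged

draft = "Some chatbot output about a named individual..."
if is_safe(draft):
    print(draft)
else:
    print("Response withheld pending review.")
```

Filters like this are only one layer: they can catch overtly harmful content, but they cannot verify factual claims about a person, which is why the companies also solicit user feedback.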

Furthermore, Meta has released its LLaMA 2 AI technology for community feedback and vulnerability identification, emphasizing ongoing efforts to enhance safety and accuracy.

However, AI can also be abused intentionally. Cloned audio, for example, has become a widespread problem, prompting government warnings about AI-generated voice scams.

As AI continues to evolve, it is crucial to address its limitations and potential harm. Stricter regulations and safeguards are necessary to prevent the spread of false information and protect individuals from reputational damage.
