As regulators talk tough, tackling AI bias has never been more urgent

The rise of powerful generative AI tools like ChatGPT has been described as this generation's iPhone moment. In March, the OpenAI website, which lets visitors try ChatGPT, reportedly reached 847 million unique monthly visitors. Amid this explosion of popularity, the level of scrutiny placed on gen AI has skyrocketed, with several countries acting swiftly to protect consumers.

In April, Italy became the first Western country to block ChatGPT on privacy grounds, only to reverse the ban four weeks later. Other G7 countries are considering a coordinated approach to regulation.

The UK will host the first global AI regulation summit in the fall, with Prime Minister Rishi Sunak hoping the country can drive the establishment of guardrails on AI. Its stated aim is to ensure AI is developed and adopted safely and responsibly.

Regulation is no doubt well-intentioned. Clearly, many countries are aware of the risks posed by gen AI. Yet all this talk of safety is arguably masking a deeper issue: AI bias.

Although the term AI bias can sound nebulous, it's easy to define. Also known as algorithmic bias, AI bias occurs when human biases creep into the datasets on which AI models are trained. The data, and the models trained on it, then reflect any sampling bias, confirmation bias and human prejudices (around gender, age, nationality or race, for example), clouding the independence and accuracy of the technology's output.

As gen AI becomes more sophisticated, impacting society in ways it hadn't before, dealing with AI bias is more urgent than ever. This technology is increasingly used to inform tasks like facial recognition, credit scoring and crime risk assessment. Clearly, accuracy is paramount with such sensitive outcomes at play.

Examples of AI bias have already been observed in numerous cases. When OpenAI's DALL-E 2, a deep learning model used to create artwork, was asked to create an image of a Fortune 500 tech founder, the pictures it supplied were mostly white and male. When asked whether well-known blues singer Bessie Smith influenced gospel singer Mahalia Jackson, ChatGPT could not answer the question without further prompts, raising doubts about its knowledge of people of color in popular culture.

A study conducted in 2021 around mortgage loans discovered that AI models designed to determine approval or rejection did not offer reliable suggestions for loans to minority applicants. These instances prove that AI bias can misrepresent race and gender, with potentially serious consequences for users.

AI that produces offensive or skewed results can usually be traced back to the way the AI learns and the dataset it is built on. If the data over-represents or under-represents a particular population, the AI will repeat that bias, generating even more biased data.
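
To make this concrete, here is a minimal sketch of the kind of representation check described above, assuming a tabular training set loaded with pandas. The file name and the "gender" and "approved" columns are hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch: spotting over- or under-representation in training data.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of the dataset belonging to each group."""
    return df[group_col].value_counts(normalize=True)

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-label rate per group; large gaps can signal biased labels."""
    return df.groupby(group_col)[label_col].mean()

if __name__ == "__main__":
    # "loan_training_data.csv", "gender" and "approved" are hypothetical names.
    df = pd.read_csv("loan_training_data.csv")
    print(representation_report(df, "gender"))
    print(outcome_rate_by_group(df, "gender", "approved"))
```

Checks like these don't remove bias on their own, but they surface skewed representation early, before a model amplifies it.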

For this reason, it's important that any regulation enforced by governments doesn't view AI as inherently dangerous. Rather, any danger it poses is largely a function of the data it's trained on. If businesses want to capitalize on AI's potential, they must ensure the data it is trained on is reliable and inclusive.

To do this, greater access to an organization's data for all stakeholders, both internal and external, should be a priority. Modern databases play a huge role here, as they can manage vast amounts of user data, both structured and semi-structured, and can quickly discover, react to, redact and remodel the data once any bias is detected. This greater visibility and manageability over large datasets means biased data is at less risk of creeping in undetected.
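
As an illustration of the "discover and redact" step, the sketch below assumes records arrive as Python dicts (for example, JSON documents pulled from a document database). The sensitive field names and the "<REDACTED>" placeholder are hypothetical, not tied to any particular database product.

```python
# Minimal sketch of a "discover and redact" pass over semi-structured records.
from typing import Iterable, Iterator

SENSITIVE_FIELDS = {"race", "religion", "ssn"}  # hypothetical attributes to mask

def redact(record: dict, fields: set = SENSITIVE_FIELDS) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {k: ("<REDACTED>" if k in fields else v) for k, v in record.items()}

def redact_stream(records: Iterable[dict]) -> Iterator[dict]:
    """Apply redaction across a stream of documents before they reach training."""
    for record in records:
        yield redact(record)

if __name__ == "__main__":
    docs = [{"name": "A. Smith", "race": "example", "income": 52000}]
    print(list(redact_stream(docs)))
```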

Furthermore, organizations must train data scientists to better curate data while implementing best practices for collecting and scrubbing data. Taking this a step further, the data training algorithms must be made open and available to as many data scientists as possible to ensure that more diverse groups of people are sampling it and can point out inherent biases. In the same way modern software is often open source, so too should appropriate data be.
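
By way of illustration, a scrubbing pass of the kind such best practices might include could look like the minimal sketch below: deduplicating records, dropping rows with missing demographic fields and normalizing categorical labels so the same group isn't counted under several spellings. The file and column names are hypothetical.

```python
# Minimal sketch of a data-scrubbing pass before training.
import pandas as pd

def scrub(df: pd.DataFrame, group_cols: list) -> pd.DataFrame:
    """Deduplicate, drop incomplete demographic rows and normalize labels."""
    cleaned = df.drop_duplicates().dropna(subset=group_cols).copy()
    for col in group_cols:
        # "Female" / "female " / "FEMALE" all become "female"
        cleaned[col] = cleaned[col].str.strip().str.lower()
    return cleaned

if __name__ == "__main__":
    raw = pd.read_csv("applicants_raw.csv")  # hypothetical input file
    clean = scrub(raw, ["gender", "nationality"])
    clean.to_csv("applicants_clean.csv", index=False)
```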

Organizations have to be constantly vigilant and appreciate that this is not a one-time action to complete before going into production with a product or a service. The ongoing challenge of AI bias calls for enterprises to look at incorporating techniques that are used in other industries to ensure general best practices.

Blind tasting tests borrowed from the food and drink industry, red team/blue team tactics from the cybersecurity world or the traceability concept used in nuclear power could all provide valuable frameworks for organizations in tackling AI bias. This work will help enterprises to understand the AI models, evaluate the range of possible future outcomes and gain sufficient trust with these complex and evolving systems.

In previous decades, talk of regulating AI was arguably putting the cart before the horse. How can you regulate something whose impact on society is unclear? A century ago, no one dreamt of regulating smoking because it wasn't known to be dangerous. AI, by the same token, wasn't under any serious threat of regulation; any sense of its danger was confined to sci-fi films with no basis in reality.

But advances in gen AI and ChatGPT, as well as progress towards artificial general intelligence (AGI), have changed all that. Some national governments seem to be working in unison to regulate AI, while, paradoxically, others are jockeying for position as AI regulators-in-chief.

Amid this hubbub, it's crucial that AI bias doesn't become overly politicized and is instead viewed as a societal issue that transcends political stripes. Across the world, governments, alongside data scientists, businesses and academics, must unite to tackle it.

Ravi Mayuram is CTO of Couchbase.
