Building trust in artificial intelligence: lessons from the EU AI Act | The Strategist

Artificial intelligence will radically transform our societies and economies in the next few years. The world's democracies, together, have a duty to minimise the risks this new technology poses through smart regulation, without standing in the way of the many benefits it will bring to people's lives.

There is strong momentum for AI regulation in Australia, following its adoption of a government AI strategy and a national set of AI ethics principles. Just as Australia begins to define its regulatory approach, the European Union has reached political agreement on the EU AI Act, the world's first and most comprehensive legal framework on AI. That gives Australia an opportunity to learn from the EU's experience.

The EU embraces the idea that AI will bring many positive changes. It will improve the quality and cost-efficiency of our healthcare sector, allowing treatments that are tailored to individual needs. It can make our roads safer and prevent millions of casualties from traffic accidents. It can significantly improve the quality of our harvests, reducing the use of pesticides and fertiliser, and so help feed the world. Last but not least, it can help fight climate change, reducing waste and making our energy systems more sustainable.

But the use of AI isn't without risks, including risks arising from the opacity and complexity of AI systems and from intentional manipulation. Bad actors are eager to get their hands on AI tools to launch sophisticated disinformation campaigns, unleash cyberattacks and step up their fraudulent activities.

Surveys, including some conducted in Australia, show that many people don't fully trust AI. How do we ensure that the AI systems entering our markets are trustworthy?

The EU doesn't believe that it can leave responsible AI wholly to the market. It also rejects the other extreme: the autocratic approach of countries like China, which ban AI models that don't endorse government policies. The EU's answer is to protect users and bring trust and predictability to the market through targeted product-safety regulation, focusing primarily on high-risk applications of AI technologies and on powerful general-purpose AI models.

The EU's experience with its legislative process offers five key lessons for approaching AI governance.

First, any regulatory measures must focus on ensuring that AI systems are safe and human-centric before they can be used. To generate the necessary trust, AI systems must be checked against core principles such as non-discrimination, transparency and explainability. AI developers must train their systems on adequate datasets, maintain risk-management systems and provide technical measures for human oversight. Automated decisions must be explainable; arbitrary black-box decisions are unacceptable. Deployers must also be transparent and inform users when an AI system generates content such as deepfakes.

Second, rules should focus not on the AI technology itself, which develops at lightning speed, but on governing its use. Focusing on use cases (for example, in healthcare, finance, recruitment or the justice system) ensures that regulations are future-proof and don't lag behind rapidly evolving AI technologies.

The third lesson is to follow a risk-based approach. Think of AI regulation as a pyramid, with different levels of risk. In most cases, the use of AI poses no or only minimal risk: for example, when receiving music recommendations or relying on navigation apps. For such uses, no rules, or only soft ones, should apply.

However, in a limited number of situations where AI is used, decisions can have material effects on people's lives: for example, when AI makes recruitment decisions or determines mortgage eligibility. In these cases, stricter requirements should apply, and AI systems must be checked for safety before they can be used, as well as monitored after they're deployed. Some uses that pose unacceptable risks to democratic values, such as social scoring systems, should be banned completely.
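To make the pyramid concrete, here is a minimal illustrative sketch of how its tiers might be encoded. The category names, example use cases and default behaviour are simplified assumptions for illustration, not the act's legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned completely"           # e.g. social scoring
    HIGH = "pre-market checks plus monitoring"   # e.g. recruitment, mortgages
    MINIMAL = "no or soft rules"                 # e.g. recommendations

# Illustrative mapping only; the act defines these categories
# in far greater legal detail.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_decisions": RiskTier.HIGH,
    "mortgage_eligibility": RiskTier.HIGH,
    "music_recommendation": RiskTier.MINIMAL,
    "navigation_app": RiskTier.MINIMAL,
}

def treatment_for(use_case: str) -> str:
    """Return the regulatory treatment for a use case.

    Unknown use cases default to MINIMAL here purely to keep the
    sketch short; a real regime would require a proper assessment.
    """
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(treatment_for(case))
```

The point of the structure is that obligations attach to the use case, not to the underlying model, which is what keeps the scheme stable as the technology evolves.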

Specific attention should be given to general-purpose AI models, such as GPT-4, Claude and Gemini. Given their potential for downstream use for a wide variety of tasks, these models should be subject to transparency requirements. Under the EU AI Act, general-purpose AI models will be subject to a tiered approach. All models will be required to provide technical documentation and information on the data used to train them. The most advanced models, which can pose systemic risks to society, will be subject to stricter requirements, including model evaluations (red-teaming), risk identification and mitigation measures, adverse event reporting and adequate cybersecurity protection.
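The tiered treatment of general-purpose models can likewise be sketched as simple branching logic: baseline transparency duties for every model, with additional duties triggered for models deemed to pose systemic risk. The field names and structure below are assumptions for illustration, not the act's legal test:

```python
from dataclasses import dataclass

@dataclass
class GeneralPurposeModel:
    name: str
    systemic_risk: bool  # whether the model is deemed to pose systemic risk

def gpai_obligations(model: GeneralPurposeModel) -> list[str]:
    """Baseline duties for all models; stricter duties for systemic-risk models."""
    duties = [
        "provide technical documentation",
        "disclose information on training data",
    ]
    if model.systemic_risk:
        duties += [
            "run model evaluations (red-teaming)",
            "identify and mitigate risks",
            "report adverse events",
            "maintain adequate cybersecurity protection",
        ]
    return duties

if __name__ == "__main__":
    print(gpai_obligations(GeneralPurposeModel("frontier-model", systemic_risk=True)))
```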

Fourth, enforcement should be effective but not burdensome. The act aligns with the EU's longstanding product-safety approach: certain risky systems must be assessed before being put on the market, to protect the public. The act classifies AI systems as high-risk if they are used in products covered by existing product-safety legislation, or in certain critical areas, including employment and education. Providers of these systems must ensure that their systems and governance practices conform to regulatory requirements. Designated authorities will oversee providers' conformity assessments and take action against non-compliant providers. For the most advanced general-purpose AI models, the new regulation establishes an EU AI Office to ensure efficient, centralised oversight of models posing systemic risks to society.

Lastly, developers of AI systems should be held to account when those systems cause harm. The EU is currently updating its liability rules to make it easier for those who have suffered damage from AI systems to bring claims and obtain relief, which will surely prompt developers to exercise even greater due diligence before putting AI on the market.

The EU believes an approach built around these five key tenets is balanced and effective. However, while the EU may be the first democracy to establish a comprehensive framework, we need a global approach to be truly effective. For this reason, the EU is also active in international forums, contributing to the progress made, for example, in the G7 and the OECD. To ensure effective compliance, though, we need binding rules. Working closely together as like-minded countries will enable us to shape an international approach to AI that is consistent with, and based on, our shared democratic values.

The EU supports Australia's promising efforts to put in place a robust regulatory framework. Together, Australia and the EU can promote a global standard for AI governance: a standard that boosts innovation, builds public trust and safeguards fundamental rights.
