Artificial Intelligence Act in the European Union (EU): Risks and regulations

The European Commission proposed the Artificial Intelligence Act (AI Act) last April, after over two years of public consultations. The Act lays down "a uniform legal framework [across the EU] for the development, marketing and use of artificial intelligence in conformity with Union values". These values include democracy, freedom, and equality.

The Act applies a risk-based regulatory approach to all providers of AI systems in the EU, irrespective of whether they are established within the Union or in a third country. It prohibits certain kinds of AI, places higher regulatory scrutiny on High Risk AI, and limits the use of certain kinds of surveillance technologies, among other objectives. To implement the regulations, the Act establishes a Union-level European Artificial Intelligence Board, while individual Member States are to designate one or more national competent authorities to implement the Act.

The Act was introduced amid growing recognition of the usefulness of AI in the EU. For example, investing in AI and promoting its use can provide businesses with competitive advantages that support socially and environmentally beneficial outcomes. However, it also appears cognizant of the many risks associated with AI, which can harm protected fundamental rights as well as the public interest. The Act states that it is an attempt to strike a proportionate balance between supporting AI innovation and economic and technological growth, and protecting the rights and interests of EU citizens. Ultimately, the legislation aims to establish a legal framework for trustworthy AI in Europe that helps instil consumer confidence in the technology.


Why it matters: Described by MIT Technology Review as "the most important AI law you've never heard of", the Act, commentators suggest, could once again shape the contours of global technology regulation according to European values if passed. The European Union's (EU) General Data Protection Regulation (GDPR) is already an inspiration for data protection laws in multiple countries, a success story for the EU's brand of Internet regulation that the AI Act explicitly seeks to replicate amid geopolitical rifts in cyber governance. However, some commentators believe the Act's arbitrarily defined "risks" may stifle innovation by batting so heavily for civil liberties; if not, the Act may prohibitively raise compliance costs for companies seeking to do business with the EU. Additionally, the proposed Act reportedly complements the GDPR, other IT laws in the Union, and various EU charters on fundamental rights, a relatively harmonious regulatory approach that may be useful to India as it negotiates IT legislation and harms across a battery of emerging sectors.

Article 3 of the AI Act defines AI as any software that "is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".

This definition is intended to be "technology neutral" and "future proof", which means it aims to be broad enough to cover new uses of AI in the coming years.

Protecting citizen rights and freedoms is critical, as the Act notes. However, doing so should not outright hinder how all AI is used across the EU; after all, some AI systems demand higher levels of scrutiny than others. The Act's approach centres on maintaining regulatory proportionality.

What this means: the Act deploys a risk-based regulatory approach that imposes restrictions and transparency obligations on AI systems based on their potential to cause harm. This, it hopes, will limit regulatory oversight to only sensitive AI systems, resulting in fewer restrictions on the trade and use of AI within the single market. Two types of AI systems are discussed at length in the Act: Prohibited and High Risk AI systems.

Unacceptable or Prohibited AI Systems: The Act prohibits the use of certain types of AI for the unacceptable risks they pose. These systems can be used for manipulative, exploitative, and social control practices. They would violate Union values of freedom, equality, and democracy, among others. They would also violate Fundamental Rights in the EU, including the rights to non-discrimination and privacy, as well as the rights of the child.

What harms do these systems pose?: For example, AI systems that distort human behaviour may cause psychological harm through subliminal techniques that humans cannot perceive. AI social scoring systems (parallels of which are seen in China) may discriminate against individuals or social groups based on data that is devoid of context. Facial Recognition Technologies used by law enforcement agencies are also considered violations of the right to privacy and should be prohibited, except in three narrowly defined scenarios where protecting the public interest outweighs the risks of the AI system: searching for the victims of a crime; investigating terrorist threats or threats to a person's life and safety; and detecting, localising, identifying, or prosecuting the perpetrators of specific crimes in the EU.

High Risk AI Systems: High Risk AI systems are those which may significantly harm the safety, health, or fundamental rights of people in the EU. These systems are often incorporated into larger human-operated services.

What harms do these systems pose?: Examples include autonomous robots performing complex tasks (such as in the automotive industry). In the education sector, testing systems powered by AI could perpetuate discriminatory and stigmatising attitudes toward specific students, affecting their education and livelihood. The same is the case for AI systems determining creditworthiness, given that they can shape who has access to financial resources.

How are they regulated?: High Risk systems are not as concerning as Unacceptable systems under the Act, but they still face stronger regulatory scrutiny and can only be placed on the Union market or put into service if they comply with certain mandatory requirements. To develop "a high level of trustworthiness of high-risk AI systems" [among consumers], these systems have to pass a conformity assessment before entering the market, ensuring they meet these uniform standards.

Some ring-fencing initiatives that systems providers must comply with include ensuring that only high-quality data sets are used to power AI systems, to avoid errors and discrimination. Systems providers should also keep detailed records of how the AI system functions to ensure compliance with the Act. To better inform users of potential risks, High Risk systems should be accompanied by relevant documentation and instructions of use, and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination. They should be designed such that human beings can oversee their functioning, as well as be resilient to malicious cyber attacks that attempt to change their behaviour (leading to new harms). In certain cases, users should also be notified that they are interacting with an AI system. The proposal suggests that by 2025, compliance costs for the supplier of an average High Risk AI system worth €170,000 could range between €6,000 and €7,000, roughly 4% of the system's value.
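To make these requirements concrete, the sketch below models them as a simple pre-market checklist in Python. This is purely illustrative and not part of the Act: the conformity assessment is a legal process rather than a software routine, and the field and function names here are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class ConformityChecklist:
    """High Risk system requirements, as summarised in the article (hypothetical model)."""
    high_quality_datasets: bool   # curated data to avoid errors and discrimination
    record_keeping: bool          # detailed records of how the system functions
    user_documentation: bool      # instructions of use, incl. possible risks
    human_oversight: bool         # humans can oversee the system's functioning
    cyber_resilience: bool        # withstands attacks that try to change behaviour
    disclosure_to_users: bool     # users told they are interacting with an AI

def passes_assessment(checklist: ConformityChecklist) -> bool:
    """All mandatory requirements must hold before market entry."""
    return all(getattr(checklist, f.name) for f in fields(checklist))

# Example: a system lacking human oversight fails the assessment.
candidate = ConformityChecklist(True, True, True, False, True, True)
print(passes_assessment(candidate))  # False
```

In this toy model, a single failed requirement blocks market entry, mirroring the Act's framing that all mandatory requirements must be met before a High Risk system can be placed on the Union market.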

In order to foster innovation, the Act encourages EU Member States to develop artificial intelligence regulatory sandboxes, where research can be conducted on these technologies under strict supervision before they reach the market.

Non-High Risk AI Systems: Some AI systems may not induce harms as significant as those above. These can be understood as every AI system that is neither Prohibited nor High Risk. While the Act's provisions don't apply to these simpler systems, it encourages their providers to comply voluntarily to improve public trust in these systems. The Act has little else to say on them.
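Taken together, the Act's tiers amount to a triage: classify a system by its use case, then apply the obligations attached to its tier. The Python sketch below illustrates that logic. It is a hypothetical simplification, not the Act's actual classification mechanism: the tier names follow this article, while the use-case labels and the classify function are invented for illustration.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """The three tiers of AI systems discussed in the Act, per this article."""
    PROHIBITED = auto()      # unacceptable risk: banned outright
    HIGH_RISK = auto()       # permitted only after a conformity assessment
    NON_HIGH_RISK = auto()   # outside the Act's mandatory provisions

# Hypothetical use-case labels for illustration; the Act itself classifies
# systems by enumerated use cases in its annexes, not by string matching.
PROHIBITED_USES = {"subliminal manipulation", "social scoring"}
HIGH_RISK_USES = {"educational testing", "credit scoring", "autonomous robotics"}

def classify(use_case: str) -> RiskTier:
    """Map a declared use case to the tier that determines obligations."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.NON_HIGH_RISK

for use_case in ("social scoring", "credit scoring", "spam filtering"):
    print(f"{use_case}: {classify(use_case).name}")
```

The design point the sketch captures is that obligations attach to the tier rather than to the underlying technology, which is what lets the Act claim to be technology neutral.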

In many ways, the Act re-emphasises the importance of harmonised business and trade across the EU's single market, as well as Brussels' dominance in shaping overarching laws for the bloc. The language of the Act is categorically wary of Member State-level legislation on regulating AI, reiterating that conflicting legislation will only complicate the protection of fundamental rights and the ease of doing business in the EU. That's why the Act positions itself as one that harmonises European values across Member States.

That being said, the language of the Act balances domestic interests with extra-territorial ambition. While it seeks to achieve the above objectives, it repeatedly speaks of the Act's potential to shape global regulation on AI, in line with European values. This is not an unfounded hope for a bloc now known to steer technology laws.

Such outward-looking planks can also be read against a growing discourse in global cyber governance, where debatable dichotomies are drawn by States between the relatively "free" Internet of democracies and the "walled" Internet of China.

While acknowledging the legitimate concerns of algorithmic biases and profiling, some commentators note that the Act's compliance requirements for High Risk AI system providers may be impossible to meet. For example, AI systems make use of massive data sets; ensuring that they are error-free may be a tall order. Additionally, it may not always be possible for a system's operator to fully comprehend how the AI works, especially given the increasing complexity of the technology. If these mechanisms cannot be entirely deciphered, then estimating their potential harms also becomes difficult. Others add that the scope of what constitutes High Risk AI is simply too wide, and may stifle innovation due to exorbitant compliance costs.

Additionally, countries like France oppose prohibiting the use of Facial Recognition Technology, while Germany favours an all-out ban on its use in public spaces. Further deliberations and potential amendments may be the only way out of this intra-EU stalemate.

A report by the UK-based Ada Lovelace Institute further argues that the Act mistakenly conceives of AI systems as a final product. Instead, they are systems delivered dynamically through multiple hands, which means that they impact people not just at the implementation stage, but before that as well. The Act doesn't account for this life cycle of AI. Additionally, it focuses entirely on the risk-based approach, with little isolated discussion of the role played by citizens consuming these services. The report argues that this approach is incompatible with legislation concerned with Fundamental Rights. It further describes the perceived risks of AI as arbitrary, calling for an assessment of these systems based on reviewable criteria. Finally, while the Act spends much time reviewing the risks of Prohibited and High Risk AI, it fails to review the risks of AI services at large.

EU Member States are currently proposing changes to the Act; whether these deficiencies will be addressed, and when, remains to be seen.

This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
