Artificial Intelligence White Paper: What Are The Practical Implications? – Mondaq News Alerts


On 19 February 2020, the European Commission published a White Paper, "On Artificial Intelligence: A European approach to excellence and trust". The purpose of this White Paper on artificial intelligence (AI), leaked versions of which began circulating in January 2020, is to discuss policy options on how to achieve two objectives: (i) promoting the uptake of AI and (ii) addressing the risks associated with certain uses of AI.

Europe aspires to become a "global leader in innovation in the data economy and its applications", and would like to develop an AI ecosystem that brings the benefits of that technology to citizens, businesses and the public interest.

The European Commission identifies two key components that will allow such an AI ecosystem to develop in a way that benefits EU society as a whole: excellence and trust. It highlights the EU's "Ethics Guidelines for Trustworthy Artificial Intelligence" of April 2019 as a core element that is relevant for both of those components.

As with many White Papers, however, the practical implications appear far off in the future. We have therefore included a few notes ("Did you know?") with additional information to illustrate them or show what already exists, and we conclude with some guidance on what you can already do today.

The European Commission identifies several key aspects that will help create an ecosystem of excellence in relation to artificial intelligence:

Where AI is developed and deployed, it must address concerns that citizens might have in relation to e.g. unintended effects, malicious use and lack of transparency. In other words, it must be trustworthy.

In this respect, the White Paper refers to the (non-binding) Ethics Guidelines, and in particular the seven key requirements for AI that were identified in those guidelines:

Yet these guidelines are not a legal framework.

a) Existing laws & AI

There is today no specific legal framework aimed at regulating AI. However, AI solutions are subject to a range of laws, as with any other product or solution: legislation on fundamental rights (e.g. data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules.

[Did you know? AI-powered chatbots used for customer support are not rocket science in legal terms, but the answers they provide are deemed to stem from the organisation, and can thus make the organisation liable. Because such a chatbot needs initial data to understand how to respond, organisations typically "feed" it previous real-life customer support chats and telephone exchanges, but the use of those chats and conversations is subject to data protection rules and rules on the secrecy of electronic communications.]

According to the European Commission, however, current legislation may sometimes be difficult to enforce in relation to AI solutions, for instance because of the AI's opaqueness (the so-called "black box" effect), complexity, unpredictability and partially autonomous behaviour. As such, the White Paper highlights the need to examine whether any legislative adaptations or even new laws are required.

The main risks identified by the European Commission are (i) risks for fundamental rights (in particular data protection, due to the large amounts of data being processed, and non-discrimination, due to bias within the AI) and (ii) risks for safety and the effective functioning of the liability regime. On the latter, the White Paper highlights safety risks, such as an accident that an autonomous car might cause by wrongly identifying an object on the road. According to the European Commission, "[a] lack of clear safety provisions tackling these risks may, in addition to risks for the individuals concerned, create legal uncertainty for businesses that are marketing their products involving AI in the EU".

[Did you know? Data protection rules do not prohibit e.g. AI-powered decision processes or data collection for machine learning, but certain safeguards must be taken into account, and it is easier to do so at the design stage.]

The European Commission recommends examining how legislation can be improved to take these risks into account and to ensure effective application and enforcement, despite AI's opaqueness. It also suggests that it may be necessary to examine and re-evaluate the existing limitations of the scope of legislation (e.g. general EU safety legislation only applies to products, not services), the allocation of responsibilities between different operators in the supply chain, the very concept of safety, etc.

b) A future regulatory framework for AI

The White Paper includes lengthy considerations on what a new regulatory framework for AI might look like, from its scope (the definition of "AI") to its impact. A key element highlighted is the need for a risk-based approach (as in the GDPR), notably in order not to create a disproportionate burden, especially for SMEs. Such a risk-based approach, however, requires solid criteria to distinguish high-risk AI solutions from others, which might be subject to fewer requirements. According to the European Commission, an AI application should be considered high-risk where it meets the following two cumulative criteria:

Yet the White Paper immediately lists certain exceptions that would, irrespective of the sector, be "high-risk", stating that this would be relevant for certain "exceptional instances".

In the absence of actual legislative proposals, the merit of this principle-exception combination is difficult to judge. However, it would not surprise us to see a broader sector-independent criterion for "high-risk" AI solutions appear: situations that are high-risk irrespective of the sector due to their impact on individuals or organisations.

Those high-risk AI solutions would then likely be subject to specific requirements in relation to the following topics:

In practice, these requirements would cover a range of aspects of the development and deployment cycle of an AI solution, and the requirements are therefore not meant solely for the developer or the person deploying the solution. Instead, according to the European Commission, "each obligation should be addressed to the actor(s) who is (are) best placed to address any potential risk". The question of liability might still be dealt with differently under EU product liability law, under which "liability for defective products is attributed to the producer, without prejudice to national laws which may also allow recovery from other parties".

Because the aim would be to impose such requirements on "high-risk" AI solutions, the European Commission anticipates that a prior conformity assessment will be required, which could include procedures for testing, inspection or certification, and checks of the algorithms and of the data sets used in the development phase. Some requirements (e.g. information to be provided) might not be included in such a prior conformity assessment. Moreover, depending on the nature of the AI solution (e.g. if it evolves and learns from experience), it may be necessary to carry out repeated assessments throughout the lifetime of the AI solution.

The European Commission also wishes to open up the possibility for AI solutions that are not "high-risk" to benefit from voluntary labelling, to show voluntary compliance with (some or all of) those requirements.

The White Paper sets out ambitious objectives, but also gives an idea of the direction in which the legal framework applicable to AI might evolve in the coming years.

We do feel it is important to stress that this future framework should not be viewed as blocking innovation. Too many organisations already have the impression that the GDPR prevents them from processing data, when it is precisely a tool that allows better and more responsible processing of personal data. The framework described by the European Commission in relation to AI appears to be similar in its aim: these rules would help organisations build better AI solutions or use AI solutions more responsibly.

In this context, organisations working today on AI solutions would do well to consider building the recommendations of the White Paper into their solutions already. While there is no legal requirement to do so now, anticipating those requirements might give those organisations a frontrunner status and a competitive edge when the rules materialise.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
