Regulation of Artificial Intelligence in Europe – What’s in the pipeline?

Shortly after her inauguration, the new European Commission’s president, Ursula von der Leyen, expressed the new Commission’s intent to present a Digital Strategy addressing the regulation of Artificial Intelligence within 100 days of the beginning of her presidency. Keeping this promise, the European Commission published a first White Paper on Artificial Intelligence – A European approach to excellence and trust in February 2020. Comments on the White Paper were collected until 31 May 2020. Together with the White Paper, the Commission published its Report on the safety and liability implications of AI, the Internet of Things and Robotics, providing more detail on the gaps the Commission has identified in existing laws.

What are we talking about?

A definition of Artificial Intelligence was provided by the European Commission’s High-Level Expert Group on AI (AI HLEG) on 8 April 2019, when the group published its Ethics Guidelines for Trustworthy AI. The Commission’s papers refer to the AI HLEG, which defines AI as follows:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.”

These defining aspects of AI form the basis of the evaluation of legal gaps and of possible requirements for new regulation.

Where are the legal gaps?

The Commission Report on the safety and liability implications of AI, the Internet of Things and Robotics identified legal gaps mainly in the following respects:

The European Parliament published a further Report on Intellectual property rights for the development of artificial intelligence technologies, evaluating the status quo and identifying various gaps in IP law. The report found gaps, for example, with respect to the questions of whether AI is or can be protected by IP, whether IP-protected content may be used as training input for AI, and who, if anyone, owns the rights to works created by AI.

What kind of regulation do we have to expect?

Following identification of the various gaps, the EU intends to issue a comprehensive legislative package on AI, which will include new regulations for those who build and deploy AI. First hints at what could be part of such a package can be taken from three resolutions the European Parliament adopted on 20 October 2020: the Framework of ethical aspects of artificial intelligence, robotics and related technologies; the Civil liability regime for artificial intelligence; and the Intellectual property rights for the development of artificial intelligence technologies.

Looking at these resolutions, the following topics might be seen as key to building an ecosystem of trust and enhancing the general social acceptance of AI:

When?

A first draft of the new AI legal framework is expected in the first quarter of 2021, although some elements could already be reflected in the Digital Services Act, a first draft of which is expected in December 2020.

Who?

The addressee of such obligations might not always be the software developer; it could instead be the actor who is best placed to address potential risks. Obligations could therefore also be imposed on the deployer or the service provider. In any case, the obligations will have to be complied with by all economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not.

What to do?

Anyone engaged or interested in AI should closely monitor developments over the coming months, as the new laws will almost certainly have an impact on AI systems that are currently being trained or are already on the market.

In practice, contractual frameworks for AI systems should therefore already be reviewed and might include clauses anticipating future developments. Certain upcoming liability risks might also already be taken into account.
