Uncovering the EU AI Act

by Stephanie Kirmer | Towards Data Science | March 2024

The EU has moved to regulate machine learning. What does this new law mean for data scientists?

Photo by Hansjörg Keller on Unsplash

The EU AI Act just passed the European Parliament. You might think, "I'm not in the EU, whatever," but trust me, this is actually more important to data scientists and individuals around the world than you might think. The EU AI Act is a major move to regulate and manage the use of certain machine learning models in the EU, or that affect EU citizens, and it contains some strict rules and serious penalties for violation.

This law has a lot of discussion about risk, and by risk it means risk to the health, safety, and fundamental rights of EU citizens. It's not just the risk of some kind of theoretical AI apocalypse; it's about the day-to-day risk that real people's lives are made worse in some way by the model you're building or the product you're selling. If you're familiar with the debates about AI ethics today, this should sound familiar. Embedded discrimination and violation of people's rights, as well as harm to people's health and safety, are serious issues facing the current crop of AI products and companies, and this law is the EU's first effort to protect people.

Regular readers know that I always want "AI" to be well defined, and am annoyed when it's too vague. In this case, the Act defines "AI" as follows:

"A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments."

So, what does this really mean? My interpretation is that machine learning models that produce outputs that are used to influence the world (especially people's physical or digital conditions) fall under this definition. It doesn't have to adapt live or retrain automatically, although if it does, that's covered too.
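
To make that concrete, here is a minimal, hypothetical sketch of the kind of system the definition covers. The model, the features, and the data are all invented for illustration; the point is that a plain, static classifier still qualifies once its outputs drive decisions about people.

```python
# Hypothetical example: a static scikit-learn model used to screen
# loan applications. It never retrains or "adapts" after deployment,
# but it infers, from its input, outputs (decisions) that influence
# people's real conditions, which is what the definition describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [income in thousands EUR, debt ratio] -> approved?
X = np.array([[55, 0.2], [23, 0.7], [80, 0.1], [30, 0.6]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

applicant = np.array([[40, 0.4]])
print("approve loan:", bool(model.predict(applicant)[0]))
```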

But if you're building ML models that are used to do things like screen job applicants, recognize faces, predict crime, or run medical devices, these will all be covered by this law if your model affects anyone who is a citizen of the EU, and that's just to name a few examples.

Not all AI is the same, however, and the law acknowledges that. Certain applications of AI are going to be banned entirely, and others subjected to much higher scrutiny and transparency requirements.

The banned applications are called "Unacceptable Risk AI Systems" and are simply not allowed. This part of the law is going into effect first, six months from now.

This means, for example, you can't build (or be forced to submit to) a screening that is meant to determine whether you're happy enough to get a retail job. Facial recognition is being restricted to only select, targeted, specific situations. (Clearview AI is definitely an example of that.) Predictive policing, something I worked on in academia early in my career and now very much regret, is out.

The biometric categorization point refers to models that group people using risky or sensitive traits such as political, religious, or philosophical beliefs, sexual orientation, race, and so on. Using AI to try to label people according to these categories is understandably banned under the law.

The next category, on the other hand, covers systems that are not banned, but highly scrutinized: the "High Risk AI Systems." These include things like biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and the administration of justice. There are specific rules and regulations that will cover all of these systems, described below.

This excludes the specific banned use cases described above. So, emotion-recognition systems might be allowed, but not in the workplace or in education. AI in medical devices and in vehicles is called out as having serious risks or potential risks for health and safety, rightly so, and it needs to be pursued only with great care.

The other two categories that remain are Low Risk AI Systems and General Purpose AI Models. General purpose models are things like GPT-4, Claude, or Gemini: systems that have very broad use cases and are usually employed within other downstream products. So, GPT-4 by itself isn't in a high risk or banned category, but the ways you can embed it for use are limited by the other rules described here. You can't use GPT-4 for predictive policing, but GPT-4 can be used for low risk cases.

So, let's say you're working on a high risk AI application, and you want to follow all the rules and get approval to do it. How to begin?

For High Risk AI Systems, you're going to be responsible for the following (in brief):

- Maintaining a risk management system for the entire lifecycle of the system
- Data governance, ensuring your training, validation, and testing data meet quality standards
- Technical documentation and record keeping, including automatic logging of the system's activity
- Transparency, providing deployers the information they need to use the system correctly
- Human oversight, so people can intervene in or override the system's decisions
- Appropriate levels of accuracy, robustness, and cybersecurity
- Registering the system in the EU database of high risk AI systems

Another thing the law makes note of is that if you're working on building a high risk AI solution, you need a way to test it to ensure you're following the guidelines, so there are allowances for testing on regular people once you get informed consent. Those of us from the social sciences will find this pretty familiar: it's a lot like getting institutional review board approval to run a study.

The law has a staggered implementation:

- 6 months from now: the bans on Unacceptable Risk AI Systems take effect
- 12 months: the rules for General Purpose AI Models, along with the penalty provisions, apply
- 24 months: the bulk of the law, including the High Risk AI System requirements, is in force
- 36 months: high risk systems embedded in already-regulated products, like medical devices, must comply

Note: The law does not cover purely personal, non-professional activities, unless they fall into the prohibited types listed earlier, so your tiny open source side project isn't likely to be a risk.

So, what happens if your company fails to follow the law, and an EU citizen is affected? There are explicit penalties in the law.

If you deploy one of the prohibited forms of AI described above: fines of up to 35 million euros, or 7% of your worldwide annual revenue, whichever is higher.

For other violations not included in the prohibited set: fines of up to 15 million euros, or 3% of your worldwide annual revenue, whichever is higher.

For lying to authorities about any of these things: fines of up to 7.5 million euros, or 1% of your worldwide annual revenue, whichever is higher.

Note: For small and medium-sized businesses, including startups, the fine is whichever of the two numbers is lower, not higher.
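
To make the arithmetic concrete, here is a small, hypothetical sketch. The function and the turnover figures are invented; the 35 million euro cap and 7% rate are the prohibited-AI numbers above.

```python
# Hypothetical fine ceiling for a prohibited-AI violation under the Act:
# up to EUR 35M or 7% of worldwide annual turnover.
def max_fine(turnover_eur: float, is_sme: bool,
             fixed_cap: float = 35_000_000, pct: float = 0.07) -> float:
    """Return the applicable maximum fine for this violation tier."""
    pct_amount = pct * turnover_eur
    # Large companies: whichever is HIGHER. SMEs and startups: whichever is LOWER.
    return min(fixed_cap, pct_amount) if is_sme else max(fixed_cap, pct_amount)

# A large firm with EUR 1B turnover: 7% is EUR 70M, above the EUR 35M cap.
print(max_fine(1_000_000_000, is_sme=False))  # 70000000.0
# A startup with EUR 2M turnover: 7% is EUR 140k, well below EUR 35M.
print(max_fine(2_000_000, is_sme=True))       # 140000.0
```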

If you're building models and products using AI under the definition in the Act, you should first and foremost familiarize yourself with the law and what it's requiring. Even if you aren't affecting EU citizens today, this is likely to have a major impact on the field and you should be aware of it.

Then, watch out for potential violations in your own business or organization. You have some time to find and remedy issues, but the banned forms of AI take effect first. In large businesses, you're likely going to have a legal team, but don't assume they are going to take care of all this for you. You are the expert on machine learning, and so you're a very important part of how the business can detect and avoid violations. You can use the Compliance Checker tool on the EU AI Act website to help you.
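
If you want a starting point for that detection work, here is a rough, hypothetical sketch of a model-inventory audit. The keyword lists are loose paraphrases of the categories discussed in this article, not legal definitions, and this is no substitute for the Compliance Checker or your legal team.

```python
# Hypothetical audit: flag internal models whose declared use case
# resembles one of the banned or high risk categories.
BANNED_HINTS = ["predictive policing", "emotion recognition at work",
                "biometric categorization", "untargeted facial recognition"]
HIGH_RISK_HINTS = ["hiring", "education", "medical device", "vehicle",
                   "credit", "law enforcement"]

def classify_use_case(description: str) -> str:
    desc = description.lower()
    if any(hint in desc for hint in BANNED_HINTS):
        return "UNACCEPTABLE RISK: do not deploy"
    if any(hint in desc for hint in HIGH_RISK_HINTS):
        return "HIGH RISK: full compliance obligations apply"
    return "review further (possibly low risk)"

# Invented inventory entries for illustration.
inventory = {
    "resume-ranker-v2": "scores candidates for hiring decisions",
    "churn-model": "predicts subscriber churn for marketing emails",
}
for name, use in inventory.items():
    print(f"{name}: {classify_use_case(use)}")
```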

There are many forms of AI in use today at businesses and organizations that are not allowed under this new law. I mentioned Clearview AI above, as well as predictive policing. Emotional testing is also a very real thing that people are subjected to during job interview processes (I invite you to google "emotional testing for jobs" and see the onslaught of companies offering to sell this service), as well as high volume facial or other biometric collection. It's going to be extremely interesting and important for all of us to follow this and see how enforcement goes once the law takes full effect.


