What is the AI Act, and why should you care about it?
About: Licenseware specialises in license analysis automation and development to get licensing insights, from any data source, in minutes. Find out more about delivering better services on our website 👈
Last week I was invited, along with several other Romanian startups, to the EU Parliament to hear about the new compliance regulations for high-risk AI, as well as other topics like regulatory sandboxes (more on those later in this post), the Data Act, and other ways in which the EU is addressing the fast-paced technological innovations affecting all our lives.
When people think of AI, they generally fall into two opposite camps: those who imagine a dystopian future where machines have taken over, and those who envision human-like robotic butlers and self-driving hoverboards. And while our phones still struggle to distinguish between a cat and a dog, the world of AI is big enough to encompass both the dangers of the former and the opportunities of the latter.
Why do we need regulations?
As any data scientist will tell you, your model is only as good as your training data. We live in a world full of biases, and it’s hard to explain to a machine that although humanity has historically been horrible at things like respecting people’s rights, we don’t want to propagate that behavior.
Using raw, unbalanced, but “accurate” data to train a model for profiling criminals would be the tech equivalent of saying, “stereotypes are stereotypes for a reason, so I’ll just judge you based on them”.
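To make this concrete, here is a minimal sketch (with entirely hypothetical data and a deliberately naive model) of how a system trained only on skewed historical records ends up replaying the bias baked into them:

```python
from collections import Counter

# Hypothetical "historical" dataset of (group, outcome) pairs.
# The records are "accurate", but group "A" was observed far more
# often with a negative outcome than group "B".
history = (
    [("A", "offence")] * 80 + [("A", "none")] * 20
    + [("B", "offence")] * 10 + [("B", "none")] * 90
)

def naive_profile(group, data):
    """Predict the most frequent historical outcome for a group --
    the 'stereotypes are stereotypes for a reason' approach."""
    outcomes = Counter(out for g, out in data if g == group)
    return outcomes.most_common(1)[0][0]

# The model judges individuals purely by their group's past:
print(naive_profile("A", history))  # -> offence
print(naive_profile("B", history))  # -> none
```

No matter how an individual from group “A” actually behaves, this model has already decided for them; that is exactly the failure mode the compliance rules for high-risk systems aim to catch.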
The world’s data is biased because humanity has a very dark past. Just like in the sci-fi movies where the hero must convince the aliens or the death robots that humans are more than just their history, we also need to train our models to understand that just because something happened in the past, it may not always be the correct answer for the future. If we, as empathetic human beings, sometimes have difficulty identifying when our own judgment is biased, how can we expect an algorithm to perform better when it relies only on historical data?
And that’s why we need regulations: the world is biased, and our models will be too. It’s up to regulating authorities to ensure that no one is judged by the actions of a broader, unbalanced, and unregulated group.
Furthermore, AI applications are woven into our everyday lives, from self-driving cars and financial applications to intelligent refrigerators. Their impact is often felt as an improvement, but it’s already a fact of the present that some applications pose real dangers to both their users and the people around them.
The AI Act
Although it hasn’t yet been released in its final form, the AI Act distinguishes between high- and low-risk applications and plans to introduce compliance rules for those classified as high-risk. Think electric cars, law-enforcement profiling, and medical applications, to name a few.
Once released, companies selling AI-driven products categorized as high-risk will need to comply with EU regulations. The regulations and certification process are not yet published; however, here are some of the objectives already known:
– ensure that AI systems are safe and respect existing laws on fundamental rights and Union values
– ensure that data protection, consumer protection, non-discrimination, and gender equality are respected
– ensure that systems used as safety components of various products are thoroughly tested
– create a level playing field where small companies can innovate and compete with large corporations
– provide regulated but open access to data for all developers
– apply the compliance rules only to applications classified as high-risk
Once the regulations are enacted, existing products will have two years to comply.
A critical aspect of the AI Act is the regulatory sandboxes. These systems, put in place at the national level based on the EU’s specifications, will provide developers with the required datasets and tools for self-certification. It’s not yet clear what the sandboxes will look like. Because each member state will implement them individually, there is a chance we will see a lot of variation in both their requirements and their data, not to mention the speed with which they will be made available. We can also expect to see private companies building and providing sandboxes as a service, which would be beneficial from a speed perspective. However, the cost of accessing such a service may become a barrier for small developers.
In conclusion, the AI Act is a welcome piece of legislation that will bring some order to an environment that has already claimed victims, and that will hopefully not hinder innovation but rather ensure that technology is used to improve people’s lives.
If you find our articles useful, register for our monthly newsletter for regular industry insights 👇