The European Artificial Intelligence Act Is Here: Everything You Need to Know about the First Ever AI Law
- On August 1, 2024, the European Artificial Intelligence Act finally came into force, more than three years after it was first proposed.
- Industry leaders have until September 10 to submit their feedback on the rules.
- However, since it's the summer break in the EU, a group of organizations has penned a joint letter to the Commission asking for an extension of the deadline.
The European Artificial Intelligence Act came into force on August 1, 2024, marking the world's first major milestone in AI regulation.
In this brief article, I'll tell you all you need to know about its history, objectives, classifications, and how it's set to affect the way businesses work.
Background

First, let's talk about the origin of the law. It was first proposed by the EU Commission in April 2021, when concerns surrounding the risks of AI had just started to emerge.
The Commission felt the need to set a global regulatory framework governing how AI is used, so that development could be fostered without putting people or other parties at risk.

After several rounds of negotiations, the Parliament and the Council reached a provisional agreement on the proposal in December 2023. Then began the real work of finalizing the legislation.
Classification and Obligations

Here are the different parties that will be subject to the AI law's rules, followed by the obligations each will have to carry out:
1. Low-Risk AI

Systems like spam filters and video games are considered low-risk, so the rules are not mandatory for them. However, developers can still choose to follow the regulations for the sake of maintaining transparency.
2. Medium-Risk AI

Chatbots and AI-generated content fall into this category. They're required to make it clear to the user that they're interacting with AI, and content like deepfakes needs to be explicitly labeled as artificially made.
Related: UK government criminalizes creating sexually explicit deepfakes
3. High-Risk AI

Applications in this category need to strictly follow all the rules. They must meet strict standards for security, accuracy, and data quality, and there must be continuous human supervision.
Last but not least, they must also maintain records and cooperate with the authorities. Medical and recruitment AI tools come under this category.
4. Banned AI

Some AI applications are outright banned because the risks far outweigh the benefits. This includes AI toys that can encourage harmful behavior in kids, government social scoring, and biometric systems such as those that can recognize an employee's emotions at work.
5. Exceptions

It's worth noting that there are certain AI systems that are exempt from these regulations altogether. This includes systems built for national security, military use, or scientific purposes.
Similarly, personal, non-commercial use of AI is also exempt, as are open-source tools - unless they fall under the high-risk or transparency-required categories.
6. Special Rules for General-Purpose AI Models

General-purpose AI (GPAI) models, such as ChatGPT, will have to follow special rules. They need to provide summaries of the data they use for training, keep technical documentation, and comply with EU copyright law.

Furthermore, high-risk models in this category have additional responsibilities, such as notifying the European Commission, conducting adversarial testing, and ensuring security.
Scope & Applicability of the AI Law

The rules also apply to international organizations. Even if a company is based outside the EU, it will still have to follow the rules if its AI tools are used within the European Union.

It's particularly great that the framework has been designed to cover a wide range of industries and AI activities.
As mentioned above, every category - from low- and high-risk to personal and commercial use - has been given a clear-cut definition. Some systems, such as GPAI models and those used for military and research purposes, also get separate mention.
Implications of the AI Act on Businesses

Running a business in the EU is already complicated, given the region's complex web of regulations, which require strict adherence. Now, with the introduction of the European Artificial Intelligence Act, things are about to get even more complicated.
Meeting all the standards set by this act, such as better security, data integrity, and human monitoring, will come at an additional expense for businesses.
Companies, however, will certainly prefer to take on the additional expense of complying with the rules rather than pay a fine of up to 7% of their global annual turnover - or €35 million, whichever is higher.

For lesser offenses, such as failing to fulfill all the requirements for a high-risk AI system, smaller fines will be imposed. The exact amount will depend on the severity of the offense.
Setting the financial burden aside, the new AI act is expected to boost innovation by increasing healthy competition in the market.
Lastly, let's not forget the ethical side of it. Working with AI is a risky affair and we've already seen how malicious actors can exploit it (deepfakes, for example). However, just because there's risk doesn't mean we should pull the plug on AI development, right?
The European Artificial Intelligence Act will ensure that enhancing AI doesn't come at the cost of the greater good of society.
How Will Europe's AI Act Be Enforced?

The responsibility for putting the AI act into motion lies with the individual national authorities in each EU country. Market surveillance will begin on August 2, 2025, so businesses will have at least a year to adapt to the changes.

The European Commission's AI Office will take a supervisory role and will be supported by three advisory groups:
- The European Artificial Intelligence Board
- A panel of independent scientific experts
- An advisory forum of diverse stakeholders
Additionally, an AI Board consisting of representatives from each member state will be set up to ensure consistent implementation of the act.
How Is the Industry Responding to the European Artificial Intelligence Act?

The European Commission's AI Office initiated a call for feedback on the act on July 30, which is expected to run until September 10. However, the feedback window coincides with the EU's summer recess, limiting businesses' ability to make a significant contribution.
Several prominent tech organizations, including The Software Alliance, DOT Europe, AmCham EU, and the Computer & Communications Industry Association, came together to pen an open letter to the EU asking for an extension of the feedback deadline.

They acknowledged the Commission's tight timeline but argued that quality should prevail over speed. The EU Commission has yet to respond to the request.
Earlier, in July 2023, Europe's then-proposed AI Act was slammed by businesses and tech firms for its severity. Major European companies like Renault, Dassault, Siemens, Heineken, Airbus, and Deutsche Telekom had signed an open letter demanding a "less bureaucratic" approach and fewer regulations.
The post The European Artificial Intelligence Act Is Here: Everything You Need to Know about the First Ever AI Law appeared first on The Tech Report.