3 Things contracting teams need to know about the European Union’s AI Act
The European Union's AI Act was passed in March 2024. Learn exactly how these new regulations can affect your business, and how to prepare.
After years of discussion dating back to 2021, the European Union (EU) officially passed the Artificial Intelligence Act in March 2024. This marks the first comprehensive legislation on AI, spelling out the responsibilities and regulations associated with developing and using this buzzworthy, emerging technology. It remains to be seen whether the AI Act will deliver a “Brussels Effect” similar to the one past EU regulations have produced, especially after instances like the (US-based) nationwide Tesla recall over Autopilot complications.
So how will this affect you and your business?
Let’s explore the top three takeaways that contracting professionals should keep in mind when it comes to the AI Act, as it stands today.
1. When does it go into effect?
With the legislation officially passed on March 13, 2024, when do these regulations take effect? As with the General Data Protection Regulation (GDPR), originally passed in 2016, the EU is giving companies time to ensure that the technology they’re creating or utilizing complies with the AI Act. While GDPR saw a timeline of roughly two years, pieces of the new AI Act, such as the ban on unacceptable-risk systems, are expected to apply as early as six months in, with most remaining provisions following a similar two-year mark. So the time to start is now. And, as with GDPR, these regulations are not limited to EU-based companies: they apply to any company whose AI products are marketed or used within the EU.
2. What is the risk-level approach?
The obligations set out in the AI Act depend directly on the “risk level” that an organization’s AI use or development falls under. This risk level is most often determined by the sensitivity of the data involved or by the use case unique to the application or its parts. The approach divides AI products and usage into four distinct categories (summarized in the sketch after this list).
These risk levels include:
- Unacceptable risk: Any product or practice that falls into the unacceptable risk level is expressly prohibited within the EU. In the European Commission’s words: “All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.”
- High risk: The list of AI products and practices labeled high-risk is much more expansive, including everything from self-driving vehicles and “robot-arm” surgical devices to CV-screening HR tools and automated exam-scoring education platforms.
- Limited risk: This tier focuses heavily on transparency in AI usage. For example, if you’re interacting with a chatbot or scrolling past an AI-generated image, it must be explicitly labeled as such.
- Minimal risk: This tier covers things like AI-enabled video games and spam filters, though its boundaries have shifted somewhat with the introduction of generative AI. Currently, “The vast majority of AI systems currently used in the EU fall into this category.”
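To make the structure easier to scan, here is a minimal, illustrative sketch of the four tiers as a simple lookup table. The tier names come from the Act itself; the examples and one-line obligation summaries are our own simplified paraphrases, not legal guidance.

```python
# Illustrative only: the AI Act's four risk tiers, with example systems
# and a simplified one-line summary of the headline obligation for each.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright in the EU
    HIGH = "high"                  # strict conformity and oversight duties
    LIMITED = "limited"            # transparency/labeling duties
    MINIMAL = "minimal"            # no new obligations

EXAMPLES = {
    RiskTier.UNACCEPTABLE: ["government social scoring", "manipulative voice-assisted toys"],
    RiskTier.HIGH: ["self-driving vehicles", "CV-screening HR tools", "exam-scoring platforms"],
    RiskTier.LIMITED: ["chatbots", "AI-generated images"],
    RiskTier.MINIMAL: ["spam filters", "AI-enabled video games"],
}

def headline_obligation(tier: RiskTier) -> str:
    """Return a simplified one-line summary of what the tier demands."""
    return {
        RiskTier.UNACCEPTABLE: "Banned: may not be placed on the EU market.",
        RiskTier.HIGH: "Allowed with conformity assessment, documentation, and human oversight.",
        RiskTier.LIMITED: "Allowed with explicit disclosure that AI is involved.",
        RiskTier.MINIMAL: "Allowed with no additional AI Act obligations.",
    }[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {headline_obligation(tier)}")
```

The point of the sketch is simply that compliance work starts with knowing which tier each AI system in your stack falls into.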
3. What is my or my organization’s compliance responsibility?
If you work at a company that creates or utilizes high-risk AI tools, it’s important to ensure that your product operates in full compliance with the AI Act, as those who fail to comply are subject to a tiered approach of penalties, and those penalties aren’t cheap. The most serious are reserved for those breaking the “unacceptable uses” ban: at the top end, fines of up to 35 million euros or 7% of the company’s global annual turnover, whichever is higher. The European AI Office was established not only to enforce these new regulations, but also to assist with assessments and preparation to help your organization remain compliant.
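As a back-of-the-envelope illustration of that “whichever is higher” cap (a sketch only, not legal advice; the function name is ours):

```python
# Illustrative only: the top-tier penalty cap for breaching the
# "unacceptable uses" ban is the higher of a flat amount and a
# percentage of global annual turnover.
def max_fine_eur(global_turnover_eur: float) -> float:
    """Return the upper bound of the fine in euros."""
    FLAT_CAP_EUR = 35_000_000   # flat cap for the most serious breaches
    TURNOVER_PCT = 0.07         # 7% of global annual turnover
    return max(FLAT_CAP_EUR, TURNOVER_PCT * global_turnover_eur)

# A company with EUR 1 billion in turnover: 7% (EUR 70M) exceeds the flat cap.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```

In other words, for any company with global turnover above 500 million euros, the percentage term, not the flat cap, sets the ceiling.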
One of the key takeaways is that “Meaningful human oversight is [a] cornerstone of the EU AI Act.” It’s important to maintain a close connection with how AI is being utilized throughout your organization, even if you’re simply leveraging a higher-risk product to fuel an otherwise minimal- or limited-risk operation.
What does this mean for the future of contracting?
While most workflow automation tools, like a Contract Lifecycle Management (CLM) solution, might fall into the minimal- or limited-risk tiers of AI practice, it’s important to be aware of how you’re using them throughout the organization.
For example, if you’re using a generative AI tool to create or amend a contract, your responsibility covers both the input and the output side. What question or prompt was used to generate this clause or document? Was the directive posed in a way that could produce a non-compliant outcome? Is the output received from the AI model acceptable as is, or does it require editing on the user’s part to ensure that, when this contract is sent and signed, it follows all AI Act guidelines?
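One lightweight way to keep that input-and-output review auditable is to record each generative step alongside its human sign-off. The sketch below is purely illustrative: the record fields and the approve step are our own invention, not anything the AI Act prescribes.

```python
# Illustrative sketch of an audit record for generative-AI contract edits.
# Field names and the review step are hypothetical, not prescribed by the Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    prompt: str                      # the input side: what was asked
    model_output: str                # the output side: what came back
    reviewed_by: str | None = None   # human reviewer, once sign-off happens
    edits_made: bool = False         # whether the output was changed before use
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def approve(self, reviewer: str, edits_made: bool) -> None:
        """Mark the record as human-reviewed before the contract is sent."""
        self.reviewed_by = reviewer
        self.edits_made = edits_made

record = AIUsageRecord(
    prompt="Draft a limitation-of-liability clause capped at 12 months of fees.",
    model_output="...generated clause text...",
)
record.approve(reviewer="legal-ops@example.com", edits_made=True)
```

A trail like this gives you a ready answer to both questions above: what was asked of the model, and who verified what came back.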
It’s possible that the next “it” job on the market will be directly AI-related, with the EU AI Office already suggesting the creation of Chief AI Officers and compliance specialists to ensure this fast-growing technology is used the right way.
The clock is officially ticking. Are you making the most of AI tools across your operation? It’s time to evaluate your processes and ensure not only that you’re compliant with the AI Act, but that you keep an eye on this rapidly evolving technology space.