AI in Europe: An Evolving Landscape
What is "AI," how is it being regulated in Europe, and what does the legal future hold for AI-powered software like CLM?
Whether it’s detecting a visitor at your door with your Ring doorbell, identifying a song on the radio with your Shazam app, or unlocking your phone with facial recognition software – artificial intelligence (AI) is everywhere in our lives. What was once touted as the future is now the reality of today.
As with most major technological advancements, the explosion of AI in our daily lives brings a host of important questions and possibilities, not just for individuals, but also for businesses and organizations. Although there is much to celebrate about the emergence of AI, it also has the potential to pose substantial risks, like magnifying inherent bias, fetching inaccurate data, or, worse, making fatal safety errors. This has led to an emerging regulatory landscape for AI, particularly within the European Union. With it come the heightened cost and risk of complying with AI regulations, and the threat of missed obligations for organizations of every shape and size.
Let’s start with the basics: what “AI” actually is, how it is being regulated in Europe, and what the legal future holds for AI-powered software like contract lifecycle management (CLM).
Types of AI
“Artificial intelligence,” also known simply as AI, is an umbrella term for a variety of different technologies, including machine learning; natural language processing; review, analysis and extraction; and creative generation.
The four main types of AI are:
- Machine learning (ML): The creation of algorithms through user-led training (see the brief sketch after this list).
- Natural language processing (NLP): Extracting intent from unstructured user requests.
- Review, analysis & extraction: Using ML algorithms to find, review, and highlight meaningful data within a given data set.
- Generative AI: Using ML algorithms and deep processing to create uniquely new content. A popular example of this is ChatGPT.
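To make the machine-learning entry above concrete, here is a minimal, hypothetical sketch in Python: a person labels a handful of example sentences, a simple model is trained on those labels, and the model then predicts a label for a sentence it has never seen. The training snippets, labels, and choice of a scikit-learn pipeline are assumptions made purely for illustration.

```python
# Toy illustration of machine learning: the model "learns" from a few hand-labeled
# examples and then predicts labels for new, unseen text.
# All snippets and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# User-led training: a person labels each example sentence.
training_texts = [
    "The receiving party shall keep all disclosed information strictly confidential.",
    "Neither party may disclose the terms of this agreement to any third party.",
    "Invoices are due within thirty (30) days of receipt.",
    "The customer shall pay a monthly subscription fee of EUR 500.",
]
training_labels = ["confidentiality", "confidentiality", "payment", "payment"]

# The pipeline turns text into numeric features and fits a simple classifier to the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, training_labels)

# The trained model generalizes to a sentence it has never seen before.
print(model.predict(["All fees are due within 14 days of the invoice date."]))
# likely output: ['payment']
```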
How AI Impacts Legal Agreements
Manual contracting and review are time-consuming processes that are prone to human error. CLM systems powered by AI, however, are eliminating manual tasks and changing the way contracting professionals handle contract creation and negotiation.
The four main ways AI will affect legal agreements are:
- Machine learning (ML): Currently, the most common offerings on the market are generic algorithms trained on publicly available files, relying on the judgment of whoever labeled each instance of a given data point. This produces broadly accurate algorithms; machine learning experts estimate a good-to-excellent accuracy rate of 70%-90%.
- Natural language processing (NLP): Just as Bing and Google are delivering the ability to search the Internet more easily, CLM vendors are improving access to contract data. Thanks to NLP, authorized non-legal individuals can find contracts – and extract valuable information from within them – without having to learn how to construct jargon-heavy searches.
- Review, analysis & extraction: Initial reviews use ML-generated algorithms to capture key terms and clauses, highlight them for further analysis, and extract them to be used as easily reportable metadata. This reduces the time to “onboard” a contract at the front end, thereby freeing up contracting professionals to focus on strategy and negotiations, and makes the vital information contained in stored contracts easy to analyze, report on, and use for strategic purposes (a simplified sketch of this extraction step follows this list).
- Generative AI: Although creative generation remains a challenge for AI to solve effectively, there are libraries of clauses that can be connected to one another to create a complete contract. Future iterations of AI may be able to intuit from a natural-language request which clauses need to be combined to make a proper contract.
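To illustrate the review, analysis & extraction step in a deliberately simplified form, the sketch below pulls a few key terms out of a short piece of invented contract text and stores them as reportable metadata. Hand-written regular expressions stand in here for the trained ML models a real CLM platform would use; the contract text, field names, and patterns are all hypothetical.

```python
# Simplified stand-in for ML-driven extraction: pull a few key terms out of contract
# text and store them as reportable metadata. Text and field names are invented.
import re

contract_text = """
This Agreement is effective as of 1 March 2025 and shall remain in force for a
period of 24 months. This Agreement is governed by the laws of Ireland.
"""

# One pattern per metadata field; a production system would use trained models
# rather than hand-written regular expressions.
patterns = {
    "effective_date": r"effective as of (\d{1,2} \w+ \d{4})",
    "term_months": r"period of (\d+) months",
    "governing_law": r"governed by the laws of ([A-Z][\w ]+?)\.",
}

# Extract each field, or record None when the pattern finds no match.
metadata = {}
for field, pattern in patterns.items():
    match = re.search(pattern, contract_text)
    metadata[field] = match.group(1) if match else None

print(metadata)
# e.g. {'effective_date': '1 March 2025', 'term_months': '24', 'governing_law': 'Ireland'}
```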
Evolving Regulations in Europe
The EU AI Act officially entered into force on August 1, 2024, but most of its requirements won’t apply until August 2, 2026. This gives businesses and developers some time to get up to speed with the new rules and make sure they comply with requirements for things like risk assessments and transparency, especially for higher-risk AI systems. While the full set of regulations rolls out in 2026, under Article 113 certain provisions apply earlier, addressing the most urgent aspects of AI governance, such as the bans on prohibited AI practices.
The AI Act categorizes AI systems based on their risk level. Systems posing unacceptable risks are outright banned, such as social scoring systems and AI designed to manipulate users. Many of the regulations focus on high-risk AI systems – such as biometric identification systems, AI used in critical infrastructure, and AI in recruitment – which are strictly governed. A smaller portion of the Act addresses limited-risk AI systems, like chatbots or deepfakes, which require transparency: developers and deployers need to make sure users know they’re interacting with AI. Lastly, minimal-risk AI systems, which include things like video games or spam filters, are largely unregulated, though this is starting to shift with the rise of generative AI. Most of the regulations apply to the providers (developers) of high-risk AI systems.
Those looking to deploy high-risk AI systems in the EU, whether they’re based in the EU or outside of it, must comply with the Act. Providers from third countries (countries not part of the EU) are also affected if their AI system’s output is used in the EU. While users (deployers) of high-risk AI systems also have some obligations, their responsibilities are lighter compared to the providers. Providers of General Purpose AI (GPAI) models must follow certain rules, including offering technical documentation, respecting copyright, and summarizing the content used to train the models.
However, providers of free and open-license GPAI models only need to adhere to copyright rules and provide training data summaries, unless their models present a systemic risk. If a GPAI model does present such a risk, providers must also conduct thorough evaluations, perform adversarial testing, report serious incidents, and ensure cybersecurity measures are in place.
In addition to the EU AI Act, several other laws and regulations play a significant role in shaping AI development and usage in the EU. For instance, the General Data Protection Regulation (GDPR), currently in effect, governs the handling of personal data in AI systems, ensuring privacy and data protection. The Product Liability Directive, which recently went into effect on November 18, 2024, will hold manufacturers accountable for harm caused by AI software, providing a pathway for victims to seek compensation; EU member states will have two years to implement the directive into national law. Companies that are currently developing products or software should consider this Directive before introducing them to the market. The General Product Safety Regulation enhances safety standards for products, including AI-powered ones, while national intellectual property laws across EU member states provide protections for AI-related innovations. Together, these laws help create a comprehensive legal framework for the responsible development and deployment of AI in the EU.
The EU has also launched an online platform designed to help businesses and organizations check their compliance with the EU AI Act regulations. Use the official EU AI Act Compliance Checker to see which parts of the Act apply to you.
The Biggest Risks of AI
As with any emerging technology, there are risks and limitations associated with using AI.
For example, generative AI projects still suffer from problems like catastrophic forgetting, where newly learned information overwrites what the model previously knew, and a lack of transparency, because the AI can’t explain why it made a particular decision.
Another risk, particularly for European users, relates to language. An AI model’s accuracy depends on the natural language of the content being reviewed, which means that sentence structure, grammar, spelling, and the characters used (e.g. é, ß, ø, å) can all affect the results. Most AI models for CLM are trained on American English and U.S. law documents, so they may perform extremely poorly when applied to other languages or dialects.
Some CLM vendors offer to translate documents into English before applying their AI capabilities, which creates the risk of errors being introduced in translation that can then corrupt the AI’s output. For example, the term “bug,” when used in a software contract, could be translated as “insect,” leading to all manner of potential challenges. Before implementing any AI solution, organizations should fully research which languages the models are trained on, and whether they will be able to train the models on their own contract data to improve accuracy.
Lastly, AI relies on content created by humans, and that content inevitably carries human biases. This was readily apparent in the resume-sorting algorithm Amazon used for a short time: a tool meant to cut down on manual processing of resumes ended up producing starkly discriminatory outcomes against female applicants.
Opportunities for AI in Legal Environments
To truly harness the power of AI while mitigating risk, it’s critical that legal departments feel empowered to take control of their AI environments, finding an acceptable balance between risk and reward.
Self-trained AI models bring controls “inside the tent” of Legal departments. Legal professionals “train” the AI models on their own internal documents and resources, identifying key terms and clauses that are uniquely valuable to their organization and industry. For example, biotech organizations might not just need to know if a contract has an indemnity clause, but whether it has a specifically formulated pharma R&D indemnity clause.
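As a rough illustration of what such self-training can look like under the hood, the sketch below compares an incoming clause against a small, hypothetical library of a legal team’s own exemplar clauses and scores how closely it resembles each one. The clause wording, labels, and the use of TF-IDF similarity are assumptions made for illustration, not a description of any particular vendor’s implementation.

```python
# Minimal sketch of clause matching against an organization's own exemplar clauses.
# All clause text and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Exemplars drawn (hypothetically) from the legal team's own playbook.
exemplar_clauses = {
    "pharma_rd_indemnity": "The Sponsor shall indemnify the Institution against claims "
                           "arising from the research and development of the Compound.",
    "general_indemnity": "Each party shall indemnify the other against losses caused by "
                         "its negligence or wilful misconduct.",
}

incoming_clause = ("The Sponsor agrees to indemnify the Institution for any claim "
                   "arising out of development of the investigational Compound.")

# Turn the clause library and the incoming clause into TF-IDF vectors.
vectorizer = TfidfVectorizer().fit(list(exemplar_clauses.values()) + [incoming_clause])
exemplar_vectors = vectorizer.transform(list(exemplar_clauses.values()))
incoming_vector = vectorizer.transform([incoming_clause])

# Score the incoming clause against each exemplar; a higher score means a closer match.
scores = cosine_similarity(incoming_vector, exemplar_vectors)[0]
for label, score in zip(exemplar_clauses.keys(), scores):
    print(f"{label}: {score:.2f}")
# Expect the pharma R&D indemnity exemplar to score highest for this clause.
```

In practice the exemplar library would be far larger and the matching handled by trained models rather than raw similarity scores, but the principle is the same: the organization’s own documents define what to look for.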
When implemented in a targeted, context-specific way, AI can virtually clone the best of the group’s resources, becoming a force multiplier that allows an organization’s staff to get most repetitive tasks done quickly and efficiently, freeing them to focus on negotiating the best deals at the lowest possible levels of risk.
Conclusion
The era of artificial intelligence is here in Europe, bringing with it exciting opportunities and challenging risks. AI is poised to transform contract management, especially when embedded in a sophisticated CLM system. As with any emerging technology, the regulatory considerations are still in flux, bringing a range of challenges that will settle only with time, especially in the EU. Yet, for all those challenges, the opportunity to do more with less, using bespoke algorithms built by the very best practitioners, will surely transform the contracting process.