EU AI Act - Implications for Conversational AI

In February 2024, the EU’s Artificial Intelligence (AI) Act was approved by the EU member states. This post sheds light on the obligations that providers and deployers of conversational AI systems, such as chatbots, have under the Act, and the extent to which these obligations remain unsettled.
Amelie Berz
Mar 3, 2024

The EU’s AI Act aims to increase trust in AI and to ensure that the technology is used in a way that respects the fundamental rights and safety of EU citizens. While its initial objective was to provide AI providers and deployers with legal certainty on how to engage with AI, the final draft has been criticized for creating significant uncertainty.

Scope and Application

The EU AI Act applies to a wide array of AI systems, particularly those processing data and making decisions. The Act is designed to regulate AI systems’ development, distribution, and usage across both public and private entities, and applies across sectors of the economy, including finance, retail, and legal services.

Under Article 2(1), the Act applies to providers who place AI systems on the Union market or put them into service, independent of where the providers are established; to deployers of AI systems located within the Union; and to providers and deployers of AI systems that are located in a third country, where the system’s output is used in the Union. Conversational AI systems like chatbots may therefore fall within the scope of the Act even if they are neither placed on the Union market nor put into service in the Union.

Usual Case for Conversational AI: ‘Limited Risk’ Systems

The Act categorizes AI systems based on their potential risk to health, safety, and fundamental rights, ranging from minimal risk to unacceptable risk, with specific regulations applied accordingly. The classification of a specific AI system under the EU AI Act depends on its intended use, the severity of the risk it might pose, and the context in which it is deployed. ‘High-risk’ systems include, but are not limited to, systems for decision support in the healthcare sector and systems that determine access to educational institutions, professions, or loans. Conversational AI systems could be classified as either ‘high-risk’ or ‘limited-risk’, depending on their use case.

Most conversational AI would likely fall under the ‘limited risk’ category, where the main requirement is transparency. Under Article 52(1) of the AI Act, providers shall ensure that AI systems intended to interact with natural persons inform users that they are engaging with an AI system, unless this fact is already apparent from the circumstances and the context of use. In the case of AI chatbots, users must be aware that they are not conversing with a human. This enables them to understand the nature of the engagement and respond appropriately: for example, by adapting their communication to the technology’s limitations and capabilities. In particular, realistic images, sounds, or videos that have been generated or manipulated (‘deepfakes’) must be clearly marked as artificially created content. Beyond that, the EU’s wider legal framework (e.g. the General Data Protection Regulation and the Charter of Fundamental Rights) requires that personal data be processed securely and that the system operate in a non-discriminatory manner. Maintaining records of system operations and decisions is advisable for accountability.
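
As an illustration of the Article 52(1) transparency duty, the following Python sketch shows one way a chatbot backend could disclose its artificial nature at the start of a session. The `generate_reply` function, the disclosure wording, and the session structure are hypothetical placeholders, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field

AI_DISCLOSURE = "Please note: you are chatting with an AI assistant, not a human."


def generate_reply(user_message: str, history: list[str]) -> str:
    # Placeholder for an actual LLM or dialogue-engine call.
    return "Thanks for your message! How can I help you further?"


@dataclass
class ChatSession:
    """Minimal chat session that discloses the artificial nature of the system once."""
    history: list[str] = field(default_factory=list)
    disclosed: bool = False

    def respond(self, user_message: str) -> str:
        reply = generate_reply(user_message, self.history)
        # Article 52(1): inform users that they are interacting with an AI system,
        # unless this is already obvious from the circumstances and context of use.
        if not self.disclosed:
            reply = f"{AI_DISCLOSURE}\n\n{reply}"
            self.disclosed = True
        self.history.extend([user_message, reply])
        return reply


session = ChatSession()
print(session.respond("Hi, can you help me with my order?"))
```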

Exceptional Case for Conversational AI: ‘High Risk’ Systems

If a conversational AI system is deployed in a sensitive sector (e.g., healthcare diagnostics, legal advice, or critical infrastructure), it may fall into the high-risk category. This entails more stringent regulatory requirements: Article 9(1) stipulates that a risk management system must be set up for such systems. Known and foreseeable risks must be identified, analyzed, and mitigated as far as possible. Para. 4 provides for an obligation to inform users of ‘residual risks’. The range of possible risks associated with conversational AI, as with most advanced AI systems, is broad: data privacy issues, security vulnerabilities, inaccurate or biased responses, misuse for harmful purposes, and failures of legal compliance.
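
To make the risk-management obligation of Article 9 more concrete, a provider might maintain a simple risk register that records identified risks, their assessment, the chosen mitigations, and any residual risk to be communicated to users under para. 4. The sketch below is an assumed minimal structure; the fields, severity scale, and example entries are illustrative, not requirements of the Act.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One identified risk, its analysis, and how it is handled."""
    name: str            # e.g. "Hallucinated medical advice"
    severity: Severity   # assessed impact before mitigation
    likelihood: Severity # assessed probability before mitigation
    mitigation: str      # measure taken to reduce the risk
    residual_risk: str   # what remains after mitigation (Art. 9(4): disclose to users)


risk_register = [
    RiskEntry(
        name="Inaccurate or misleading responses",
        severity=Severity.HIGH,
        likelihood=Severity.MEDIUM,
        mitigation="Automated factuality testing before each release",
        residual_risk="Occasional inaccuracies remain; users advised to verify critical information",
    ),
    RiskEntry(
        name="Leakage of personal data in responses",
        severity=Severity.HIGH,
        likelihood=Severity.LOW,
        mitigation="PII filtering on inputs and outputs",
        residual_risk="Rare leakage of uncommon identifiers",
    ),
]

# Residual risks that must be communicated to users (Article 9(4)).
for entry in risk_register:
    print(f"{entry.name}: {entry.residual_risk}")
```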

The quality of the training, validation, and test data sets of high-risk AI systems must ensure that the system functions as intended and without discrimination. According to Article 10(3), the data must be representative and complete with regard to the intended purpose of the system. Para. 2 requires appropriate data collection, data processing, and analysis with regard to possible biases and gaps. To counteract the opacity of complex AI systems, Article 13 prescribes transparent operation of the systems: they must be accompanied by clear instructions for use, which include the capabilities and performance limits of the high-risk AI system and, where applicable, information on the data sets used.
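
As a rough illustration of the Article 10 data-governance duties, the sketch below checks whether a labeled training set is reasonably balanced across one attribute and flags strongly under-represented groups. The column names, the threshold, and the use of pandas are assumptions made for this example; a real representativeness audit would be considerably more involved.

```python
import pandas as pd


def audit_group_balance(df: pd.DataFrame, group_col: str, min_share: float = 0.05) -> list[str]:
    """Flag groups whose share of the training data falls below min_share.

    This is only a crude proxy for the 'representative and complete' data
    requirement of Article 10(3); domain-specific audits are still needed.
    """
    shares = df[group_col].value_counts(normalize=True)
    return [f"Group '{g}' covers only {s:.1%} of the data" for g, s in shares.items() if s < min_share]


# Hypothetical training data for an intent-classification chatbot.
train = pd.DataFrame({
    "utterance": ["reset my password", "open an account", "close my account", "loan status"],
    "intent": ["support", "sales", "support", "loans"],
    "language": ["en", "en", "en", "de"],
})

for warning in audit_group_balance(train, group_col="language", min_share=0.3):
    print(warning)
```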

Articles 19 and 43 require providers to undergo a conformity assessment. Article 14 provides for human oversight measures, determined by the provider, to ensure that the AI system does not act independently of human intervention or supervision. Furthermore, high-risk systems must be technically robust, i.e. resistant to risks arising from system limitations (e.g. errors) and from malicious interference. Processes and events during the operation of a high-risk AI system must be automatically recorded, and a quality management system must ensure compliance with the Regulation.
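
The record-keeping requirement can be illustrated with a simple structured event log. The sketch below writes one JSON line per chatbot interaction; the exact fields, the file name, and the logging setup are assumptions, since the Act specifies what must be traceable rather than a concrete format.

```python
import json
import logging
import time
import uuid

# Append-only JSON-lines log of chatbot interactions (illustrative format).
logging.basicConfig(filename="chatbot_events.jsonl", level=logging.INFO, format="%(message)s")


def log_interaction(session_id: str, user_message: str, reply: str, model_version: str) -> None:
    """Record one interaction so that operation of the system can be reconstructed later."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "session_id": session_id,
        # In practice, personal data in logs must be minimized or pseudonymized (GDPR).
        "user_message": user_message,
        "reply": reply,
        "model_version": model_version,
    }
    logging.info(json.dumps(event))


log_interaction(
    session_id="demo-session",
    user_message="Can I increase my credit limit?",
    reply="I can forward your request to a human advisor.",
    model_version="assistant-v1.2",
)
```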

Uncertainties

The AI Act abstractly requires risk assessments and documentation to ensure accountability and compliance, but it does not specify, through technical norms and standards, which risks are considered ‘known and foreseeable’, nor does it offer quantitative guidance on when risks are deemed to be ‘managed’. Such standards should be application-specific and offer concrete thresholds (e.g. ‘the F1 score for use case X must be at least Y’), serving as a transmission belt between abstract requirements and practical implementation. Corresponding ISO and CEN-CENELEC standards are in progress.
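
The kind of quantitative guideline mentioned above can be operationalized as a release gate in an evaluation pipeline. The following sketch computes the macro F1 score of a hypothetical intent classifier on a held-out, labeled evaluation set and blocks the release if it falls below an assumed use-case-specific threshold; the threshold value, the labels, and the use of scikit-learn are all assumptions made for illustration.

```python
from sklearn.metrics import f1_score

# Use-case-specific quality threshold (assumed value for illustration only).
F1_THRESHOLD = 0.80


def release_gate(y_true: list[str], y_pred: list[str]) -> None:
    """Fail the release if the macro F1 score on the evaluation set is below the threshold."""
    score = f1_score(y_true, y_pred, average="macro")
    print(f"Macro F1 on evaluation set: {score:.3f}")
    if score < F1_THRESHOLD:
        raise RuntimeError(f"F1 score {score:.3f} below required threshold {F1_THRESHOLD}")


# Hypothetical intent labels from a held-out evaluation set.
y_true = ["refund", "refund", "complaint", "info", "complaint", "info"]
y_pred = ["refund", "info", "complaint", "info", "complaint", "info"]

release_gate(y_true, y_pred)
```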

The AI Act also stipulates that public authorities should protect IP rights and trade secrets when enforcing transparency requirements, but, again, it does not specify how this should be done. A similar problem arises within the AI value chain spanning providers, deployers, end-users, and possibly intermediaries that ‘fine-tune’ a model. Legal mechanisms to protect trade secrets include confidentiality agreements and non-competition clauses, clauses prohibiting reverse engineering in licensing agreements, limits on the number of possible licenses, protective orders from courts, and independent experts. Technical solutions include data and model obfuscation, federated learning, secure multi-party computation, and model watermarking.

Conclusion

Looking forward, the AI Act reflects a shift in the approach to AI regulation, from focusing primarily on mitigating risks and addressing potential negative impacts, to emphasizing and leveraging the positive opportunities and benefits that AI technologies can bring to society. Yet deployers of AI systems in high-risk use cases will face significant compliance costs; SMEs in particular will have to balance these costs against the expected financial benefits and strategic advantages stemming from operational efficiency, scalability, and reduced human error. Most importantly, deployers of both high-risk and limited-risk systems will owe their clients accurate services, not least to avoid contractual and tortious liability. This creates a need for systematic and continuous quality testing. MAIHEM provides solutions for automated quality assurance to ensure high performance and safety of AI systems. As such, MAIHEM is a strategic ally in navigating the complex compliance landscape for providers and deployers of conversational AI.

Please reach out to us if you want to learn how we can help your organization.
