AIMultiple Research
Updated on Jan 9, 2025

AI Compliance: Top 6 challenges & case studies in 2025

The rise in artificial intelligence (AI) usage is prompting new laws, regulations, and ethical standards. Accordingly, 77% of companies view AI compliance as a top priority, and 69% have already adopted responsible AI practices to check their AI compliance status and identify potential compliance risks in their business. 1

Explore what AI compliance is, its challenges and real-life examples where AI models fail to comply with ethical standards and law:

Worldwide search trends for AI Compliance until 05/08/2025

What is AI compliance?

AI compliance refers to the process of ensuring that AI systems comply with all relevant laws, regulations, and ethical standards. This involves making sure that:

  • AI tools are not used in ways that are illegal, discriminatory, deceptive, or harmful. 
  • The data used to train these systems is collected and utilized in a legal and ethical manner. 
  • AI technologies are employed responsibly and for the benefit of society.

What are the benefits of AI compliance?

AI compliance helps ensure: 

  • Regulatory compliance and risk management, through the legal and ethical use of AI systems.
  • Individuals’ privacy and security, through the proper handling of personal data. 
  • Sound decision-making processes, leading to more accurate and trustworthy AI outputs.
  • Interoperability of AI systems, facilitating smoother integration with other systems and technologies and improving efficiency and collaboration across platforms. 
  • Protection of organizations from legal and financial risks, such as fines, penalties, or legal action.
  • A better reputation and greater trust with customers, stakeholders, and the public, by demonstrating a commitment to ethical AI practices.

Why is AI compliance important?

AI compliance gains importance due to:

  1. Increasing adoption of AI: AI stats suggest that
    • 90% of commercial enterprise apps are expected to use AI by next year.
    • 9 out of 10 top businesses have ongoing investments in AI.
  2. Surge in interest in generative AI: As IT automation trends show:
    • Since the launch of ChatGPT in 2022, businesses have reported a 97% increase in interest in developing generative AI models.
    • Adoption rates of machine learning pipelines to enhance generative AI strategies have risen by 72%.
  3. Need for effective data governance: According to AI stats:
    • With generative AI expected to create 10% of all generated data by 2025, effective data governance is crucial for ensuring data integrity and regulatory compliance.
  4. Rising ethical concerns: Driven by real-life examples of missing AI compliance and responsible AI practices, such as biased models and chatbots exhibiting discriminatory behavior and hate speech. For more real-life examples, see below:

Real-life examples of a lack of AI compliance

Here are some real-life examples of companies facing reputational damage and postponing AI projects due to unethical outcomes. These incidents led the companies involved to invest in AI compliance management and responsible AI efforts. 

1. Deepfakes

Deepfakes are AI-generated media that convincingly alter appearances, voices, or actions, and can be unethically used for:

  • Financial fraud by scammers to impersonate voices for unauthorized money transfers.
  • Cyberbullying to create fake and harmful images or videos for harassment.
  • Data manipulation to mislead media, alter public perception, affect elections, or cause crises.
  • False testimony to produce fake evidence in legal proceedings, risking wrongful convictions.
  • Privacy violations to create unauthorized and explicit content, often targeting individuals without their consent.

For example, a video falsely featuring Singapore’s Prime Minister Lee Hsien Loong promoting an investment product highlights the dangers of artificial intelligence in spreading misinformation.2 Here is the deepfake video of the Prime Minister: 

2. Gender bias in AI-based hiring tool

In 2018, Amazon discontinued an AI-based hiring tool after it was found to be biased against women.3 The machine learning model behind the tool tended to favor male candidates, reflecting the male dominance in the tech industry. This bias led to concerns about the fairness and accuracy of AI in hiring decisions, prompting Amazon to remove the tool to avoid perpetuating gender inequality.

3. Racial bias

3.1 Racial bias in COMPAS

The COMPAS tool, used to predict the likelihood of reoffending among U.S. criminals, was found to exhibit racial bias.4 A 2016 ProPublica investigation revealed that COMPAS was more likely to classify black defendants as high risk compared to white defendants, even when controlling for factors like prior crimes and age. Some of its biased results included:

  • Misclassifying almost twice as many black defendants (45%) as high risk compared to white defendants (23%).
  • Incorrectly labeling more white defendants as low risk, with 48% of them reoffending compared to 28% of black defendants.
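The disparity ProPublica measured can be expressed as a gap in error rates between groups. Below is a minimal sketch, using hypothetical toy labels rather than the actual COMPAS records, of how such a false positive rate gap is computed:

```python
def false_positive_rate(actual, predicted_high_risk):
    """Share of people who did NOT reoffend (actual == 0)
    but were still flagged as high risk (predicted == 1)."""
    fp = sum(1 for a, p in zip(actual, predicted_high_risk) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted_high_risk) if a == 0 and p == 0)
    return fp / (fp + tn)

# Hypothetical toy labels for two groups; 1 = reoffended / flagged high risk
group_a_actual, group_a_pred = [0, 0, 0, 0, 1, 1], [1, 1, 0, 0, 1, 0]
group_b_actual, group_b_pred = [0, 0, 0, 0, 1, 1], [1, 0, 0, 0, 1, 1]

fpr_a = false_positive_rate(group_a_actual, group_a_pred)  # 0.5
fpr_b = false_positive_rate(group_b_actual, group_b_pred)  # 0.25

# A ratio far from 1.0 signals the kind of group disparity found in COMPAS
disparity = fpr_a / fpr_b  # 2.0
```

An audit along these lines, run once per protected group before deployment, is the kind of check a compliance team would automate.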

3.2 Racial bias in U.S. healthcare algorithm

An AI algorithm used in U.S. hospitals to predict patient needs was biased against black patients.5

The algorithm based its predictions on healthcare costs, failing to account for racial disparities in healthcare payment. As a result, black patients were given lower risk scores and received less care compared to white patients with similar health conditions. This bias led to unequal access to necessary medical care.

4. Discriminatory behavior of chatbots 

4.1 Tay

In 2016, Microsoft launched Tay, a chatbot on Twitter intended to learn from user interactions.6 Within 24 hours, Tay began posting racist, transphobic, and antisemitic tweets after learning from inflammatory messages sent by users. Despite initial data filtering efforts, Tay’s behavior highlighted the dangers of AI systems learning from public interactions without proper safeguards.

Figure 1: Tay’s tweets example7

4.2 Neuro-sama

Another example is Neuro-sama, an AI-powered VTuber who streams on Twitch and interacts with viewers as if she were a human streamer.8

In 2023, her Twitch channel was temporarily banned due to hateful conduct, likely related to controversial comments made by the AI, including questioning the Holocaust. Following this incident, the creator, Vedal, updated the chat filter to prevent similar issues. 

Here is an image of Neuro-sama:

Figure 2: Neuro-sama the AI VTuber.

Check out more ethical AI use cases and real-life examples. 

AI compliance challenges

Here are some AI compliance challenges that require implementing tools and practices:

1. Navigating global regulations

AI compliance involves meeting a variety of international regulations, such as the EU AI Act, US Executive Orders, and Canada’s AIDA. Each of these regulations has unique requirements that are constantly evolving, creating a complex landscape for organizations operating globally. Compliance demands careful alignment of AI systems with the specific legal frameworks of each region to avoid penalties and legal issues.

The summary below outlines the current legal requirements AI models should comply with: 

Last updated: 08-19-2024

OECD (Stage: Governance)
  • 42 signatories.
  • 5 principles for responsible stewardship of trustworthy AI: inclusive growth; human-centered values and fairness; transparency and explainability; robustness; accountability.
  • Recommendations for national policies.

The EU (Stage: Regulation)
  • Binding for high-risk activities (Sector x Impact); optional, with the possibility of a label, for others.
  • Fines of up to 4% of annual revenues for non-compliance.
  • Specifically targets model fairness, robustness, and auditability, mixing policies and controls.
  • Integrates strong ethical considerations on environmental and social impacts.
  Coming next: Implemented on August 1, 2024.

Canada (Stage: Development)
  • Binding private-sector privacy framework including AI explainability.
  • Regulator can require demonstration of compliance and issue fines.
  • Largely inspired by EU regulation and discussions.
  Coming next: Bill C-11 under review.

Singapore (Stage: Compliance)
  • Positive, non-sanction-based approach.
  • Focuses on practical steps and best practices for implementing AI governance at the organization level.
  • Any B2C model should be explainable, transparent, and fair.

The USA (Stage: Governance)
  • Federal guidelines issued to prepare the ground for industry-specific guidelines or regulation.
  • Focus on public trust and fairness; no broader ethics considerations.
  Coming next: FTC plans to go after companies using and selling biased algorithms.

The UK (Stage: Governance)
  • UK AI Council AI Roadmap released in January 2021.
  • Appeal for increased investment and governance.
  • Based on the principles of the Alan Turing Institute.
  Coming next: Work launched by the Office for AI.

Australia (Stage: Governance)
  • Detailed guidelines issued, integrating ethics and a strong focus on end-customer protection.
  Coming next: Further guidance.

2. Risk-based regulation

The EU AI Act introduced risk categories for AI systems (unacceptable, high, limited, and minimal), with each category carrying specific regulatory obligations. High-risk AI systems require more stringent compliance measures, including thorough documentation and transparency protocols. 

However, it is challenging to assess the risk level of each AI system and ensure that it meets the corresponding regulatory requirements. For instance, 47% of organizations have an AI risk management framework, yet 70% lack ongoing monitoring and controls. Misclassification can lead to non-compliance and significant repercussions.9
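As a rough illustration of the tiering problem, an inventory of AI use cases can be triaged against the Act's risk categories. The mapping below is a hypothetical sketch for internal triage only; actual classification depends on the Act's legal definitions (e.g., the Annex III high-risk list), not keyword lookup:

```python
# Hypothetical tier mapping for internal triage; NOT a legal determination.
RISK_TIERS = {
    "social_scoring": "unacceptable",      # banned outright under the Act
    "biometric_identification": "high",
    "hiring": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",         # transparency obligations apply
    "spam_filter": "minimal",
}

def triage(use_case: str) -> str:
    # Unknown systems default to manual legal review, never to "minimal"
    return RISK_TIERS.get(use_case, "needs_review")

print(triage("hiring"))        # high
print(triage("game_npc_ai"))   # needs_review
```

Defaulting unknown systems to review rather than to the lowest tier is the safer design, since misclassification is exactly the failure mode described above.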

3. Managing new obligations

New laws, such as the AI Act, AI Liability Directive, and Product Liability Directive, impose additional responsibilities on organizations. These laws require the implementation of safety mechanisms, regular audits, and thorough documentation for AI systems.

Organizations must adapt their processes to meet these new standards, which can be resource-intensive and may require restructuring existing compliance practices, considering the AI Act’s risk-based approach. 

4. Coordinating across the compliance team

AI compliance requires collaboration across multiple teams, including legal, data governance, and technical development. Each team has a role in ensuring that AI systems align with regulatory requirements. 

Effective coordination is essential to avoid miscommunication and ensure that all aspects of compliance are addressed. The continuous monitoring and adjustment of AI systems to maintain compliance adds to the complexity. 

5. Cross-functional responsibility

AI compliance is often confined to the Chief Data Officer (CDO) or equivalent, but this narrow focus can be limiting. Only 4% of organizations have a cross-functional team dedicated to AI compliance. 10

Broad organizational commitment and involvement from senior leadership are essential to establish compliance as a priority across all functions and to secure the necessary resources.

6. Technical safeguards

Ensuring that AI algorithms adhere to ethical guidelines, transparency, and data protection principles is a significant challenge, particularly for high-risk systems. 

Compliance requires developing algorithms that are fair, non-discriminatory, and secure, which can be technically demanding. Organizations must invest in expertise and tools to meet these standards without hindering innovation.

AI compliance tools

An AI compliance tool is a centralized platform where technical, business, and risk and compliance teams can collaborate, document, and manage the compliance of AI projects to navigate the complex regulatory landscape associated with AI systems. 

Some technologies that can achieve AI compliance include:

Broad technologies

  1. AI governance tools designed to monitor, manage, and enforce policies around AI systems to ensure they meet regulatory standards.
  2. Responsible AI platforms help ensure AI systems are ethical, transparent, and fair, supporting organizations in meeting compliance requirements.
  3. LLMOps (Large Language Model Operations) provide operational frameworks and tools for managing large language models with compliance and ethical considerations in mind.
  4. MLOps (Machine Learning Operations) integrate machine learning models into production environments while ensuring compliance, security, and operational efficiency.
  5. Data governance can manage and oversee data assets, ensuring that data handling practices meet regulatory and organizational standards.
Figure 3: AI Compliance broad technologies

Specific technologies 

  1. Data Privacy Management Tools
    Software designed to manage and protect sensitive data, ensuring compliance with data protection regulations like GDPR and CCPA.
  2. Model Explainability Tools
    Technologies that provide transparency into AI decision-making processes, aiding in meeting regulatory requirements for explainability and fairness.
  3. AI Risk Management Platforms
    Tools that help identify, assess, and mitigate risks associated with AI systems, ensuring compliance with regulatory and ethical standards.
  4. Bias Detection and Mitigation Tools
    Technologies that detect and reduce bias in AI models, helping organizations meet compliance requirements related to fairness and non-discrimination.
  5. Security and Compliance Monitoring Tools
    Solutions that continuously monitor AI systems for security threats and compliance with regulatory standards, providing alerts and automated responses when issues are detected.
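For the data privacy management category above, a toy version of rule-based redaction shows the underlying idea. This is a minimal sketch with illustrative patterns; real data privacy tools combine NER models, checksums, and context rules rather than two regexes:

```python
import re

# Toy patterns for illustration only; production PII detection is far broader.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before storage or logging."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact jane.doe@example.com or +1 555 123 4567."))
# Contact [EMAIL] or [PHONE].
```

Running such a filter before data reaches training pipelines or logs is one concrete way the GDPR/CCPA obligations mentioned above translate into engineering practice.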

Further reading

Explore more on responsible AI, ethical AI, and AI compliance-related technologies and developments by checking our related articles.

External sources

Hazal is an industry analyst at AIMultiple, focusing on process mining and IT automation.
