The rise in artificial intelligence (AI) usage is prompting new laws, regulations, and ethical standards. As a result, 77% of companies view AI compliance as a top priority, and 69% have already adopted responsible AI practices to check their AI compliance status and identify potential compliance risks in their business.1
Explore what AI compliance is, its challenges, and real-life examples where AI models failed to comply with ethical standards and the law:
What is AI compliance?
AI compliance refers to the process of ensuring that AI systems comply with all relevant laws, regulations, and ethical standards. This involves making sure that:
- AI tools are not used in ways that are illegal, discriminatory, deceptive, or harmful.
- The data used to train these systems is collected and utilized in a legal and ethical manner.
- AI technologies are employed responsibly and for the benefit of society.
What are the benefits of AI compliance?
AI compliance can ensure:
- Regulatory compliance and risk management, by ensuring the legal and ethical use of AI systems.
- Individuals’ privacy and security, by ensuring the proper handling of personal data.
- Transparent decision-making processes, leading to more accurate and trustworthy AI outputs.
- Interoperability of AI systems, facilitating smoother integration with other systems and technologies and improving efficiency and collaboration across platforms.
- Protection of organizations from potential legal and financial risks, such as fines, penalties, or legal action.
- A better reputation for the organization and greater trust among customers, stakeholders, and the public, by demonstrating a commitment to ethical AI practices.
Why is AI compliance important?
AI compliance is gaining importance due to:
- Increasing adoption of AI: AI stats suggest that:
- 90% of commercial enterprise apps are expected to use AI by next year.
- 9 out of 10 top businesses have ongoing investments in AI.
- Surge in interest in generative AI: As IT automation trends show:
- Since the launch of ChatGPT in 2022, businesses have reported a 97% increase in interest in developing generative AI models.
- Adoption rates of machine learning pipelines to enhance generative AI strategies have risen by 72%.
- Need for effective data governance: According to AI stats:
- With generative AI expected to create 10% of all generated data by 2025, effective data governance is crucial for ensuring data integrity and regulatory compliance.
- Rising ethical concerns: Real-life cases of lacking AI compliance and responsible AI practices, such as biased models and chatbots exhibiting discriminatory behavior and hate speech, have raised ethical concerns. For more real-life examples, see below.
Real-life examples of a lack of AI compliance
Here are some real-life examples of companies that faced reputational damage or postponed their AI projects due to unethical outcomes. These incidents prompted the companies involved to invest in AI compliance management and responsible AI efforts.
1. Deepfakes
Deepfakes are AI-generated media that convincingly alter appearances, voices, or actions, and can be used unethically for:
- Financial fraud, with scammers impersonating voices to trigger unauthorized money transfers.
- Cyberbullying, creating fake and harmful images or videos for harassment.
- Data manipulation, misleading the media, altering public perception, affecting elections, or causing crises.
- False testimony, producing fake evidence in legal proceedings and risking wrongful convictions.
- Privacy violations, creating unauthorized explicit content, often targeting individuals without their consent.
For example, a deepfake video falsely showing Singapore’s then-Prime Minister Lee Hsien Loong promoting an investment product highlights the dangers of artificial intelligence in spreading misinformation.2
2. Gender bias in AI-based hiring tool
In 2018, Amazon discontinued an AI-based hiring tool after it was found to be biased against women.3 The machine learning model behind the tool tended to favor male candidates, reflecting the male dominance in the tech industry. This bias led to concerns about the fairness and accuracy of AI in hiring decisions, prompting Amazon to remove the tool to avoid perpetuating gender inequality.
3. Racial bias
3.1 Racial bias in COMPAS
The COMPAS tool, used to predict the likelihood of reoffending among U.S. defendants, was found to exhibit racial bias.4 A 2016 ProPublica investigation revealed that COMPAS was more likely to classify black defendants as high risk compared to white defendants, even when controlling for factors like prior crimes and age. Its biased results included the following (a minimal sketch of how such group-level error rates can be computed follows the list):
- It misclassified almost twice as many black defendants (45%) as high risk compared to white defendants (23%).
- It incorrectly labeled more white defendants as low risk, with 48% of them reoffending, compared to 28% of black defendants.
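To make these figures concrete, here is a minimal sketch of how such group-level error rates can be computed from prediction records. The records and field names are hypothetical illustrations, not ProPublica’s actual dataset.

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended).
# Illustrative values only, not the real COMPAS data.
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", False, True), ("white", False, False), ("white", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not predicted_high:
            c["fn"] += 1  # labeled low risk but reoffended
    else:
        c["neg"] += 1
        if predicted_high:
            c["fp"] += 1  # labeled high risk but did not reoffend

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0  # non-reoffenders flagged high risk
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0  # reoffenders labeled low risk
    print(f"{group}: false positive rate={fpr:.0%}, false negative rate={fnr:.0%}")
```

Comparing these two rates across groups is essentially the disparity ProPublica reported: roughly double the false positive rate for black defendants and a much higher false negative rate for white defendants.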
3.2 Racial bias in a U.S. healthcare algorithm
An AI algorithm used in U.S. hospitals to predict patient needs was biased against black patients.5
The algorithm based its predictions on healthcare costs, failing to account for racial disparities in healthcare spending. As a result, black patients were given lower risk scores and received less care than white patients with similar health conditions. This bias led to unequal access to necessary medical care.
4. Discriminatory behavior of chatbots
4.1 Tay
In 2016, Microsoft launched Tay, a chatbot on Twitter intended to learn from user interactions.6 Within 24 hours, Tay began posting racist, transphobic, and antisemitic tweets after learning from inflammatory messages sent by users.7 Despite initial data filtering efforts, Tay’s behavior highlighted the dangers of AI systems learning from public interactions without proper safeguards.

4.2 Neuro-sama
Another example is Neuro-sama, an AI-powered VTuber who streams on Twitch and interacts with viewers as if she were a human streamer.8
In 2023, her Twitch channel was temporarily banned due to hateful conduct, likely related to controversial comments made by the AI, including questioning the Holocaust. Following this incident, the creator, Vedal, updated the chat filter to prevent similar issues.
Check out more ethical AI use cases and real-life examples.
AI compliance challenges
Here are some AI compliance challenges that require organizations to implement dedicated tools and practices:
1. Navigating global regulations
AI compliance involves meeting a variety of international regulations, such as the EU AI Act, US Executive Orders, and Canada’s AIDA. Each of these regulations has unique requirements that are constantly evolving, creating a complex landscape for organizations operating globally. Compliance demands careful alignment of AI systems with the specific legal frameworks of each region to avoid penalties and legal issues.
The table below summarizes the current legal requirements AI models should comply with:
Countries/Organizations | Stage | Focus | Coming Next |
---|---|---|---|
OECD | Governance | 42 signatories. | |
The EU | Regulation | Binding for high-risk activities (sector × impact); optional, with the possibility of a label, for others. | Entered into force on August 1, 2024 |
Canada | Development | Binding private-sector privacy framework, including AI explainability. | Bill C-11 under review |
Singapore | Compliance | Positive, non-sanction-based approach. | |
The USA | Governance | Federal guidelines issued to prepare the ground for industry-specific guidelines or regulation. | FTC plans to go after companies using and selling biased algorithms |
The UK | Governance | UK AI Council AI Roadmap released in January 2021. | Work launched by the Office for AI |
Australia | Governance | Detailed guidelines issued, integrating ethical principles and a strong focus on end-customer protection. | Further guidance |
2. Risk-based regulation
The EU AI Act introduced risk categories for AI systems (unacceptable, high, limited, and low), with each category carrying specific regulatory obligations. High-risk AI systems require more stringent compliance measures, including thorough documentation and transparency protocols.
However, it is challenging to assess the risk level of each AI system and ensure that it meets the corresponding regulatory requirements. For instance, 47% of organizations have an AI risk management framework, yet 70% lack ongoing monitoring and controls. Misclassification can lead to non-compliance and significant repercussions.9
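As an illustration of what this triage step might look like in practice, here is a minimal sketch that maps an AI system’s intended use to an EU AI Act risk tier and lists example obligations. The use-case labels and obligation lists are simplified assumptions for illustration, not legal guidance.

```python
# Simplified, illustrative mapping of use cases to EU AI Act risk tiers.
# Real classification depends on the Act's annexes and legal review.
RISK_TIERS: dict[str, tuple[str, list[str]]] = {
    "social_scoring":   ("unacceptable", ["prohibited from deployment"]),
    "hiring_screening": ("high", ["conformity assessment", "documentation", "human oversight"]),
    "customer_chatbot": ("limited", ["transparency: disclose that users interact with AI"]),
    "spam_filtering":   ("low", ["voluntary codes of conduct"]),
}

def classify(use_case: str) -> tuple[str, list[str]]:
    """Return (risk tier, example obligations) for a known use case."""
    if use_case not in RISK_TIERS:
        # Unknown systems are escalated rather than defaulted to low risk,
        # since misclassification can itself be a compliance failure.
        raise ValueError(f"Unmapped use case '{use_case}': escalate to legal review")
    return RISK_TIERS[use_case]

tier, obligations = classify("hiring_screening")
print(tier, obligations)  # high ['conformity assessment', 'documentation', 'human oversight']
```

Raising an error on unmapped systems, rather than silently assuming a low tier, reflects the point above: misclassification itself is a compliance risk.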
3. Managing new obligations
New laws, such as the AI Act, AI Liability Directive, and Product Liability Directive, impose additional responsibilities on organizations. These laws require the implementation of safety mechanisms, regular audits, and thorough documentation for AI systems.
Organizations must adapt their processes to meet these new standards, which can be resource-intensive and may require restructuring existing compliance practices around the AI Act’s risk-based approach.
4. Coordinating across the compliance team
AI compliance requires collaboration across multiple teams, including legal, data governance, and technical development. Each team has a role in ensuring that AI systems align with regulatory requirements.
Effective coordination is essential to avoid miscommunication and ensure that all aspects of compliance are addressed. The continuous monitoring and adjustment of AI systems to maintain compliance adds to the complexity.
5. Cross-functional responsibility
AI compliance is often confined to the Chief Data Officer (CDO) or an equivalent role, but this narrow focus can be limiting. Only 4% of organizations have a cross-functional team dedicated to AI compliance.10
Broad organizational commitment and involvement from senior leadership are essential to establish compliance as a priority across all functions and to secure the necessary resources.
6. Technical safeguards
Ensuring that AI algorithms adhere to ethical guidelines, transparency, and data protection principles is a significant challenge, particularly for high-risk systems.
Compliance requires developing algorithms that are fair, non-discriminatory, and secure, which can be technically demanding. Organizations must invest in expertise and tools to meet these standards without hindering innovation.
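As one hedged example of such a safeguard, the sketch below checks a model’s selection rates across a sensitive attribute using the open-source Fairlearn library. The data is synthetic, and the 10% threshold is an assumed internal policy figure, not a regulatory standard.

```python
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate

# Synthetic, illustrative data: true labels, model predictions, and a
# sensitive attribute value for each instance.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Selection rate (share of positive predictions) per group.
frame = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)

# Demographic parity difference: gap between the highest and lowest group
# selection rates; 0 means parity.
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
THRESHOLD = 0.10  # assumed internal policy threshold, not a legal standard
if gap > THRESHOLD:
    print(f"Fairness gate failed: selection-rate gap {gap:.2f} exceeds {THRESHOLD}")
```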
AI compliance tools
An AI compliance tool is a centralized platform where technical, business, and risk and compliance teams can collaborate, document, and manage the compliance of AI projects to navigate the complex regulatory landscape associated with AI systems.
Some technologies that can help achieve AI compliance include:
Broad technologies
- AI governance tools designed to monitor, manage, and enforce policies around AI systems to ensure they meet regulatory standards (a minimal audit-logging sketch follows this list).
- Responsible AI platforms that help ensure AI systems are ethical, transparent, and fair, helping organizations meet compliance requirements.
- LLMOps (Large Language Model Operations) tools that provide operational frameworks for managing large language models with compliance and ethical considerations in mind.
- MLOps (Machine Learning Operations) tools that integrate machine learning models into production environments while ensuring compliance, security, and operational efficiency.
- Data governance tools that manage and oversee data assets, ensuring that data handling practices meet regulatory and organizational standards.
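Much of what these platforms provide reduces to keeping an auditable record of what was trained, on which data, and with what results. Here is a minimal sketch of such an audit log entry; the field names and values are hypothetical, and real governance tools record far richer metadata.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_model_for_audit(model_name: str, version: str, training_data: bytes,
                        metrics: dict, path: str = "audit_log.jsonl") -> dict:
    """Append a verifiable record of a model release to an audit log."""
    entry = {
        "model": model_name,
        "version": version,
        # Hash of the training data lets auditors verify what was used
        # without storing the (possibly personal) data itself.
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "metrics": metrics,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical model name and metrics, for illustration only.
log_model_for_audit("credit_scorer", "1.3.0", b"<training dataset bytes>",
                    {"accuracy": 0.91, "selection_rate_gap": 0.04})
```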

Specific technologies
- Data privacy management tools: software designed to manage and protect sensitive data, ensuring compliance with data protection regulations like GDPR and CCPA.
- Model explainability tools: technologies that provide transparency into AI decision-making processes, aiding in meeting regulatory requirements for explainability and fairness (see the sketch after this list).
- AI risk management platforms: tools that help identify, assess, and mitigate risks associated with AI systems, ensuring compliance with regulatory and ethical standards.
- Bias detection and mitigation tools: technologies that detect and reduce bias in AI models, helping organizations meet compliance requirements related to fairness and non-discrimination.
- Security and compliance monitoring tools: solutions that continuously monitor AI systems for security threats and compliance with regulatory standards, providing alerts and automated responses when issues are detected.
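As a hedged illustration of the explainability category, the sketch below uses scikit-learn’s permutation importance on synthetic data to show which features drive a model’s predictions. It stands in for dedicated explainability products, which offer much deeper analyses.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic, illustrative dataset standing in for real application data.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade
# performance? Larger drops mean the model relies on that feature more,
# which is the kind of evidence asked for in explainability reviews.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={importance:.3f}")
```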
Further reading
Explore more on responsible AI, ethical AI, and AI compliance-related technologies and developments by checking our:
- Explainable AI (XAI): Guide to enterprise-ready AI
- Bias in AI: What it is, Types, Examples & 6 Ways to Fix it
- Responsible AI: 4 Principles & Best Practices
- Generative AI Ethics: Top 6 Concerns
External sources
- 1. “From AI Compliance to Competitive Advantage.” Accenture.
- 2. Ng Hong Siang & Firdaus Hamzah (2023). “PM Lee urges vigilance against deepfakes after ‘completely bogus’ video of him emerges.” CNA. Retrieved August 19, 2024.
- 3. “Amazon scrapped ‘sexist AI’ tool.” BBC News.
- 4. Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica. Retrieved August 19, 2024.
- 5. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). “Dissecting racial bias in an algorithm used to manage the health of populations.” Science. Retrieved August 19, 2024.
- 6. “Learning from Tay’s introduction.” The Official Microsoft Blog.
- 7. “Microsoft scrambles to limit PR damage over abusive AI bot Tay.” The Guardian (2016). Retrieved October 1, 2024.
- 8. “Neuro-sama.” Wikipedia. Retrieved August 19, 2024.
- 9. “From AI Compliance to Competitive Advantage.” Accenture.
- 10. Ibid.