
Handle AI Ethics Dilemmas with Frameworks & Tools

Cem Dilmegani
updated on Sep 9, 2025

Warner Bros. is suing Midjourney, alleging that its AI image generator unlawfully reproduces copyrighted characters, including Superman and Batman. The lawsuit highlights a broader issue: AI systems trained on copyrighted works raise significant concerns about ownership, fairness, and accountability.1

Guidelines such as UNESCO’s Ethical Impact Assessment, along with corporate ethics frameworks like those of SAP and IBM, demonstrate how these issues can be managed through transparency, oversight, and the responsible use of AI tools.

Discover how global principles, governance structures, and practical methodologies are shaping the debate on AI ethics and offering pathways to address the ethical challenges of artificial intelligence.

Foundations and definitions of AI ethics

[Figure: Top 7 foundations and definitions of AI ethics]

Artificial intelligence has become a central part of technological development across various sectors, including healthcare, human resources, and communication. As AI tools expand into everyday life, the ethical implications of their use require structured attention. AI ethics refers to the moral principles and ethical framework that guide AI development, deployment, and evaluation.

Organizations such as SAP and IBM define AI ethics as ensuring trustworthy AI that respects human dignity, reduces AI risks, and addresses ethical concerns throughout the AI lifecycle.

UNESCO frames AI ethics as part of a global human rights approach. This perspective emphasizes the protection of civil liberties and the development of responsible artificial intelligence systems that foster peaceful, inclusive, and sustainable societies.

Several shared ethical principles guide responsible artificial intelligence:

  • Do no harm and proportionality: AI systems must serve legitimate purposes without creating unnecessary harm.
  • Safety and security: AI algorithms should be resilient against adversarial attacks and system failures.
  • Privacy and data protection: Data collection should respect human rights and adhere to strict governance standards.
  • Transparency and explainability: Human interaction with AI programs should be understandable, with decision-making processes explained in accessible terms.
  • Fairness and non-discrimination: The development of artificial intelligence must address historical biases in training data to prevent unethical outcomes (see the bias-check sketch after this list).
  • Human oversight: Human judgment should remain in control in areas such as criminal justice, healthcare, and autonomous vehicles.
  • Sustainability: AI technologies should be evaluated for their environmental and economic impact.
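
To make the fairness principle concrete, the sketch below computes a demographic parity gap, one common first-pass bias check on model outcomes. The data, group labels, and 0.1 tolerance are hypothetical illustrations, not values prescribed by any framework cited here.

```python
# Minimal sketch: checking a model's outcomes for demographic parity.
# The groups, decisions, and 0.1 tolerance are illustrative assumptions,
# not a prescribed standard.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-outcome rates between groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring decisions (1 = offer, 0 = reject) per applicant group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance only
    print("Gap exceeds tolerance; flag for human review.")
```

A check like this is only a starting point: a small gap does not prove fairness, and the right metric depends on the application and on human review.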

Key AI ethics values

Ethical principles are only practical when they are grounded in clear values that guide the development and use of AI systems. These values shape the ethical framework that data scientists, business leaders, and government officials rely on when making decisions about AI applications.

They help ensure that AI technologies remain aligned with human needs and do not result in unethical outcomes.

Human rights and dignity

AI programs must respect fundamental rights and the value of human life. Systems designed for healthcare, criminal justice, or human resources carry significant potential risks if they undermine freedoms or mistreat individuals.

Embedding human dignity into AI development supports responsible AI and protects against misuse by private companies or big tech companies.

Environmental responsibility

AI systems require vast computational resources, and training advanced AI models can consume large amounts of energy. Ethical considerations must therefore include the ecological footprint of AI development.

Aligning AI technologies with sustainable goals ensures that intelligent systems contribute to broader societal priorities rather than accelerating environmental harm.
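
To make the footprint concern tangible, here is a hedged back-of-envelope estimate of training energy and emissions. Every input below (GPU count, power draw, run length, PUE, grid intensity) is an assumed placeholder, not a measurement from any cited source.

```python
# Back-of-envelope energy estimate for a hypothetical training run.
# All inputs are assumptions for illustration only.

num_gpus = 512            # assumed accelerator count
avg_power_kw = 0.4        # assumed average draw per GPU, in kW
training_hours = 24 * 30  # assumed one-month run
pue = 1.3                 # assumed data-center power usage effectiveness

energy_kwh = num_gpus * avg_power_kw * training_hours * pue
print(f"Estimated energy: {energy_kwh:,.0f} kWh")

# Rough emissions, assuming a grid intensity of 0.4 kg CO2e per kWh.
grid_kg_per_kwh = 0.4
print(f"Estimated emissions: {energy_kwh * grid_kg_per_kwh / 1000:,.1f} t CO2e")
```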

Diversity and inclusion

AI research and AI development should actively involve diverse perspectives to prevent the reinforcement of historical biases. Without attention to diversity, AI algorithms risk replicating human biases in areas like hiring, lending, or social services.

Promoting inclusive participation in the design of AI tools helps make outcomes more ethically acceptable and supports equitable access across entire industries.

Social cohesion

The use of AI has the power to strengthen or weaken social stability. When AI programs are designed with fairness, accountability, and transparency, they can contribute to building peaceful and just societies. If left unchecked, technological development may deepen inequalities and fragment communities.

Cooperation among governments, private companies, and civil society in promoting AI ethics helps ensure that AI applications support social cohesion rather than undermining it.

Governance and oversight

SAP on AI ethics guidelines

Governance structures ensure that ethical frameworks are effectively implemented in practice. SAP utilizes an AI Ethics Steering Committee and an AI Ethics Handbook to assess the ethical implications of its AI tools.2

  • The AI Ethics Steering Committee comprises senior leaders from across the company, providing direction and approving high-risk use cases. It acts as the central body for decision-making on the ethical implications of AI tools.
  • Complementing this internal function is the AI Ethics Advisory Panel, an external group of experts from academia, policy, and industry who provide independent feedback on SAP’s AI ethics processes. Coordination between these groups is managed by the AI Ethics Office, which maintains the policy, organizes reviews, and serves as the primary point of contact for ethics-related inquiries.
  • To support operational alignment, SAP established the Trustworthy AI Workstream, which ensures adherence to the AI ethics policy across departments, develops training materials, and maintains a global community for responsible AI.
  • Governance also includes structured processes for evaluating use cases. Each AI system is categorized according to risk: minimal-risk cases may proceed with standard checks, high-risk cases require in-depth review and expert panels, and forbidden cases are blocked entirely (a minimal sketch of this triage appears after this list). Periodic reviews accompany these evaluations to ensure continued compliance. When issues arise, employees and external stakeholders can raise concerns through the company’s Speak Out tool, and corrective measures are applied if fundamental rights are at risk.
  • This governance framework illustrates how oversight is embedded in organizational practice. By combining internal committees, external panels, systematic risk assessments, and transparent feedback mechanisms, SAP promotes AI ethics as a continuous process rather than a one-time compliance activity.
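
The three-tier triage described in the list above can be pictured as a small decision function. The tiers follow SAP’s published policy, but the function names, fields, and example rules below are hypothetical, not SAP’s actual tooling.

```python
# Minimal sketch of a three-tier use-case triage: minimal-risk cases proceed
# with standard checks, high-risk cases go to in-depth review, forbidden
# cases are blocked. All names and rules here are hypothetical.

from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    HIGH = "high"
    FORBIDDEN = "forbidden"

FORBIDDEN_PURPOSES = {"social_scoring", "covert_surveillance"}  # illustrative
HIGH_RISK_DOMAINS = {"hiring", "credit", "healthcare"}          # illustrative

def triage(purpose: str, domain: str) -> RiskTier:
    """Classify a proposed AI use case into one of three risk tiers."""
    if purpose in FORBIDDEN_PURPOSES:
        return RiskTier.FORBIDDEN
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

def route(tier: RiskTier) -> str:
    """Map a risk tier to the review path it triggers."""
    return {
        RiskTier.MINIMAL: "proceed with standard checks",
        RiskTier.HIGH: "escalate to expert panel for in-depth review",
        RiskTier.FORBIDDEN: "block the use case",
    }[tier]

print(route(triage("resume_screening", "hiring")))  # takes the high-risk path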

IBM AI Ethics Board

IBM has developed a structured governance system to translate AI ethics from theory into practical application. The AI Ethics Board is a central body comprising leaders from various disciplines across the company. It oversees the deployment and use of AI, reviews individual projects for potential risks, and supports workstreams focused on policy advocacy, education, research, and ethical innovation.3

  • This governance framework spans from high-level principles to actionable guidance. The board provides oversight, guiding teams as they develop and deploy AI in a way that balances innovation with accountability. Its role includes enabling sustainable and transparent AI practices, not simply through abstract concepts, but through case studies and practical tools that other organizations can follow as a model for their own ethics and governance.
  • The board places particular emphasis on an AI ethics by design approach, ensuring that human judgment remains central in AI development. AI is designed to empower, not replace, decision-making. This human-centric focus promotes ethics throughout the AI lifecycle, creating systems that reflect values such as transparency, fairness, and trust.

UNESCO’s recommendations

UNESCO’s recommendation highlights key policy areas for embedding ethical principles into practice.4 These include:

  • Data governance: Ensuring responsible data collection and use to protect civil liberties.
  • Education and research: Expanding AI literacy and training in ethical aspects for developers and data scientists.
  • Healthcare and wellbeing: Regulating AI models in medicine to protect human life and prevent unethical outcomes.
  • Environment and ecosystems: Addressing the energy demands of large AI models and supporting sustainable practices.
  • Economy and labour: Preparing human resources for shifts in employment caused by intelligent systems and smart machines.
  • Gender equality: Promoting equal access and preventing biases in AI development.

Implementation and tools

Initiative / Organization | Focus | Key contributions
Readiness Assessment Methodology (RAM) (UNESCO) | National preparedness | Assesses laws, education, infrastructure; identifies gaps; supports ethical AI roadmaps.
Ethical Impact Assessment (EIA) (UNESCO) | AI risk evaluation | Reviews data, bias, transparency, and accountability; ensures human judgment in decisions.
AI ethics in the workplace (IEAI) | Organizational practices | Promotes ethics guidelines, training, and accountability in daily AI use.
Autonomous Driving Ethics (ANDRE) (IEAI) | Self-driving cars | Models real traffic scenarios; focuses on harm minimization and fair risk distribution.
Accountability framework for AI systems (IEAI) | Defining responsibility | Clarifies duties, explanations, and auditability in complex AI applications.
Center for Human-Compatible AI (CHAI) | Value alignment | Develops models aligned with human judgment; publishes research on AI safety.
AI Now Institute | Social impacts | Examines power, labor, environment, and safety; advocates for public interest AI.
AlgorithmWatch | Algorithm oversight | Publishes reports, tracks ethics guidelines, and analyzes platform accountability.

Turning ethical principles into practice requires structured methodologies and clear accountability. Recent initiatives by UNESCO and research projects at the Institute for Ethics in Artificial Intelligence (IEAI) provide tools for governments, private companies, and AI researchers to operationalize ethical frameworks.

Readiness assessment methodology

The Readiness Assessment Methodology (RAM), developed by UNESCO, evaluates a country’s preparedness to implement responsible AI. It examines legal frameworks, government regulations, social and economic conditions, education, technical capacity, and infrastructure. The goal is to identify strengths and gaps in the national AI ecosystem, enabling governments and business leaders to design effective roadmaps for the development of ethical AI.

The RAM supports government officials by providing a structured approach to measuring preparedness. It also enables comparisons across regions and creates a foundation for cooperation. This helps ensure that AI systems are introduced in ways that respect human rights, reduce ethical risks, and remain aligned with trustworthy AI objectives.5
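
As an illustration of how RAM-style results might be used, the sketch below aggregates per-dimension readiness scores and surfaces the weakest areas. The dimensions mirror those named above, but the scores and the simple averaging scheme are hypothetical, not UNESCO's scoring method.

```python
# Sketch: aggregating readiness scores across RAM-style dimensions to
# surface the weakest areas. Scores (0-1) are hypothetical placeholders.

readiness = {
    "legal_frameworks": 0.7,
    "social_and_economic": 0.5,
    "education": 0.4,
    "technical_capacity": 0.6,
    "infrastructure": 0.55,
}

overall = sum(readiness.values()) / len(readiness)
weakest = sorted(readiness.items(), key=lambda kv: kv[1])[:2]

print(f"Overall readiness: {overall:.2f}")
for dimension, score in weakest:
    print(f"Gap to prioritize: {dimension} ({score:.2f})")
```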

Ethical impact assessment

The Ethical Impact Assessment (EIA) focuses on evaluating AI systems across the AI lifecycle. It can be applied before deployment (ex ante) or after deployment (ex post). The assessment process involves AI researchers, project teams, and affected communities to identify potential risks and consequences.

The EIA examines issues such as training data quality, algorithmic bias, transparency, auditability, and accountability. It poses critical questions:

  • Who might be harmed?
  • What form could harm take?
  • What resources are needed to prevent unethical outcomes?

This approach ensures that human judgment is part of the decision-making process and that the ethical use of AI tools is systematically monitored.6
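
One way to operationalize these questions is a structured assessment record that cannot be signed off until each question has an answer. The schema below is a hypothetical sketch built around the three questions above; it is not UNESCO’s official EIA template.

```python
# Hypothetical sketch of an EIA checklist record; not UNESCO's template.

from dataclasses import dataclass, field

@dataclass
class EthicalImpactRecord:
    system_name: str
    stage: str                        # "ex ante" or "ex post"
    who_might_be_harmed: list[str] = field(default_factory=list)
    forms_of_harm: list[str] = field(default_factory=list)
    mitigation_resources: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Complete only when every question has at least one answer."""
        return all([self.who_might_be_harmed, self.forms_of_harm,
                    self.mitigation_resources])

record = EthicalImpactRecord(
    system_name="loan-scoring-model",  # hypothetical system
    stage="ex ante",
    who_might_be_harmed=["applicants with thin credit files"],
    forms_of_harm=["unfair denial of credit"],
    mitigation_resources=["bias audit", "human review of denials"],
)
print("Ready for sign-off:", record.is_complete())
```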

AI ethics in the workplace

Research at IEAI examines how organizations integrate AI ethics into workplace practices. Surveys and case studies reveal that employees are often exposed to AI programs without clear guidance on their ethical aspects. The findings emphasize the importance of internal ethics guidelines and education programs that enable data scientists and managers to address ethical issues effectively.

Workplace research also highlights the importance of embedding accountability in everyday human interaction with AI applications. By promoting AI ethics in organizational settings, companies can reduce ethical challenges related to bias, transparency, and human decision-making.7

Autonomous driving ethics

The Autonomous Driving Ethics (ANDRE) project at IEAI examines the ethical implications of self-driving cars. The project develops criteria for decision-making in traffic situations, with a focus on risk distribution and harm minimization. Rather than relying on hypothetical dilemmas, the research models realistic scenarios where AI algorithms must balance safety for all parties.
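
A toy version of the trade-off the project studies: scoring candidate maneuvers by probability-weighted harm across all affected parties and selecting the minimum. The maneuvers, probabilities, and severities below are invented for illustration and are not ANDRE’s actual model.

```python
# Toy sketch of harm-minimizing maneuver selection. All maneuvers,
# probabilities, and severities are invented for illustration.

candidate_maneuvers = {
    # maneuver: list of (party, collision probability, harm severity 0-1)
    "brake_hard":  [("occupant", 0.10, 0.2), ("cyclist", 0.05, 0.8)],
    "swerve_left": [("occupant", 0.05, 0.3), ("pedestrian", 0.15, 0.9)],
}

def expected_harm(outcomes):
    """Sum probability-weighted severity over everyone affected."""
    return sum(p * severity for _, p, severity in outcomes)

best = min(candidate_maneuvers,
           key=lambda m: expected_harm(candidate_maneuvers[m]))
for maneuver, outcomes in candidate_maneuvers.items():
    print(f"{maneuver}: expected harm {expected_harm(outcomes):.3f}")
print("Selected:", best)
```

Note that pure harm minimization is only one of the criteria discussed; fair risk distribution across parties would add further constraints that this toy score does not capture.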

This work provides an ethical framework for intelligent systems in transportation. It also contributes to discussions on AI regulation, since autonomous vehicles raise questions about responsibility, liability, and the role of human oversight in controlling smart machines.8

Accountability framework for AI systems

IEAI’s accountability research addresses how duties and responsibilities can be defined for complex AI systems. The framework outlines who is accountable, for what actions, toward whom, and how explanations should be provided. It emphasizes the importance of transparency for both private companies and government regulators.

In practice, this means that AI development must include mechanisms for auditability, clear documentation of decision-making processes, and consideration of the ethical implications of potential risks. The framework has been applied to sectors such as financial services and autonomous vehicles, where accountability is critical to public trust and legally compliant use of AI technologies.9
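
The framework’s structure (who is accountable, for what, toward whom, and through which explanation channel) maps naturally onto a simple record type. The schema below is a hypothetical rendering of that structure, not IEAI’s formal specification.

```python
# Hypothetical rendering of an accountability record: who is accountable,
# for what, toward whom, and how explanations are provided.

from dataclasses import dataclass

@dataclass
class AccountabilityEntry:
    accountable_party: str    # who
    action: str               # for what
    affected_party: str       # toward whom
    explanation_channel: str  # how explanations are provided

audit_log = [
    AccountabilityEntry(
        accountable_party="model risk team",
        action="approved credit-scoring model v2 for production",
        affected_party="loan applicants",
        explanation_channel="adverse-action notice with top decision factors",
    ),
]
for entry in audit_log:
    print(f"{entry.accountable_party} is accountable to "
          f"{entry.affected_party} for: {entry.action}")
```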

Center for Human-Compatible Artificial Intelligence (CHAI)

CHAI focuses on aligning artificial intelligence with human values to ensure that AI development remains beneficial and ethical. One of its core research areas is value alignment and human-robot cooperation, where systems are designed to defer to human judgment in uncertain situations.
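
A minimal sketch of the “defer to human judgment when uncertain” idea: the agent acts autonomously only above a confidence threshold and otherwise hands the decision to a person. The threshold and confidence values are hypothetical, not drawn from CHAI’s research.

```python
# Minimal sketch of deferring to human judgment under uncertainty.
# The 0.85 threshold and confidence scores are hypothetical.

CONFIDENCE_THRESHOLD = 0.85

def decide(action: str, confidence: float) -> str:
    """Execute autonomously only when confident; otherwise defer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"execute: {action}"
    return f"defer to human: proposed '{action}' at confidence {confidence:.2f}"

print(decide("approve request", 0.93))  # acts autonomously
print(decide("approve request", 0.60))  # asks a human instead
```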

The center also conducts projects on inverse reinforcement learning to model human preferences and improve how AI systems interpret ethical considerations. CHAI regularly publishes research papers that influence the broader field of AI safety and collaborates with policymakers to integrate its findings into governance debates.10

AI Now Institute

The AI Now Institute researches the impact of artificial intelligence on power, labor, markets, and society. Its work addresses accountability standards, the role of government and industry in shaping AI policy, and the dominance of big tech companies in AI-driven markets.

The institute also examines workers’ data rights, the environmental impact of large-scale AI systems, and the safety risks associated with deploying AI in critical areas.

Through these research areas, AI Now aims to promote transparency, fairness, and the development of AI that serves the public interest rather than narrow commercial goals.11

AlgorithmWatch

AlgorithmWatch investigates and documents the effects of algorithmic decision-making on society. One of its most recognized projects is Automating Society, a recurring report that surveys the use of AI systems and algorithms across Europe.

It also leads the AI Ethics Guidelines Global Inventory, which collects and reviews ethics guidelines from around the world to make them accessible for analysis and comparison. In addition, AlgorithmWatch runs projects on platform accountability, examining how social media algorithms shape public debate and influence democratic processes.12

Philosophical and societal dimensions of AI ethics

AI raises questions that go beyond organizational governance. Ethical challenges arise when AI algorithms influence areas such as criminal justice, economic policy, or autonomous vehicles.

  • Machine ethics explores whether intelligent systems can themselves apply moral principles when making decisions.
  • Friendly AI research examines how AI development can align with human values to prevent potential risks.
  • Existential risks involve concerns that advanced AI models could pose a threat to civil liberties and human life if they evolve beyond human control.
  • Autonomous systems such as self-driving cars and lethal military technologies highlight ethical questions about delegating human judgment to AI systems.

Generative AI raises new ethical concerns regarding misinformation, intellectual property, and the quality of training data:

  • SAP and IBM have both updated their approaches to address these challenges.
  • UNESCO promotes international collaboration and observatories to track the ethical implications of AI applications across countries.
  • The European Union advances AI regulation through policies that ensure consistent standards across member states.

Future progress in AI ethics will depend on collaboration between business leaders, AI researchers, government officials, and civil society. By embedding moral principles and ethics guidelines into AI development, organizations can reduce potential risks and create trustworthy AI that supports human dignity and responsible AI innovation.

Cem Dilmegani, Principal Analyst
Cem has been the principal analyst at AIMultiple since 2017. AIMultiple informs hundreds of thousands of businesses every month (per Similarweb), including 55% of the Fortune 500.

Cem's work has been cited by leading global publications including Business Insider, Forbes, Washington Post, global firms like Deloitte, HPE and NGOs like World Economic Forum and supranational organizations like European Commission. You can see more reputable companies and resources that referenced AIMultiple.

Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised enterprises on their technology decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.

He led technology strategy and procurement at a telco while reporting to the CEO. He also led the commercial growth of the deep tech company Hypatos, which grew from zero to seven-figure annual recurring revenue and a nine-figure valuation within two years. Cem's work at Hypatos was covered by leading technology publications such as TechCrunch and Business Insider.

Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.
Researched by Sıla Ermut, Industry Analyst
Sıla Ermut is an industry analyst at AIMultiple focused on email marketing and sales videos. She previously worked as a recruiter in project management and consulting firms. Sıla holds a Master of Science degree in Social Psychology and a Bachelor of Arts degree in International Relations.
