Though artificial intelligence is changing how businesses work, there are concerns about how it may influence our lives. This is not just an academic or societal concern but also a reputational risk: no company wants to be marred by a data or AI ethics scandal.
This article examines the ethical issues that arise with the use of AI, real-life examples of AI misuse, and four key principles for building ethical AI.
What is AI ethics?
AI ethics is the study of the moral principles guiding the design, development, and deployment of artificial intelligence. It addresses issues like fairness, transparency, privacy, and accountability to ensure AI systems benefit society, avoid harm, and respect human rights while mitigating biases and unintended consequences.
What are the ethical dilemmas of artificial intelligence?
Algorithmic bias
AI algorithms and training data may carry human biases because both are produced by humans. These biases prevent AI systems from making fair decisions. Bias enters AI systems for two main reasons:
- Developers may program biased AI systems without even noticing.
- The historical data used to train AI algorithms may not represent the whole population fairly.
Real-life example
Biased AI algorithms can lead to discrimination against minority groups. For instance, Amazon shut down its AI recruiting tool after a year of use.1 Amazon's developers stated that the tool penalized women: about 60% of the candidates it selected were male, a pattern that traced back to Amazon's historical recruitment data.
The Dutch childcare benefits scandal, or “toeslagenaffaire,” occurred in 2013 when tax authorities used an algorithm to flag potential fraud based on factors like dual nationality and low income. This resulted in thousands of families being wrongly accused and facing debts over €100,000. Victims experienced severe emotional distress, and more than 1,000 children were placed in foster care. 2
The scandal, exposed in 2019, revealed systemic bias and a lack of oversight. In response, the Dutch government resigned, and a new algorithm regulator was proposed. This incident raised concerns about the dangers of algorithmic decision-making in the public sector.
Building ethical and responsible AI requires removing bias from AI systems. Yet only 47% of organizations test for bias in data, models, and human use of algorithms.3
Eliminating every bias is nearly impossible, given the many known human biases and the ongoing discovery of new ones, so minimizing them is the practical goal for a business.
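A practical first step is to measure how well the training data represents the population the system will serve. The sketch below is a minimal, hypothetical example assuming a pandas DataFrame of historical hiring data with a `gender` column; the column names, population shares, and tolerance are illustrative, not taken from any specific toolkit.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          population_shares: dict[str, float],
                          tolerance: float = 0.10) -> pd.DataFrame:
    """Compare each group's share of the training data with its share
    of the target population, flagging under-represented groups."""
    data_shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        data_share = float(data_shares.get(group, 0.0))
        rows.append({
            "group": group,
            "population_share": pop_share,
            "data_share": round(data_share, 3),
            "under_represented": data_share < pop_share - tolerance,
        })
    return pd.DataFrame(rows)

# Hypothetical recruitment data, heavily skewed toward one group,
# much like the pattern behind the Amazon example above.
df = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(representation_report(df, "gender", {"male": 0.5, "female": 0.5}))
```

A report like this does not remove bias by itself, but it makes under-representation visible before a model is trained on skewed data.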
Autonomous things
Autonomous Things (AuT) are devices and machines that perform specific tasks autonomously, without human interaction. They include self-driving cars, drones, and robots. Since robot ethics is a broad topic, we focus here on the ethical issues raised by self-driving vehicles and drones.
Self-driving cars
The autonomous vehicles market was valued at $54 billion in 2019 and is projected to reach $557 billion by 2026.4 However, autonomous vehicles raise serious ethical questions: people and governments still dispute who is liable and accountable when these vehicles cause harm.
Real-life example
For example, in 2018, an Uber self-driving car hit a pedestrian who later died at a hospital.5 The accident was recorded as the first pedestrian death involving a self-driving car.
After investigations by the Arizona Police Department and the US National Transportation Safety Board (NTSB), prosecutors decided that the company was not criminally liable for the pedestrian's death, because the safety driver was distracted by her cell phone; police reports labeled the accident "completely avoidable."
Lethal Autonomous Weapons (LAWs)
LAWs are AI-powered weapons that can identify and engage targets on their own based on programmed rules. There has been debate over the ethics of using AI in military weapons. In 2018, the United Nations held discussions on the topic, with countries like South Korea, Russia, and the U.S. supporting the use of LAWs while others raised concerns.
Real-life example
Arguments against the use of LAWs are widely shared by non-governmental organizations. For instance, the Campaign to Stop Killer Robots wrote a letter warning about the threat of an artificial intelligence arms race. Renowned figures such as Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Jaan Tallinn, and Demis Hassabis signed the letter.
Unemployment and income inequality due to automation
This is currently the most widespread fear about AI. According to a CNBC survey, 27% of US citizens believe AI will eliminate their jobs within five years;6 the figure rises to 37% among those aged 18 to 24.
Though these numbers may not look huge for "the greatest AI fear," remember that this is a prediction for only the next five years.
According to McKinsey estimates, intelligent agents and robots could replace as much as 30% of the world's current human labor by 2030. Depending on the adoption scenario, automation will displace between 400 and 800 million jobs, requiring as many as 375 million people to switch job categories entirely.
Comparing society’s 5-year expectations and McKinsey’s forecast for 10 years shows that people’s expectations of unemployment are more pronounced than industry experts’ estimates. However, both point to a significant share of the population being unemployed due to advances in AI.
Another concern about the impacts of AI-driven automation is rising income inequality. A study found that automation has reduced or degraded the wages of US workers specialized in routine tasks by 50% to 70% since 1980.
Misuses of AI
Surveillance practices limiting privacy
"Big Brother is watching you." This quote comes from George Orwell's dystopian novel 1984. Though written as fiction, it may have become reality as governments deploy AI for mass surveillance, and the integration of facial recognition technology into surveillance systems raises serious privacy concerns.
According to the AI Global Surveillance (AIGS) Index, which covers 176 countries, liberal democracies are major users of AI surveillance: 51% of advanced democracies deploy AI surveillance systems, compared to 37% of closed autocratic states.7 However, this is likely due to the wealth gap between these two groups of countries.
From an ethical perspective, the important question is whether governments are abusing the technology or using it lawfully. “Orwellian” surveillance methods are against human rights.
Real-life examples
Some tech giants have also raised ethical concerns about AI-powered surveillance. For example, Microsoft President Brad Smith published a blog post calling for government regulation of facial recognition.8
Also, IBM stopped offering the technology for mass surveillance due to its potential for misuse, such as racial profiling, which violates fundamental human rights.9
Manipulation of human judgment
AI-powered analytics can provide actionable insights into human behavior, yet abusing analytics to manipulate human decisions is ethically wrong. The best-known example is the Facebook–Cambridge Analytica data scandal.10
Real-life example
Cambridge Analytica harvested American voters' data from Facebook and sold it to political campaigns, providing assistance and analytics to the 2016 presidential campaigns of Ted Cruz and Donald Trump. The data breach was disclosed in 2018, and the Federal Trade Commission fined Facebook $5 billion for its privacy violations.11
Proliferation of deepfakes
Deepfakes are synthetically generated images or videos in which a person is replaced with someone else's likeness.
Though about 96% of deepfakes are pornographic videos, with over 134 million views on the top four deepfake pornographic websites, the greater danger to society is their potential to misrepresent political leaders' speeches.12
False narratives built on deepfakes can erode people's trust in the media, which is already at an all-time low.13 This mistrust is dangerous because mass media remains governments' primary channel for informing people about emergencies (e.g., a pandemic).
Artificial general intelligence (AGI) / Singularity
A machine capable of human-level understanding could be a threat to humanity, and such research may need to be regulated. Although most AI experts do not expect a singularity (AGI) any time soon (i.e., before 2060), its ethical implications grow more important as AI capabilities increase.
When people talk about AI, they usually mean narrow AI systems, also called weak AI, which are designed to handle a single or limited task. AGI, by contrast, is the form of artificial intelligence we see in science fiction books and movies: machines that can understand or learn any intellectual task a human being can.
Robot ethics
Robot ethics, or roboethics, deals with how humans design, use, and treat robots. Debates on this topic have existed since the 1940s, mainly questioning whether robots should have rights like humans and animals. As AI capabilities grow, these questions become more important, and institutes like AI Now are dedicated to exploring them with academic rigor.
Author Isaac Asimov was the first to propose laws for robots, in his short story "Runaround," where he introduced the Three Laws of Robotics:14
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Generative AI-specific ethical concerns
Generative AI ethics is a relatively new field that gained attention with the release of various generative models, particularly OpenAI's ChatGPT. ChatGPT quickly became popular for its ability to create authentic-sounding content on a wide range of subjects, and with that popularity came genuine ethical concerns.
Truthfulness & Accuracy
Generative AI employs machine learning techniques to generate new content, which may contain inaccuracies (see Figure 1). Additionally, pre-trained language models like ChatGPT cannot update themselves with new information after training.
Language models have become remarkably persuasive and eloquent, but this proficiency also makes them more effective at spreading false information or fabricating statements.
Figure 1. On average, most generative models are truthful only 25% of the time, according to the TruthfulQA benchmark test.

Source: Stanford University Artificial Intelligence Index Report 2022.15
Copyright ambiguities
Another ethical consideration with generative AI is the uncertainty surrounding the authorship and copyright of content created by the AI. This raises questions about who holds the rights to such works and how they can be utilized. The issue of copyright centers around three main questions:
- Should works created by AI be eligible for copyright protection?
- Who holds the ownership rights over the generated content?
- Can copyrighted data be used to train generative models?
Explore more on the two key concerns associated with the intellectual property (IP) rights of content in the context of generative AI.
Misuse of generative AI
- Education: Generative AI can be misused to create false or inaccurate information that is presented as true, so students may receive incorrect information or be misled. Students can also use generative AI tools like ChatGPT to do their homework for them across a wide range of subjects, raising academic integrity concerns.
How to navigate these dilemmas?
These are hard questions, and innovative, even controversial, solutions such as universal basic income may be necessary to address them. There are also numerous initiatives and organizations working to minimize the potential negative impact of AI.
For instance, the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich conducts AI research across various domains such as mobility, employment, healthcare, and sustainability.
Consider UNESCO policies & best practices
Here is a list of key UNESCO policies:
Data Governance Policy
This policy emphasizes the importance of robust frameworks for data collection, use, and governance to ensure individual privacy and mitigate risks. It encourages creating quality datasets for AI training, adopting open and trustworthy datasets, and implementing effective data protection strategies.
For example, establishing standardized datasets for healthcare AI ensures accuracy and reduces biases.
Ethical AI Governance
Governance mechanisms must be inclusive, multidisciplinary, and multilateral, incorporating diverse stakeholders like affected communities, policymakers, and AI experts. This approach extends to enforcing accountability and providing redress for harms.
For instance, ensuring fair hiring AI systems requires ongoing audits to address bias.
Education and research policy
It promotes AI literacy and ethical awareness by integrating AI and data education into curricula. It also prioritizes marginalized groups’ participation and advances ethical AI research.
For example, schools could teach AI basics alongside coding and critical thinking, equipping future generations to navigate AI’s societal impacts.
Health and social wellbeing
This policy encourages deploying AI to improve healthcare, address global health risks, and advance mental health. It highlights the need for medically proven, safe, and efficient AI applications.
For instance, AI-driven diagnostic tools for diseases like diabetes should undergo rigorous testing to ensure reliability.
Gender equality in AI
This policy aims to reduce gender disparities in AI by supporting women in STEM fields and avoiding gender biases in AI systems.
For example, allocating funds to mentor women in AI research and addressing gender biases in job recruitment algorithms.
Environmental sustainability
This policy focuses on assessing and mitigating the environmental impact of AI, such as its carbon footprint and resource consumption. Incentivizing the use of AI in climate prediction and mitigation is encouraged.
For instance, deploying AI to monitor deforestation and optimize renewable energy grids.
Readiness assessment methodology (RAM)
This technique helps member states evaluate their preparedness to implement ethical AI policies by assessing legal frameworks, infrastructure, and resource availability.
For example, RAM can identify gaps in AI regulation and infrastructure, guiding nations toward ethical AI adoption.
Ethical impact assessment (EIA)
This method assesses the potential social, environmental, and economic impacts of AI projects. Collaborating with affected communities, EIA ensures resource allocation for harm prevention.
For instance, an EIA could identify the risks of bias in a predictive policing system and recommend mitigations.
Global observatory on AI ethics
It refers to a digital platform offering analyses of AI’s ethical challenges and monitoring global implementation of UNESCO recommendations.
For example, the observatory could provide reports on AI’s societal impacts in various countries.
AI ethics training and public awareness
This approach encourages accessible education and civic engagement to enhance public understanding of AI ethics.
For instance, campaigns to educate users about privacy risks in AI-powered social media platforms can build informed digital citizens.

Best practices recommended by UNESCO:
- Inclusive and multi-stakeholder governance
  - Involve diverse stakeholders, including marginalized communities, in policy creation and AI governance.
  - Use multidisciplinary teams to ensure decisions are well-rounded and equitable.
  - Example: Holding public consultations when deploying surveillance AI systems.
- Transparency and explainability
  - Develop AI systems with interpretable decision-making processes.
  - Balance transparency with safety and privacy concerns.
  - Example: Providing users with plain-language explanations of how an AI model makes decisions.
- Sustainability assessments
  - Regularly evaluate AI systems for their environmental impact, including energy consumption and carbon footprint.
  - Example: Reducing energy use in training large machine-learning models.
- AI literacy programs
  - Educate the public and policymakers on AI's ethical implications.
  - Incorporate AI ethics into educational curricula at all levels.
  - Example: Workshops on privacy risks in AI-powered social media.
- Ongoing audits and accountability mechanisms
  - Establish regular audits for AI systems to detect and address biases, inaccuracies, or ethical breaches.
  - Ensure there is a clear process for redress in cases of harm caused by AI.
  - Example: Periodic reviews of AI recruitment tools to prevent gender bias (see the audit sketch after this list).
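To illustrate what such a periodic review might look like in code, here is a minimal sketch that applies the "four-fifths rule," a common disparate-impact heuristic from US employment guidelines, to a recruitment model's logged decisions. The data, column names, and threshold below are hypothetical.

```python
import pandas as pd

def selection_rate_audit(decisions: pd.DataFrame, group_col: str,
                         selected_col: str, threshold: float = 0.8) -> dict:
    """Flag disparate impact: a group fails the four-fifths rule when its
    selection rate falls below `threshold` times the best group's rate."""
    rates = decisions.groupby(group_col)[selected_col].mean()
    best = rates.max()
    return {
        group: {"selection_rate": round(rate, 3),
                "impact_ratio": round(rate / best, 3),
                "flagged": rate / best < threshold}
        for group, rate in rates.items()
    }

# Hypothetical audit log of a recruitment model's decisions.
log = pd.DataFrame({
    "gender":   ["male"] * 100 + ["female"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 35 + [0] * 65,
})
print(selection_rate_audit(log, "gender", "selected"))
```

Running a check like this on a schedule, and acting on flagged groups, is one concrete way to operationalize the audit practice described above.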
Check out responsible AI frameworks
Here are some responsible AI frameworks to overcome ethical dilemmas like AI bias:
Transparency
AI developers have an ethical obligation to be transparent in a structured, accessible way, since AI technology has the potential to break laws and negatively affect the human experience. Knowledge sharing helps make AI accessible and transparent. Some initiatives include:
- AI research, even when it takes place in private, for-profit companies, tends to be shared publicly.
- OpenAI was founded as a non-profit AI research company by Elon Musk, Sam Altman, and others to develop open-source AI beneficial to humanity. However, by exclusively licensing one of its models to Microsoft rather than releasing its source code, OpenAI has reduced its level of transparency.
- Google developed TensorFlow, a widely used open-source machine learning library, to facilitate the adoption of AI.
- AI researchers Ben Goertzel and David Hart created OpenCog, an open-source framework for AI development.
- Google and other tech giants maintain AI-specific blogs to share their AI knowledge with the world.
Explainability
AI developers and businesses need to explain how their algorithms arrive at their predictions in order to address the ethical issues that inaccurate predictions create. Various technical approaches can show how an algorithm reached a given conclusion and which factors affected the decision.
We've covered explainable AI before; feel free to check it out.
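One widely used, model-agnostic technique is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn's `permutation_importance` on a synthetic dataset; the data and feature indices are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy:
# a larger drop means the model leans more heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Reports like this do not fully explain a model, but they give stakeholders a reproducible view of which inputs drive its decisions.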
Fairness and inclusiveness
AI research tends to be conducted by male researchers in wealthy countries, which contributes to bias in AI models. Increasing the diversity of the AI community is key to improving model quality and reducing bias. There are numerous initiatives, like this one supported by Harvard, to increase diversity within the community, but their impact has so far been limited.
This can help solve problems such as unemployment and discrimination which can be caused by automated decision-making systems.
Alignment
Numerous countries, companies, and universities are building AI systems, yet in most areas there is no legal framework adapted to recent developments in AI. Modernizing legal frameworks at both the national and supranational level (e.g., the UN) would clarify the path to ethical AI development. Pioneering companies should spearhead these efforts to create clarity for their industry.
Apply other strategies
Implement a data-centric approach
Shift the focus from solely improving AI models to prioritizing the quality, accuracy, and representativeness of the data used for training and testing. This includes regularly cleaning, validating, and updating datasets to minimize biases, improve performance, and ensure compliance with data protection regulations.
Explore more on data-centric approach to AI development.
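As a concrete illustration, the sketch below shows automated dataset checks that could run before each training cycle. The checks, column names, and thresholds are hypothetical examples, not a standard API.

```python
import pandas as pd

def validate_dataset(df: pd.DataFrame, label_col: str,
                     max_missing: float = 0.05,
                     min_class_share: float = 0.2) -> list[str]:
    """Run basic data-centric checks and return the issues found."""
    issues = []
    # 1. Missing values: large gaps often signal collection problems.
    for col, share in df.isna().mean().items():
        if share > max_missing:
            issues.append(f"{col}: {share:.0%} missing (max {max_missing:.0%})")
    # 2. Duplicates: repeated rows silently over-weight some examples.
    dup_share = df.duplicated().mean()
    if dup_share > 0:
        issues.append(f"{dup_share:.1%} duplicate rows")
    # 3. Label balance: a rare class may be learned poorly.
    for label, share in df[label_col].value_counts(normalize=True).items():
        if share < min_class_share:
            issues.append(f"label '{label}' covers only {share:.0%} of rows")
    return issues

df = pd.DataFrame({"age": [25, 30, None, 41], "hired": [1, 0, 0, 0]})
print(validate_dataset(df, "hired"))
```

Gating each training run on checks like these keeps the focus on data quality rather than on model tweaks alone.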
Build an AI inventory
Create and maintain a comprehensive inventory of all AI systems within your organization. This inventory should include detailed records of each system’s purpose, capabilities, associated risks, data sources, and compliance status. It serves as a critical tool for tracking AI assets, identifying potential gaps in governance, and enabling proactive risk management.
Discover how to build an AI inventory
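As a sketch of what one inventory entry might capture, the snippet below defines a hypothetical `AISystemRecord`; the fields mirror the items listed above and are not drawn from any specific standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization's AI inventory (hypothetical schema)."""
    name: str
    purpose: str
    risk_level: str            # e.g., "minimal", "limited", "high"
    data_sources: list[str] = field(default_factory=list)
    compliant: bool = False    # e.g., reviewed against the EU AI Act
    owner: str = "unassigned"

inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        purpose="Rank job applicants for recruiter review",
        risk_level="high",
        data_sources=["historical hiring data", "applicant CVs"],
        owner="hr-analytics",
    ),
]

# Surface high-risk systems that have not passed a compliance review.
for record in inventory:
    if record.risk_level == "high" and not record.compliant:
        print(f"Needs review: {record.name} (owner: {record.owner})")
```

Even a lightweight registry like this makes governance gaps visible and gives the compliance team (discussed next) a single place to start.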
Establish a cross-functional AI compliance team
Create a team of legal, technical, data governance, and operational experts to ensure AI compliance risks are addressed. The team should align AI systems with global regulations such as the EU AI Act and maintain ethical standards, transparency, and accountability. Regular audits and updates will help the organization adapt to new requirements.
Use AI ethics frameworks and tools
Academics and organizations are increasingly focusing on ethical frameworks to guide the use of AI technologies. These frameworks address the ethical implications of AI across its lifecycle, including training AI systems, developing AI models, and deploying intelligent systems.
One notable example is the hourglass model, which illustrates the interaction between organizations, AI systems, and the broader environment.17 It also provides a comprehensive task list for those aiming to promote AI ethics within their organizations.18
Figure 3: AI ethics frameworks

Here is a list of tools that can help you apply AI ethics practices:
AI Governance Tools
AI governance tools ensure that AI applications are developed and deployed in alignment with ethical principles. These tools help organizations monitor and control AI programs throughout the AI lifecycle, addressing risks related to unethical outcomes and supporting trustworthy AI. By implementing robust AI governance practices, companies can better manage potential risks and achieve AI compliance with regulatory bodies.
Responsible AI
Responsible AI tools align AI technologies with moral principles and human values. These initiatives ensure that AI systems respect human dignity and protect civil liberties while considering the societal impact of new technologies. Private companies are increasingly adopting responsible AI practices to address ethical challenges and mitigate security risks.
LLMOps
LLMOps tools cover the operational practices surrounding large language models. As AI technologies become more sophisticated, the need for specialized tools to manage these models has grown. LLMOps focuses on maintaining the ethical use of large language models, ensuring they do not perpetuate existing inequalities or contribute to issues like deepfakes.
MLOps
MLOps (Machine Learning Operations) tools integrate AI models into production while ensuring alignment with ethical standards. This practice emphasizes human oversight of autonomous systems, particularly in critical areas like health care and criminal justice. MLOps helps organizations manage the societal impact and consequences of intelligent systems.
Data Governance
Data governance is crucial for the ethical use of AI, involving responsible management of the data that trains AI systems. Effective data governance ensures data protection and considers the social implications of data usage, supporting ethical considerations across the AI lifecycle. This is particularly important as big tech companies shape the future of AI technologies.
If you are looking for AI vendors, check our data-driven vendor lists.
FAQs
What is the UNESCO Recommendation on the Ethics of AI?
The UNESCO Recommendation on the Ethics of AI, adopted in November 2021, calls for minimizing discriminatory and biased outcomes in AI systems while promoting fairness, transparency, accountability, and respect for human rights. It emphasizes creating institutional and legal frameworks to govern AI for the public good and outlines concrete policies for data governance, gender equality, and ethical AI use in various sectors. The Recommendation includes mechanisms for monitoring, evaluation, and implementation to drive meaningful change.
External sources
- 1. "Amazon scrapped 'sexist AI' tool." BBC News.
- 2. "AI: Decoded: A Dutch algorithm scandal serves a warning to Europe — The AI Act won't save us." POLITICO.
- 3. "How Responsible AI can improve business and preserve value." PwC.
- 4. "Autonomous Vehicle Market." Allied Market Research.
- 5. "Uber's self-driving operator charged over fatal crash." BBC News.
- 6. "This is the industry that has some of the happiest workers in America." CNBC.
- 7. "The Global Expansion of AI Surveillance." Carnegie Endowment for International Peace.
- 8. "Facial recognition technology: The need for public regulation and corporate responsibility." Microsoft On the Issues.
- 9. "IBM will no longer offer, develop, or research facial recognition technology." The Verge.
- 10. "Facebook–Cambridge Analytica data scandal." Wikipedia.
- 11. "FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook." Federal Trade Commission.
- 12. "Debating the ethics of deepfakes."
- 13. "News Media Credibility Rating Falls to a New Low."
- 14. "Three Laws of Robotics." Wikipedia.
- 15. "Artificial Intelligence Index Report 2022." Stanford AI Index. Accessed February 22, 2023.
- 16. "UNESCO's Recommendation on the Ethics of Artificial Intelligence: key facts." UNESCO, 2023. Retrieved November 26, 2024.
- 17. "The hourglass model — Artificial Intelligence Governance and Auditing." University of Turku.
- 18. "List of AI Governance Tasks — Artificial Intelligence Governance and Auditing." University of Turku.