Though artificial intelligence is changing how businesses work, there are concerns about how it may influence our lives. This is not just an academic or societal problem but also a reputational risk: no company wants to be marred by a data or AI ethics scandal.
Explore insights into the ethical issues that arise with the use of AI, examples of AI misuse, and the four key principles of ethical AI.
Algorithmic bias
Algorithms and training data can carry biases, just as humans do, because humans create both. These biases prevent AI systems from making fair decisions. Bias typically enters AI systems in two ways:
- Developers may unknowingly build their own biases into AI systems.
- Historical data used to train AI algorithms may not represent the entire population fairly.
Real-life example:
Large language models (LLMs) are increasingly used in workplaces to improve efficiency and fairness, but they may also reproduce or amplify social biases. The Silicon Ceiling study examines the impact of LLMs on hiring by auditing race and gender bias in OpenAI’s GPT-3.5, drawing on traditional resume audit methods.
Researchers conduct two studies using names associated with different races and genders: resume evaluation and resume generation. In Study 1, GPT scores resumes with varied names across multiple occupations and evaluation criteria, revealing stereotype-based biases. In Study 2, GPT generates fictitious resumes, showing systematic differences: women’s resumes reflect less experience, while Asian and Hispanic resumes include immigrant markers.
These findings add to evidence of bias in LLMs, particularly in hiring contexts.1
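A minimal sketch of this kind of name-swap audit is shown below; `score_resume` is a hypothetical placeholder for a call to whichever LLM is being audited, and the resume text, name sets, and scoring scale are purely illustrative.

```python
from statistics import mean

def score_resume(resume_text: str, candidate_name: str) -> float:
    """Hypothetical stub: replace with a call to the LLM under audit,
    asking it to rate the resume (e.g., on a 1-10 scale) for the named candidate."""
    return 5.0  # placeholder score

RESUME = "10 years of experience as a financial analyst; MBA; managed a team of five."

# Name sets signaling different demographic groups (illustrative only).
NAME_GROUPS = {
    "group_a": ["Emily Walsh", "Greg Olson"],
    "group_b": ["Lakisha Robinson", "Jamal Carter"],
}

def audit(resume: str, name_groups: dict) -> dict:
    """Average model score per group for the identical resume text."""
    return {
        group: mean(score_resume(resume, name) for name in names)
        for group, names in name_groups.items()
    }

# Large, consistent gaps between group averages indicate name-based bias.
print(audit(RESUME, NAME_GROUPS))
```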
Building ethical and responsible AI requires eliminating biases from AI systems. Yet only 47% of organizations test for bias in data, models, and human use of algorithms.2
Eliminating every bias in AI systems is nearly impossible, given the many human biases that exist and the ongoing discovery of new ones, but minimizing them is a realistic goal for businesses.
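One simple screening test organizations can run on any scoring or selection system is the four-fifths (80%) rule on group selection rates. The sketch below uses made-up numbers, not data from any real system.

```python
# Four-fifths (80%) rule: each group's selection rate should be at least 80%
# of the most-favored group's rate. All numbers below are illustrative.
selections = {  # group -> (number selected, number of applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: selected / total for group, (selected, total) in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    verdict = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {verdict}")
```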
Autonomous things
Autonomous Things (AuT) are devices and machines that perform specific tasks autonomously, without human intervention, such as self-driving cars, drones, and robots. Since robot ethics is a broad topic, we focus here on the ethical issues arising from self-driving vehicles and drones.
Self-driving cars
The autonomous vehicles market was valued at $54 billion in 2019 and is projected to reach $557 billion by 2026.3 However, autonomous vehicles raise serious ethical questions, and people and governments still debate who is liable and accountable when they fail.
Real-life example:
For example, in 2018, an Uber self-driving car hit a pedestrian who later died at a hospital.4 The accident was recorded as the first pedestrian death caused by a self-driving car.
After investigations by Arizona police and the US National Transportation Safety Board (NTSB), prosecutors decided that the company was not criminally liable for the pedestrian’s death, because the safety driver had been distracted by her cell phone; police reports described the accident as “completely avoidable.”
Lethal Autonomous Weapons (LAWs)
Lethal autonomous weapons (LAWs) are weapon systems that can identify and engage targets on their own based on programmed rules. Such systems have existed for decades, particularly in defensive applications like mines, missile defense, sentry systems, and loitering munitions.
More recent platforms include land and sea vehicles with autonomous capabilities, mainly for reconnaissance but sometimes with offensive functions.
Real-life example:
While artificial intelligence is not required for autonomy, it can enhance autonomous systems by enabling adaptive decision-making or assisting human operators. The United Nations has consistently opposed LAWs: since 2018, Secretary-General António Guterres has called them politically unacceptable and morally repugnant, urging their prohibition.
In 2023, he reiterated the need for a legally binding international instrument to ban fully autonomous weapons and regulate others, citing serious humanitarian, legal, and human rights risks. UN human rights experts have echoed these concerns and supported a global ban.5
Unemployment and income inequality due to automation
AI-driven automation is expected to significantly reshape labor markets, contributing to short-term unemployment pressures and widening income inequality if left unmanaged.
Current projections suggest that 15-25% of jobs will face significant disruption by 2025-2027, with 5-10% net job displacement after new roles are created.
At the same time, AI complements human labor in areas such as decision-making, reasoning, and creativity, shifting demand toward higher-value skills. With over 40% of workers needing substantial upskilling by 2030, unequal access to retraining risks deepening income inequality between those who can adapt to AI-enabled roles and those who cannot. Read AI job loss for more predictions on the effect of AI on the current job market.
Misuses of AI
Surveillance practices limiting privacy
“Big Brother is watching you.” The quote comes from George Orwell’s dystopian novel 1984. Though written as fiction, it may have become reality as governments deploy AI for mass surveillance, and the use of facial recognition technology in surveillance systems raises concerns about privacy rights.
According to the AI Global Surveillance (AIGS) Index, at least 75 of the 176 countries it covers deploy AI surveillance systems, and liberal democracies are major users of AI surveillance.6
The same study shows that 51% of advanced democracies deploy AI surveillance systems, compared to 37% of closed autocratic states. However, this difference is likely due to the wealth gap between these two groups of countries.
From an ethical perspective, the important question is whether governments are abusing the technology or using it lawfully. “Orwellian” surveillance methods are against human rights.
Real-life examples:
Some tech giants have also voiced ethical concerns about AI-powered surveillance. For example, Microsoft President Brad Smith published a blog post calling for government regulation of facial recognition.7
Similarly, IBM stopped offering facial recognition technology for mass surveillance due to its potential for misuse, such as racial profiling, which violates fundamental human rights.8
Manipulation of human judgment
AI-powered analytics can provide actionable insights into human behavior, yet abusing analytics to manipulate human decisions is ethically wrong.
Real-life example:
Cambridge Analytica harvested American voters’ Facebook data, sold it to political campaigns, and provided assistance and analytics to the 2016 presidential campaigns of Ted Cruz and Donald Trump.
Information about the data breach was disclosed in 2018, and the Federal Trade Commission fined Facebook $5 billion due to its privacy violations.9
Proliferation of deepfakes
Deepfakes are synthetically generated images or videos in which a person’s likeness is replaced with someone else’s.
Though about 96% of deepfakes are pornographic videos, with more than 134 million views across the top four deepfake pornography websites, society’s greater ethical concern is how deepfakes can be used to misrepresent political leaders’ speeches.10
Creating a false narrative using deepfakes can erode people’s trust in the media, which is already at an all-time low.11 This mistrust is dangerous for societies, since mass media remains governments’ primary channel for informing the public about emergencies (e.g., a pandemic).
Real-life example:
Militant and extremist groups are increasingly experimenting with artificial intelligence, raising growing security concerns. Experts warn that AI can help these groups scale recruitment, propaganda, deepfakes, and cyberattacks, even with limited resources. Many groups have already used AI-generated images, videos, audio, and translations to spread disinformation and recruit new members.
While their use of advanced AI remains limited compared to state actors, the risks are expected to increase as AI tools become cheaper and more accessible. Governments and lawmakers are pushing for stronger oversight, information-sharing with AI companies, and regular threat assessments to counter the malicious use of AI by extremist groups.
Artificial general intelligence (AGI) / Singularity
Our analysis of over 8,500 predictions from scientists, entrepreneurs, and the wider community suggests that most experts view AGI as inevitable. Building on this belief, recent surveys of AI researchers estimate its arrival around 2040, a notable shift from earlier forecasts closer to 2060, while entrepreneurs are even more optimistic, projecting timelines near 2030.
The path to AGI remains uncertain, with no scientific consensus on whether it will emerge by scaling current architectures, such as transformers, or by developing fundamentally new approaches, nor on how AGI should ultimately be validated.
The prospect of the Singularity raises ethical concerns about the value of human life as machines surpass human intelligence.
Practical dilemmas, such as how self-driving cars should choose between harming passengers or pedestrians, highlight unresolved moral questions that must be addressed before widespread deployment. More broadly, the rise of superintelligent systems challenges human dominance and raises questions about the rights, responsibilities, and moral frameworks of artificial beings.
Robot ethics
Robot ethics, or roboethics, deals with how humans design, use, and treat robots. Debates on this topic have existed since the 1940s, mainly questioning whether robots should have rights comparable to those of humans and animals. As AI capabilities grow, these questions are becoming more important, and institutes like AI Now are dedicated to exploring them with academic rigor.12
Author Isaac Asimov was the first to propose laws for robots, introducing the Three Laws of Robotics in his short story “Runaround”:13
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
- A robot must protect its existence as long as such protection does not conflict with the First or Second Law.
Generative AI-specific ethical concerns
The ethics of generative AI is a relatively new area that gained attention with the release of various generative models, particularly OpenAI’s ChatGPT. ChatGPT quickly gained popularity for its ability to create convincing content across a wide range of subjects, and with that popularity came genuine ethical concerns.
Truthfulness & accuracy
Generative AI uses machine learning techniques to generate new content, which can contain inaccuracies, known as hallucinations.
Language models have recently become more persuasive and eloquent, but this fluency also increases their potential to spread misinformation or fabricate statements.
Figure 1: Trustworthy generative foundation models from 2022 to 2025.14
Real-life example:
Arve Hjalmar Holmen, who has no criminal history or public profile, prompted ChatGPT to describe who he was and received a completely fabricated and defamatory response: the chatbot falsely stated that he had murdered two of his children. Holmen filed a formal complaint against OpenAI, the company behind ChatGPT.
Holmen argues that the false claim could seriously harm his reputation and personal life, highlighting broader concerns about the reliability of generative AI systems. The risks of AI hallucinations, particularly when chatbots generate false information about real people, raise questions about accountability, accuracy, and safeguards in widely used AI tools.15
Copyright ambiguities
Another ethical consideration with generative AI is the uncertainty surrounding the authorship and copyright of content it creates. This raises questions about who holds the rights to such works and how they can be utilized. The issue of copyright centers around three main questions:
- Should works created by AI be eligible for copyright protection?
- Who would have the ownership rights over the created content?
- Can copyrighted data be used for training purposes?
How to navigate these dilemmas?
These are hard questions, and innovative, sometimes controversial solutions such as universal basic income may be necessary to address them. Numerous initiatives and organizations aim to minimize the potential negative impact of AI.
For instance, the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich conducts AI research across various domains such as mobility, employment, healthcare, and sustainability.
Consider UNESCO policies & best practices
Here is a list of key UNESCO policies:
Data Governance Policy
This policy emphasizes the importance of robust frameworks for data collection, use, and governance to ensure individual privacy and mitigate risks. It encourages creating quality datasets for AI training, adopting open and trustworthy datasets, and implementing effective data protection strategies.
For example, establishing standardized datasets for healthcare AI ensures accuracy and reduces biases.
Ethical AI Governance
Governance mechanisms must be inclusive, multidisciplinary, and multilateral, incorporating diverse stakeholders like affected communities, policymakers, and AI experts. This approach extends to enforcing accountability and providing redress for harms.
For instance, ensuring fairness in AI hiring systems requires ongoing audits to address bias.
Education and research policy
This policy promotes AI literacy and ethical awareness by integrating AI and data education into curricula. It also prioritizes the participation of marginalized groups and advances ethical AI research.
For example, schools could teach AI basics alongside coding and critical thinking, equipping future generations to navigate AI’s societal impacts.
Health and social well-being
This policy encourages the deployment of AI to improve healthcare, address global health risks, and advance mental health. It highlights the need for AI applications that are medically proven, safe, and efficient.
Gender equality in AI
This policy aims to reduce gender disparities in AI by supporting women in STEM fields and avoiding biases in AI systems.
For example, allocating funds to mentor women in AI research and addressing gender biases in job recruitment algorithms.
Environmental sustainability
This policy focuses on assessing and mitigating the environmental impact of AI, such as its carbon footprint and resource consumption. Incentivizing the use of AI in climate prediction and mitigation is encouraged.
For instance, deploying AI to monitor deforestation and optimize renewable energy grids.
Readiness assessment methodology (RAM)
This technique helps member states evaluate their preparedness to implement ethical AI policies by assessing legal frameworks, infrastructure, and resource availability.
For example, RAM can identify gaps in AI regulation and infrastructure, guiding nations toward ethical AI adoption.
Ethical impact assessment (EIA)
This method assesses the potential social, environmental, and economic impacts of AI projects. By collaborating with affected communities, an EIA helps ensure that resources are allocated to prevent harm.
For instance, an EIA could identify the risks of bias in a predictive policing system and recommend mitigations.
Global observatory on AI ethics
The observatory is a digital platform that offers analyses of AI’s ethical challenges and monitors the global implementation of UNESCO’s recommendations.
For example, the observatory could provide reports on AI’s societal impacts in various countries.
AI ethics training and public awareness
This approach encourages accessible education and civic engagement to enhance public understanding of AI ethics.
For instance, campaigns to educate users about privacy risks in AI-powered social media platforms can build informed digital citizens.
Figure 2: UNESCO ethical AI policy areas16
Best practices recommended by UNESCO:
- Inclusive and multi-stakeholder governance
  - Involve diverse stakeholders, including marginalized communities, in policy creation and AI governance.
  - Use multidisciplinary teams to ensure decisions are well-rounded and equitable.
  - Example: Holding public consultations when deploying surveillance AI systems.
- Transparency and explainability
  - Develop AI systems with interpretable decision-making processes.
  - Balance transparency with safety and privacy concerns.
  - Example: Providing users with plain-language explanations of how an AI model makes decisions.
- Sustainability assessments
  - Regularly evaluate AI systems for their environmental impact, including energy consumption and carbon footprint.
  - Example: Reducing energy use when training large machine-learning models (a rough estimation sketch follows this list).
- AI literacy programs
  - Educate the public and policymakers on AI’s ethical implications.
  - Incorporate AI ethics into educational curricula at all levels.
  - Example: Workshops on privacy risks in AI-powered social media.
- Ongoing audits and accountability mechanisms
  - Establish regular audits of AI systems to detect and address biases, inaccuracies, or ethical breaches.
  - Ensure there is a clear process for redress in cases of harm caused by AI.
  - Example: Periodic reviews of AI recruitment tools to prevent gender bias.
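As a rough illustration of the sustainability assessment mentioned above, the back-of-the-envelope sketch below converts GPU hours into energy and CO2; the GPU power draw, data-center overhead (PUE), and grid carbon intensity are assumed values that should be replaced with measured ones.

```python
def training_co2_kg(num_gpus: int, hours: float,
                    gpu_power_kw: float = 0.4,       # assumed average draw per GPU (kW)
                    pue: float = 1.5,                 # assumed data-center overhead
                    grid_kgco2_per_kwh: float = 0.4) -> float:
    """Rough CO2 estimate for a training run: energy used times grid carbon intensity."""
    energy_kwh = num_gpus * hours * gpu_power_kw * pue
    return energy_kwh * grid_kgco2_per_kwh

# Example: 64 GPUs running for two weeks (~336 hours) under the assumptions above.
print(f"{training_co2_kg(num_gpus=64, hours=336):,.0f} kg CO2e")
```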
Check out responsible AI frameworks
Here are some responsible AI frameworks to overcome ethical dilemmas like AI bias:
Transparency
AI developers have an ethical obligation to be transparent in a structured, accessible way, since AI technology has the potential to break laws and harm people. Knowledge sharing helps make AI accessible and transparent. Some initiatives include:
- AI research, even if it takes place in private, for-profit companies, tends to be publicly shared
- OpenAI was founded as a non-profit AI research company by Elon Musk, Sam Altman, and others to develop open-source AI beneficial to humanity. However, by exclusively licensing one of its models to Microsoft rather than releasing its source code, OpenAI reduced its transparency.
- Google developed TensorFlow, a widely used open-source machine learning library, to facilitate the adoption of AI.
- AI researchers Ben Goertzel and David Hart created OpenCog as an open-source framework for AI development.
- Google and other tech giants maintain AI-specific blogs17 that let them share their AI knowledge with the world.
Explainability
To address the ethical issues that arise from inaccurate or opaque predictions, AI developers and businesses need to explain how their algorithms arrive at their outputs. Various technical approaches can reveal how these algorithms reach their conclusions and which factors influence their decisions.
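As a minimal illustration of one such approach, the sketch below uses permutation importance from scikit-learn (one of several model-agnostic explanation methods; the dataset and model are illustrative) to show which input features most influence a model’s predictions.

```python
# Model-agnostic explanation sketch: permutation importance measures how much
# the model's test score drops when each feature is randomly shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features that most influence the model's decisions.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")
```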
Fairness and inclusiveness
AI research tends to be done by male researchers in wealthy countries, which contributes to the biases in AI models. Increasing the diversity of the AI community is key to improving model quality and reducing bias.
This can help solve problems such as unemployment and discrimination, which can be caused by automated decision-making systems.
Alignment
Numerous countries, companies, and universities are building AI systems, yet in most areas there is no legal framework adapted to recent developments in AI. Modernizing legal frameworks at both the national and supranational levels (e.g., the UN) would clarify the path to ethical AI development. Pioneering companies should spearhead these efforts to create clarity for their industry.
Use AI ethics frameworks and tools
Academics and organizations are increasingly focusing on ethical frameworks to guide the use of AI technologies. These frameworks address the moral implications of AI across its lifecycle, including training AI systems, developing AI models, and deploying intelligent systems.
Here is a list of tools that can help you apply AI ethics practices:
AI Governance Tools
AI governance tools ensure that AI applications are developed and deployed in alignment with ethical principles. These tools help organizations monitor and control AI programs throughout the AI lifecycle, address risks related to unethical outcomes, and support trustworthy AI.
By implementing robust AI governance practices, companies can better manage potential risks and achieve AI compliance with regulatory bodies.
Responsible AI
Responsible AI tools align AI technologies with moral principles and human values. These initiatives ensure that AI systems respect human dignity and protect civil liberties while considering the societal impact of new technologies.
Private companies are increasingly adopting responsible AI practices to address ethical challenges and mitigate security risks.
LLMOps
LLMOps tools support the operational practices surrounding large language models. As AI technologies become more sophisticated, the need for specialized tools to manage these models has grown. LLMOps focuses on maintaining the ethical use of large language models, ensuring they do not perpetuate existing inequalities or contribute to issues like deepfakes.
MLOps
MLOps (machine learning operations) tools help integrate AI models into production while ensuring alignment with ethical standards. This practice emphasizes human oversight of autonomous systems, particularly in critical areas such as healthcare and criminal justice, and helps organizations manage the societal impact of intelligent systems.
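A minimal sketch of the kind of human-oversight gate this implies is shown below; the confidence threshold and `route_decision` helper are hypothetical, and in regulated domains the threshold and review workflow would be set by policy rather than hard-coded.

```python
REVIEW_THRESHOLD = 0.85  # assumed policy value; low-confidence cases go to a human

def route_decision(case_id: str, probability: float) -> str:
    """Route automated decisions: act only when the model is confident,
    otherwise escalate the case to a human reviewer."""
    if probability >= REVIEW_THRESHOLD:
        return f"{case_id}: automated decision (p={probability:.2f})"
    return f"{case_id}: escalated to human review (p={probability:.2f})"

# Illustrative outputs with made-up model confidences.
print(route_decision("loan-001", 0.93))
print(route_decision("loan-002", 0.61))
```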
Data Governance
Data governance is crucial for the ethical use of AI, involving responsible management of the data used to train AI systems. Effective data governance ensures data protection and considers the social implications of data usage, supporting ethical considerations across the AI lifecycle. This is particularly important as big tech companies shape the future of AI technologies.
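As a small illustration of one data-governance control, the sketch below masks common personally identifiable information (email addresses and phone numbers) before text enters a training corpus; the regular expressions are simplistic placeholders, not a complete PII solution.

```python
import re

# Simplistic PII masking before text enters a training corpus (illustrative only).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d ()-]{7,}\d")

def mask_pii(text: str) -> str:
    """Replace email addresses and phone-number-like strings with placeholder tags."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(mask_pii("Contact Jane at jane.doe@example.com or +1 555 123 4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
```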
Reference Links
Cem's work has been cited by leading global publications including Business Insider, Forbes, Washington Post, global firms like Deloitte, HPE and NGOs like World Economic Forum and supranational organizations like European Commission. You can see more reputable companies and resources that referenced AIMultiple.
Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised enterprises on their technology decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.
He led technology strategy and procurement of a telco while reporting to the CEO. He has also led commercial growth of deep tech company Hypatos that reached a 7 digit annual recurring revenue and a 9 digit valuation from 0 within 2 years. Cem's work in Hypatos was covered by leading technology publications like TechCrunch and Business Insider.
Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.