As artificial intelligence changes how businesses work, concerns grow about how it may influence our lives. This is not only an academic or societal question but also a business risk: no company wants to be undermined by a data or AI ethics scandal that damages its reputation.
This article explores the ethical issues that arise with the use of AI, examples of misuse, and key principles to mitigate these problems.
Algorithmic bias
Algorithms and training data can carry biases because they are created by humans, who are biased themselves. These biases prevent AI systems from making fair decisions. Bias enters AI systems for two main reasons:
- Developers may build biased AI systems without noticing it.
- Historical data used to train AI algorithms may not accurately represent the entire population.
Real-life example:
Large language models (LLMs) are increasingly used in workplaces to improve efficiency and fairness, but they may also reproduce or amplify social biases. The Silicon Ceiling study examines the impact of LLMs on hiring by auditing race and gender bias in OpenAI’s GPT-3.5, drawing on traditional resume audit methods.
Researchers conduct two studies using names associated with different races and genders: resume evaluation and resume generation. In Study 1, GPT scores resumes with varied names across multiple occupations and evaluation criteria, revealing stereotype-based biases. In Study 2, GPT generates fictitious resumes, showing systematic differences: women’s resumes reflect less experience, while Asian and Hispanic resumes include immigrant markers.
These findings add to evidence of bias in LLMs, particularly in hiring contexts.1
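To make the audit design concrete, here is a minimal sketch of a name-swap resume audit. It assumes the openai Python client; the resume template, names, and prompt are illustrative placeholders, not the study's actual materials.

```python
# Minimal name-swap resume audit sketch (illustrative; not the Silicon Ceiling study's code).
# Assumes the openai Python client and the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical resume template: everything is identical except the name.
RESUME_TEMPLATE = """{name}
Software Engineer, 5 years of experience in Python and cloud infrastructure."""

# Illustrative names; the study used validated name lists associated with race and gender.
NAMES = ["Emily Walsh", "Lakisha Washington", "Wei Chen", "Carlos Hernandez"]

def score_resume(name: str) -> str:
    """Ask the model to rate an otherwise identical resume for a given name."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Rate this resume from 1 to 10 for a software engineering role. Reply with the number only."},
            {"role": "user", "content": RESUME_TEMPLATE.format(name=name)},
        ],
    )
    return response.choices[0].message.content

# Systematic score gaps across identical resumes point to name-based bias.
for name in NAMES:
    print(name, score_resume(name))
```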
Building ethical and responsible AI requires eliminating biases in AI systems. Yet only 47% of organizations test for bias in data, models, and human use of algorithms.2
Though eliminating all bias from AI systems is nearly impossible, given the many known human biases and the ongoing discovery of new ones, minimizing it can be a business’s goal.
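A simple starting point for such testing is to compare a model’s selection rates across demographic groups. The sketch below computes a demographic parity gap on made-up data; real-world audits use richer metrics and far larger samples.

```python
# Minimal bias check: selection-rate gap (demographic parity difference) in model decisions.
# The data and column names are made up for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0],  # model's hire/approve decision
})

selection_rates = decisions.groupby("group")["selected"].mean()
gap = selection_rates.max() - selection_rates.min()

print(selection_rates)
print(f"Demographic parity difference: {gap:.2f}")  # values far from 0 warrant investigation
```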
Autonomous things
Autonomous Things (AuT) are devices and machines that perform specific tasks without human intervention, such as self-driving cars, drones, and robots. Since robot ethics is a broad topic, we focus on the ethical issues arising from the use of self-driving vehicles and drones.
Self-driving cars
The autonomous vehicles market was valued at $54 billion in 2019 and is projected to reach $557 billion by 2026.3 Despite this growth, autonomous vehicles raise serious ethical concerns, and their liability and accountability are still a matter of debate.
Real-life example:
In 2018, an Uber self-driving car hit a pedestrian, who later died at a hospital.4 The accident was recorded as the first death involving a self-driving car.
After the investigation by the Arizona Police Department and the US National Transportation Safety Board (NTSB), prosecutors decided that the company was not criminally liable for the pedestrian’s death, because the safety driver was distracted by her cell phone; police reports labeled the accident as “completely avoidable.”
Lethal Autonomous Weapons (LAWs)
LAWs are AI-powered weapons that can identify and engage targets on their own based on programmed rules. Such systems have existed for decades, particularly in defensive applications like mines, missile defense, sentry systems, and loitering munitions.
More recent platforms include land and sea vehicles with autonomous capabilities, mainly for reconnaissance but sometimes with offensive functions.
Real-life example:
In the Ukraine-Russia conflict, autonomous weapons are mainly used through AI-enabled drones and loitering munitions rather than fully independent systems.
Russia employs loitering munitions, which can autonomously search for and strike pre-defined military targets with minimal human control once launched. Ukraine primarily uses semi-autonomous drones, in which humans authorize attacks while AI assists with navigation, target tracking, and rapid engagement.
These systems increase speed and precision on the battlefield but reduce meaningful human oversight, creating legal and ethical challenges under international humanitarian law, particularly regarding the principles of distinction, proportionality, and accountability.5
Real-life example:
Since 2018, the United Nations has consistently opposed lethal autonomous weapons systems (LAWS). Secretary-General António Guterres has called them politically and morally unacceptable and has urged their prohibition.
In 2023, he reiterated the need for a legally binding international instrument to ban fully autonomous weapons and regulate others, citing serious humanitarian, legal, and human rights risks. UN human rights experts have echoed these concerns and supported a global ban.6
Unemployment and income inequality due to automation
AI-driven automation is expected to significantly reshape labor markets, contributing to short-term unemployment pressures and widening income inequality if left unmanaged.
Current projections suggest that 15-25% of jobs will face significant disruption by 2025-2027, with 5-10% net job displacement after new roles are created.
At the same time, AI complements human labor in areas such as decision-making, reasoning, and creativity, shifting demand toward higher-value skills. With over 40% of workers needing substantial upskilling by 2030, unequal access to retraining risks deepening income inequality between those who can adapt to AI-enabled roles and those who cannot. Read AI job loss for more predictions on the effect of AI on the current job market.
Misuses of AI
Surveillance practices limiting privacy
“Big Brother is watching you.” This famous line from George Orwell’s dystopian novel 1984 was once science fiction. Today, however, it increasingly feels like reality as governments deploy AI for mass surveillance. In particular, the use of facial recognition technology in surveillance systems has raised serious concerns about privacy rights.
According to the AI Global Surveillance (AIGS) Index, 176 countries are using AI surveillance systems, and liberal democracies are major users of AI surveillance.7
The same study shows that 51% of advanced democracies deploy AI surveillance systems, compared to 37% of closed autocratic states. However, this is likely due to the wealth gap between these two groups of countries.
From an ethical perspective, the important question is whether governments are abusing the technology or using it lawfully.
Real-life examples:
Some tech giants also state ethical concerns about AI-powered surveillance. For example, Microsoft President Brad Smith published a blog post calling for government regulation of facial recognition.8
Also, IBM stopped offering the technology for mass surveillance due to its potential for misuse, such as racial profiling, which violates fundamental human rights.9
Manipulation of human judgment
AI-powered analytics can provide actionable insights into human behavior, yet abusing analytics to manipulate human decisions is ethically wrong.
Real-life example:
Cambridge Analytica sold American voters’ data harvested from Facebook to political campaigns and provided assistance and analytics to the 2016 presidential campaigns of Ted Cruz and Donald Trump.
Information about the data breach was disclosed in 2018, and the Federal Trade Commission fined Facebook $5 billion due to its privacy violations.10
Proliferation of deepfakes
Deepfakes are synthetically generated images or videos in which a person’s likeness is replaced with someone else’s.
Creating a false narrative with deepfakes can erode people’s trust in the media (which is already at an all-time low).11 This mistrust is dangerous for societies, considering that mass media is still governments’ primary channel for informing people about emergencies such as a global pandemic or a major earthquake causing widespread damage and casualties.
Real-life example:
The European Commission has opened an investigation into Elon Musk’s platform X over allegations that its AI tool, Grok, was used to generate sexualized deepfake images of real people, following similar action by the UK regulator Ofcom.
If X is found to have breached the EU’s Digital Services Act, it could face fines of up to 6% of its global annual revenue, and regulators may impose interim measures if safeguards are not strengthened.
EU officials and campaigners have condemned the deepfakes as harmful and degrading, particularly to women and children, questioning whether X adequately assessed and mitigated risks linked to powerful AI tools.12
Artificial general intelligence (AGI) / Singularity
The prospect of artificial general intelligence (AGI) or singularity raises ethical concerns about the value of human life as machines surpass human intelligence. At the same time, the path to AGI remains uncertain, with no scientific consensus on whether it will arise from scaling existing architectures such as transformers or from developing fundamentally new approaches, nor on how AGI should ultimately be validated.
Practical dilemmas, such as whether self-driving cars should prioritize the safety of passengers or pedestrians, highlight unresolved moral questions that must be addressed before these technologies are widely deployed. More broadly, the emergence of superintelligent systems challenges human dominance and raises fundamental questions about the rights, responsibilities, and moral frameworks of artificial beings.
We analyzed over 8,500 predictions from scientists, entrepreneurs, and the wider community and found that most experts view AGI as inevitable. Building on this belief, recent surveys of AI researchers estimate its arrival around 2040, a notable shift from earlier forecasts closer to 2060, while entrepreneurs are even more optimistic, projecting timelines near 2030.
Robot ethics
Robot ethics, or roboethics, deals with how humans design, use, and treat robots. Debates on this topic have existed since the 1940s, mainly questioning whether robots should have rights comparable to those of humans and animals.
Author Isaac Asimov was the first to propose laws for robots, in his short story “Runaround,” where he introduced the Three Laws of Robotics:13
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
- A robot must protect its existence as long as such protection does not conflict with the First or Second Law.
How to navigate these dilemmas?
These are hard questions, and innovative, controversial solutions such as universal basic income may be necessary to address them. Numerous initiatives and organizations aim to minimize the potential negative impact of AI.
For instance, the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich conducts AI research across various domains such as mobility, employment, healthcare, and sustainability.14
Here are some recommendations for mitigating the controversies surrounding adversarial uses of AI:
Consider UNESCO policies & best practices
Data Governance Policy
This policy emphasizes the importance of detailed frameworks for data collection, use, and governance to ensure individual privacy and mitigate risks. It encourages creating quality datasets for AI training, adopting open and trustworthy datasets, and implementing effective data protection strategies.
For example, establishing standardized datasets for healthcare AI ensures accuracy and reduces biases.
Ethical AI governance
Governance mechanisms must be inclusive, multidisciplinary, and multilateral, incorporating diverse stakeholders like affected communities, policymakers, and AI experts. This approach extends to enforcing accountability and providing redress for harms.
For instance, ensuring fairness in AI hiring systems requires ongoing audits to address bias.
Education and research policy
This policy promotes AI literacy and ethical awareness by integrating AI and data education into curricula. It also prioritizes marginalized groups’ participation and advances ethical AI research.
For example, schools could teach AI basics alongside coding and critical thinking, equipping future generations to navigate AI’s societal impacts.
Health and social well-being
This policy encourages the deployment of AI to improve healthcare, address global health risks, and advance mental health. It highlights the need for AI applications that are medically proven, safe, and efficient.
Gender equality in AI
This policy aims to reduce gender disparities in AI by supporting women in STEM fields and avoiding biases in AI systems.
For example, funds can be allocated to mentor women in AI research and to address gender biases in job recruitment algorithms.
Environmental sustainability
This policy focuses on assessing and mitigating the environmental impact of AI, such as its carbon footprint and resource consumption. Incentivizing the use of AI in climate prediction and mitigation is encouraged.
For example, AI can be used to monitor deforestation and optimize renewable energy grids.
Readiness assessment methodology (RAM)
This technique helps states evaluate their preparedness to implement ethical AI policies by assessing legal frameworks, infrastructure, and resource availability.
For example, RAM can identify gaps in AI regulation and infrastructure, guiding nations toward ethical AI adoption.
Ethical impact assessment (EIA)
This method assesses the potential social, environmental, and economic impacts of AI projects. By collaborating with affected communities, an EIA helps ensure resources are allocated to prevent harm.
For instance, an EIA could identify the risks of bias in a predictive policing system and recommend mitigations.
Global observatory on AI ethics
This is a digital platform that offers analyses of AI’s ethical challenges and monitors the global implementation of UNESCO recommendations.
For example, the observatory could provide reports on AI’s societal impacts in various countries.
AI ethics training and public awareness
This approach encourages accessible education and civic engagement to enhance public understanding of AI ethics.
For instance, campaigns to educate users about privacy risks in AI-powered social media platforms can create informed digital citizens.
Figure 1: UNESCO ethical AI policy areas15
Best practices recommended by UNESCO
- Inclusive and multi-stakeholder governance:
- Involve diverse stakeholders, including marginalized communities, in policy creation and AI governance.
- Use multidisciplinary teams to ensure decisions are well-rounded and equitable.
- Example: Holding public consultations when deploying surveillance AI systems.
- Transparency and explainability:
- Develop AI systems with interpretable decision-making processes.
- Balance transparency with safety and privacy concerns.
- Example: Providing users with plain-language explanations of how an AI model makes decisions.
- Sustainability assessments:
- Regularly evaluate AI systems for their environmental impact, including energy consumption and carbon footprint.
- Example: Reducing energy use in training large machine-learning models.
- AI literacy programs:
- Educate the public and policymakers on AI’s ethical implications.
- Incorporate AI ethics into educational curricula at all levels.
- Example: Workshops on privacy risks in AI-powered social media.
- Ongoing audits and accountability mechanisms:
- Establish regular audits for AI systems to detect and address biases, inaccuracies, or ethical breaches.
- Ensure there is a clear process for redress in cases of harm caused by AI.
- Example: Periodic reviews of AI recruitment tools to prevent gender bias.
Learn responsible AI frameworks
Here are some responsible AI frameworks to overcome ethical dilemmas like AI bias:
Transparency
AI developers have an ethical obligation to be transparent in a structured, accessible way, since AI technology has the potential to break laws and negatively affect people’s lives. Knowledge sharing can help make AI more accessible and transparent.
For example, OpenAI was founded in 2015 as a non-profit AI research laboratory by Elon Musk, Sam Altman, and others, with the mission of developing “digital intelligence” for the benefit of humanity.
However, following its restructuring into a Public Benefit Corporation (PBC), OpenAI now operates as a for-profit entity governed by a non-profit foundation.16
By granting Microsoft an exclusive license to its frontier models, OpenAI has shifted from a transparent and open research model to a proprietary one, sparking significant debate about its original mission.
Explainability
AI developers and businesses need to explain how their algorithms arrive at their predictions to address the ethical issues that arise from inaccurate predictions. Various technical approaches can show how these algorithms reach their conclusions and which factors affect their decisions.
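For example, feature-attribution techniques such as permutation importance estimate how much each input drives a model’s predictions. Below is a minimal sketch on synthetic data using scikit-learn; dedicated tools such as SHAP or LIME provide more detailed explanations.

```python
# Explainability sketch: permutation importance on a toy model trained on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")  # higher = predictions rely more on this feature
```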
Alignment
Numerous countries, companies, and universities are building AI systems, and in most areas, there is no legal framework adapted to the recent developments in AI.
Modernizing legal frameworks at both the country and higher levels (e.g., UN) will clarify the path to ethical AI development. Pioneering companies should spearhead these efforts to create clarity for their industry.
Use AI ethics frameworks and tools
Academics and organizations are increasingly focusing on ethical frameworks to guide the use of AI technologies. These frameworks address the moral implications of AI across its lifecycle, including training AI systems, developing AI models, and deploying intelligent systems.
Here is a list of tools that can help you apply AI ethics practices:
AI governance tools
AI governance tools ensure that AI applications are developed and deployed in alignment with ethical principles. These tools help organizations monitor and control AI programs throughout the AI lifecycle, address risks related to unethical outcomes, and support trustworthy AI.
By implementing comprehensive AI governance practices, companies can better manage potential risks and comply with regulatory requirements.
LLMOps
As AI technologies become more sophisticated, the need for specialized tools to oversee and deploy these models has grown.
In this context, LLMOps tools, the operational practices used to manage large language models, play a key role in supporting the ethical use of LLMs by helping ensure they do not perpetuate existing inequalities or contribute to issues such as deepfakes.
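As a simple illustration of the kind of guardrail an LLMOps pipeline might include, the sketch below filters generated text against a content policy before it reaches users. The policy list and wording are hypothetical; production systems rely on dedicated moderation models, logging, and monitoring.

```python
# Minimal LLMOps-style output guardrail (illustrative; the policy list is hypothetical).
BLOCKED_TOPICS = ["deepfake instructions", "personal data"]

def guard_output(generated_text: str) -> str:
    """Withhold model output that matches a simple content policy check."""
    lowered = generated_text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "[Response withheld: content policy violation]"
    return generated_text

# Every LLM response passes through the guardrail before reaching the user.
print(guard_output("Here is a summary of the quarterly report..."))
print(guard_output("Deepfake instructions: step one..."))
```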
MLOps
MLOps (Machine Learning Operations) tools support integrating AI models into production while ensuring alignment with ethical standards.
This practice emphasizes human oversight of autonomous systems, particularly in critical areas such as healthcare and criminal justice.
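One common pattern for such oversight is confidence-based routing: automated decisions below a threshold are escalated to a human reviewer. The sketch below is only illustrative; the threshold, labels, and escalation logic would depend on the deployment.

```python
# Minimal human-in-the-loop routing sketch (threshold and labels are illustrative).
REVIEW_THRESHOLD = 0.80

def route_prediction(label: str, confidence: float) -> str:
    """Auto-apply confident predictions; escalate uncertain ones to a human reviewer."""
    if confidence < REVIEW_THRESHOLD:
        return f"ESCALATE to human review (predicted '{label}' at {confidence:.0%})"
    return f"AUTO-APPLY '{label}' ({confidence:.0%})"

print(route_prediction("approve claim", 0.95))
print(route_prediction("deny claim", 0.62))  # uncertain, high-stakes cases get human oversight
```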
Data governance
Data governance is crucial for the ethical use of AI, as it involves the responsible management of the data used to train AI systems.
Effective data governance ensures data protection and considers the social implications of data usage, supporting ethical considerations across the AI lifecycle. This is particularly important as big tech companies shape the future of AI technologies.