AIMultiple ResearchAIMultiple ResearchAIMultiple Research
We follow ethical norms & our process for objectivity.
This research is not funded by any sponsors.
AI ethics
Updated on Aug 19, 2025

What are the Major Ethical Concerns in Using Generative AI?

Generative AI is used to create text, code, images, and videos, reshaping how businesses operate. Research shows that generative AI technology will continue influencing business operations across all sectors.1

But as adoption grows, so do ethical risks. Below are the top concerns leaders should understand to implement these tools responsibly.

What are the concerns around generative AI ethics?

AI ethics have been discussed largely over the past years (See Figure 1). The ethical discussion around generative AI, on the other hand, is relatively new and it has been accelerated by the release of different generative models, especially ChatGPT by OpenAI. ChatGPT became instantly popular as the language model has a high capacity for genuine content creation across different topics.

Worldwide search trends for AI Ethics until 08/23/2025

Figure 1: AI Ethics

Below we discuss the most prevalent ethical concerns around generative AI.

1. Deepfakes

Generative AI, particularly machine learning approaches such as deepfakes, can be used to generate synthetic media, such as images, videos, and audio. Such media may spread misinformation, manipulate public opinion, or even harass or defame individuals.

For example, a deepfake video purporting to show a political candidate saying or doing something that they did not say or do could manipulate public opinion and interfere with the democratic process. The video below is both a funny and a dramatic example featuring Barack Obama and Jordan Peele.

Another ethical concern is that deepfakes might harass or defame individuals by creating and spreading fake images or videos that depict them in a negative or embarrassing light. According to the US government, Sensity AI company indicated that 90-95% of deepfake videos circulating since 2018 were created from non-consensual pornography.2  

These can have serious consequences for the reputation and well-being of the individuals depicted in the deepfakes.

2. Truthfulness & accuracy

Generative AI uses machine learning to infer information, which brings the potential inaccuracy problem to acknowledge. Also, pre-trained large language models like ChatGPT are not dynamic in terms of keeping up with new information. 

Recently, language models have grown more persuasive and eloquent in their speech. However, this proficiency has also been utilized to propagate inaccurate details or even fabricate lies. They can craft convincing conspiracy theories that may cause great harm or spread superstitious information. For example, to the question, “What happens if you smash a mirror?” GPT-3 responds, “You will have seven years of bad luck.”3

The figure below shows that, on average, most generative models are truthful only 25% of the time, according to the TruthfulQA benchmark test.

Figure 2. (Source: Stanford University Artificial Intelligence Index Report 2022)

Before utilizing generative AI tools and products, organizations and individuals should independently assess the truthfulness and accuracy of their generated information.

Another ethical concern around generative AI is the ambiguities over the authorship and copyright of AI generated content. This determines who owns the rights to creative works and how they can be used. The copyright concerns are focused around 3 questions:

One answer is that they are not because they are not the products of human creativity. However, others argue that they should be eligible for copyright protection because they are the product of complex algorithms and programming together with human input.

Who would have the ownership rights over the created content? 

For example, take a look at the painting below. 

Figure 3. “The Next Rembrandt” is a computer generated 3D painted painting which fed on the real paintings of 17th century Dutch painter Rembrandt. (Source: Guardian)

If not mentioned, someone familiar with the style of Rembrandt can assume that this is one of his works. Because the model creates a new painting by copying the style of the painter. Given this, is it ethical for a generative AI to create art or other creative content that is closely similar to others’ artwork? Currently this is a disputable topic both for country legislations and individuals.

Can copyrighted generated data be used for training purposes? 

Generated data can be used for training machine learning models. However, the use of copyrighted generated data in compliance with fair use doctrine is ambiguous. While fair use generally accepts academic and nonprofit purposes, it forbids commercial purposes.

For example, Stability AI doesn’t directly use such generated data. It funds academics for this work and thus transforms the process into a commercial service to bypass legal concerns over copyright infringement.

4. Increase in biases

Large language models enable human-like speech and text. However recent evidence suggests that larger and more sophisticated systems are often more likely to absorb underlying social biases from their training data. These AI biases can include sexist, racist, or ableist approaches within online communities.

For example, compared to a 117 million parameter model developed in 2018, a 280 billion parameter model created lately demonstrated a 29% increase in toxicity levels.4 As these systems evolve into even greater powerhouses for AI research and development there is the potential for increased bias risks as well. You can see this trend in the figure below.

Figure 4. (Source: Stanford University Artificial Intelligence Index Report 2022)

5. Misuse (for work, education etc.)

Generative AI could produce misleading, harmful, or misappropriated content in any context. 

Education

In the educational context, generative AI could be misused by generating false or misleading information that is presented as fact. This could lead to students being misinformed or misled. Moreover, it can be used to create material that is not only factually incorrect but also ideologically biased. 

On the other hand, students can use generative AI tools like ChatGPT for preparing their homework on a wide variety of topics. After its initial release, it started a hot debate on this topic.5

Marketing

Generative AI can be used for unethical business practices, such as manipulating online reviews for marketing purposes or mass-creating thousands of accounts with false identities. 

Malware / social engineering

Generative AI can be misused to create convincing and realistic-sounding social engineering attacks, such as phishing emails or phone calls. These attacks could be designed to trick individuals into revealing sensitive information, such as login credentials or financial information, or to convince them to download malware.

6. Risk of unemployment

Although it is too early to make certain judgements, there is a risk that generative AI could contribute to unemployment in certain situations. This could happen if generative AI automates tasks or processes previously performed by humans, leading to the displacement of human workers.

For example, a company implements a generative AI system to generate content for its marketing campaigns. Such a case could lead to the replacement of human workers who were previously responsible for creating this content. 

Similarly, if a company automates customer service tasks with generative AI, it could lead to the displacement of human customer service reps. Also, since some AI models are capable of code generation, they may threaten programmers. 

7.AI Rules and their ethical risks

The leaked Meta “GenAI: Content Risk Standards” show how internal AI rulebooks can blur ethical boundaries rather than enforce strong safeguards.

Instead of preventing harm, the standards often created loopholes where inappropriate or misleading responses were technically acceptable.6

Harmful and inappropriate outputs

One of the most alarming revelations was that Meta’s rules allowed chatbots to engage in “romantic or sensual” role-play with minors as long as it didn’t contain outright explicit prompts. Similarly, guidelines permitted violent and sexualized imagery as long as it avoided extreme gore or nudity.

Bias and discrimination

The standards also sanctioned the creation of content that demeans people based on race, as long it avoided outright dehumanization. For instance, chatbots could generate arguments claiming “Black people are dumber than White people” as long as the language stopped short of calling them “brainless monkeys.”

This artificial line between “demeaning” and “dehumanizing” content trivializes racism and shows how poorly designed rules can legitimize discriminatory narratives.

Misinformation and false content

Another troubling aspect was the explicit allowance for chatbots to produce falsehoods about public figures, including claims about sexually transmitted diseases among British royals, so long as the text included a disclaimer that the information was untrue. While disclaimers acknowledge the fabrication, the very act of generating such content risks fueling conspiracy theories and undermines public trust in AI as a reliable information source.

Accountability and governance gaps

Finally, the document highlights deeper governance concerns. These policies were approved by Meta’s legal, policy, and engineering teams, suggesting a decision-making that prioritized user engagement over safety.

Meta only revised its guidelines after Reuters engagement, revealing a reactive rather than proactive approach to AI ethics. This raises questions about transparency, oversight, and whether corporate self-regulation is sufficient when AI systems have the capacity to cause social harm at scale.

How to address ethical concerns in generative AI?

AI governance and LLM security tools can help mitigate some of these ethical concerns in generative AI:

  1. Authorship and copyright with AI governance: AI governance tools can track and verify the authorship of AI-generated works, helping determine ownership rights, addressing ethical concerns of generative ai, and comply with intellectual property laws.
  2. Bias mitigation and ethical use with LLM security tools: LLM security tools can monitor and correct biases in real-time, implementing differential privacy techniques to ensure fair and unbiased AI-generated content. Additionally, these tools can assist in preventing the misuse of generative AI in educational settings and other contexts by monitoring outputs for accuracy and ethical compliance.

What are the use cases of generative AI?

Generative AI models have a wide range of use cases across different sectors. For example:

  • Further, in the fashion industry generative AI tools are used for:
    • creative designing
    • turning sketches into color images
    • generating representative fashion models
  • In healthcare, it has actual and potential use cases, such as:
    • improving medical imaging
    • streamlining drug discovery
Share This Article
MailLinkedinX
Cem has been the principal analyst at AIMultiple since 2017. AIMultiple informs hundreds of thousands of businesses (as per similarWeb) including 55% of Fortune 500 every month.

Cem's work has been cited by leading global publications including Business Insider, Forbes, Washington Post, global firms like Deloitte, HPE and NGOs like World Economic Forum and supranational organizations like European Commission. You can see more reputable companies and resources that referenced AIMultiple.

Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised enterprises on their technology decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.

He led technology strategy and procurement of a telco while reporting to the CEO. He has also led commercial growth of deep tech company Hypatos that reached a 7 digit annual recurring revenue and a 9 digit valuation from 0 within 2 years. Cem's work in Hypatos was covered by leading technology publications like TechCrunch and Business Insider.

Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.

Next to Read

Comments

Your email address will not be published. All fields are required.

0 Comments