To Know - ACTING WITH A PURPOSE
Biased by Design
How artificial intelligence reflects human assumptions
CALL to Start - Apr 2026
EDITION EDITORIAL & OVERVIEW
Biased by Design
#68
CALL to Start - Apr 2026

AI New Era

Today we can say that AI has reached frontiers that would have been unimaginable ten years ago. Even in the short term, it has already brought significant changes to human behavior, and these shifts often come with unintended consequences. Much has been said about how AI has made various fields more accessible, opening up areas once dominated by a select few, such as the arts, computing, and even scientific research. Yet alongside this new accessibility come challenges that humanity must confront collectively.

Misleading Representations

One of the most prominent challenges in recent years is the generation of images that portray communities, cultures, genders, and sexual orientations in disrespectful or distorted ways. For example, at its launch the Gemini model produced historically inaccurate depictions of World War II, such as Black Nazis or Asian Vikings, among others [1]. This not only distorts history but also plants doubt in the minds of less informed individuals about the accuracy of historical events. In this sense, those who control AI outputs also influence the narratives that are read and reproduced.

Embedded Biases

MIT Technology Review [2] published an article highlighting biases found in AI systems, warning precisely about this issue. These systems can reinforce hierarchies, placing certain groups in positions of prominence while marginalising others. Similar criticisms have been directed at Google for the erasure or sexualisation of Black women in its search results [3].

Ethical Responsibility

Although concerns about misleading narratives may seem exaggerated, their consequences are serious, affecting both education and minority rights. Addressing biased discourse requires action not only from researchers but also from the companies developing these tools, from workers demanding accountability, and from political leaders and regulators committed to ensuring ethical use of AI technologies.

This article is brought to you by the Diversity & Inclusion team.

Sources

[1] https://humanities.org.au/power-of-the-humanities/black-nazis-asian-vikings-and-other-problems-with-generative-ai/

[2] https://www.technologyreview.com/2023/03/22/1070167/these-news-tool-let-you-see-for-yourself-how-biased-ai-image-models-are/

[3] https://time.com/5209144/google-search-engine-algorithm-bias-racism/

[4] https://www.nbcnews.com/news/us-news/google-engineer-fired-writing-manifesto-women-s-neuroticism-sues-company-n835836

Let Us Know Your Thoughts About Our Newsletter!
Start by Saying Hi!
© 2026 Celfocus. All rights reserved.