


Today we can say that AI has reached frontiers that would have been unimaginable ten years ago. Even in the short term, it has already brought significant changes to human behavior, and these shifts often come with unintended consequences. Much has been said about how AI has made various fields more accessible, opening up areas once dominated by a select few, such as the arts, computing, and even scientific research. However, alongside this new accessibility, we also face challenges that humanity must confront collectively.
One of the most prominent challenges in recent years is the generation of images that portray communities, cultures, genders, and sexual orientations in disrespectful or distorted ways. For example, at its launch the Gemini model produced historically inaccurate depictions related to World War II, such as Black Nazis or Asian Vikings, among others [1]. This not only represents a historical distortion but also plants doubt in the minds of less informed readers about the accuracy of historical events. In this sense, those who control AI outputs also shape the narratives that are read and reproduced.
MIT Technology Review [2] published an article highlighting biases found in AI image models, warning precisely about this issue. These systems can reinforce hierarchies, placing certain groups in positions of prominence while marginalizing others. Similar criticism has been directed at Google for the erasure or sexualization of Black women in its search results [3].
Although concerns about misleading narratives may seem exaggerated, their consequences are serious, affecting both education and minority rights. Addressing biased discourse requires action not only from researchers but also from the companies developing these tools, from workers demanding accountability, and from political leaders and regulators committed to the ethical use of AI technologies.
This article is brought to you by the Diversity & Inclusion team.
Sources
[1] https://humanities.org.au/power-of-the-humanities/black-nazis-asian-vikings-and-other-problems-with-generative-ai/
[2] https://www.technologyreview.com/2023/03/22/1070167/these-news-tool-let-you-see-for-yourself-how-biased-ai-image-models-are/
[3] https://time.com/5209144/google-search-engine-algorithm-bias-racism/
[4] https://www.nbcnews.com/news/us-news/google-engineer-fired-writing-manifesto-women-s-neuroticism-sues-company-n835836


