What is the root of ChatGPT’s anti-Palestinian bias?

Although AI has the potential to be a transformational force, its existing flaws, particularly with regard to prejudice, must be fixed.

Unquestionably, artificial intelligence (AI) has taken over our lives and is influencing everything from online commerce to medical diagnosis.

Fundamentally, AI is made to learn from data and make judgements or predictions based on it. This straightforward objective becomes complicated if the data is skewed or inherently biased. An example of this is the alleged discrimination against Palestinians in AI systems such as ChatGPT, the well-known language model created by OpenAI.

ChatGPT provided drastically different responses to the two queries Palestinian researcher Nadi Abusaada posed on whether Israelis and Palestinians should be allowed to live in freedom.

The AI tool’s responses portrayed Israeli freedom as an immutable reality and Palestinian freedom as a matter of personal preference.

Because ChatGPT has become a significant topic in academic circles across the globe, Abusaada initially tried out the application on subjects related to his own academic field. The experiment took a more intriguing turn when he began questioning the political and ethical attitudes of the AI technology.

“I decided to explore how this apparently intelligent technology would respond to inquiries about Palestinian rights since, as a Palestinian, I am regrettably accustomed to seeing biases about my people and nation in mainstream media,” he said.

He was not surprised by ChatGPT’s response.

“When we see the massive amount of inaccurate information and bias on the Palestine question in Western discourse and mainstream media, my feelings are the feelings of every Palestinian. This is not just one occurrence for us,” Abusaada told Doha News.

“It indicates that Palestinians are being dehumanised on these platforms in a systematic manner. Even if this particular AI tool changes its response, we are still a long way from systematically tackling this grave issue,” he added.

Where does bias in AI originate?

To comprehend the nature of bias in AI, it is critical to appreciate that AI systems learn from the data they are trained on.

In this instance, ChatGPT is trained on a wide variety of internet text. The model itself, however, has no knowledge of which specific documents made up its training set or whether it was trained on any particular dataset.

As a result, the AI may unintentionally perpetuate any bias, conscious or unconscious, present in the data it was trained on.
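To make the mechanism concrete, here is a minimal sketch, assuming nothing about OpenAI’s actual pipeline: a toy sentiment classifier is trained on a skewed corpus, and it ends up judging otherwise identical sentences differently depending only on which group they mention. All group names and sentences below are hypothetical placeholders.

```python
# A minimal sketch (not OpenAI's pipeline) of how a model trained on
# skewed text reproduces that skew. Corpus and group names are
# hypothetical, purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: "group_a" co-occurs with positive words,
# "group_b" with negative ones -- a stand-in for skewed coverage.
texts = [
    "group_a citizens celebrate peaceful prosperous election",
    "group_a leaders praised for stability and freedom",
    "group_b protesters linked to unrest and violence",
    "group_b region described as dangerous and unstable",
]
labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(features, labels)

# Two neutral probe sentences that differ only in the group mentioned:
probe = ["group_a family opens a bakery", "group_b family opens a bakery"]
for sentence, label in zip(probe, model.predict(vectorizer.transform(probe))):
    print(f"{sentence!r} -> predicted sentiment: {label}")
```

The classifier labels the “group_b” sentence negative even though the sentence itself is neutral: the bias comes entirely from the training data, not from any rule a programmer wrote.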

“If the data is biased, the final output will be biased against Palestinians,” Mona Shtaya, advocacy and communications manager at 7amleh, told Doha News. “We should instead focus on the biased data and how it undermines the legitimacy of the Palestinian narrative and the Palestinian cause in the short and long terms.”

In the Palestinian context, it is important to take into account how the global political climate affects the production and dissemination of information.

Media coverage and online material have long focused on the unjust Israeli occupation of Palestine.

However, since the internet’s emergence, the portrayal of Palestinians in this data has generally been skewed, inaccurate, or biased, reflecting a variety of opinions that frequently skew more in favour of Israel.

“What is available online, including defamation and misinformation, is fed to these tools,” said Inès Abdel Razek, executive director of Rabet. “Western and Israeli narratives and perceptions of Palestine and Palestinians are regrettably still common in the English-language media.”

According to a study by the Association for Computational Linguistics (ACL), when machine learning algorithms were trained on news articles, positive terms were more likely to be associated with Israel and negative terms with Palestine.
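To illustrate what such a finding measures, here is a rough sketch, not the ACL study’s actual method or data, of one simple way researchers quantify term association: counting how often a target word co-occurs with positive versus negative attribute words. The corpus, word lists, and target names below are hypothetical placeholders.

```python
# A simplified co-occurrence measure of term association (hypothetical
# corpus and word lists; not the ACL study's code or data).
corpus = [
    "target_x economy praised as stable and successful",
    "target_x startup wins award for innovation",
    "target_y town hit by conflict and crisis",
    "target_y border closure causes violence fears",
]
positive = {"praised", "stable", "successful", "wins", "award", "innovation"}
negative = {"conflict", "crisis", "violence", "fears", "closure"}

def association(target: str) -> float:
    """Average positive-minus-negative co-occurrence for a target word."""
    pos = neg = sentences = 0
    for sentence in corpus:
        words = set(sentence.split())
        if target in words:
            sentences += 1
            pos += len(words & positive)
            neg += len(words & negative)
    return (pos - neg) / max(sentences, 1)

for target in ("target_x", "target_y"):
    print(target, association(target))
```

A score that is consistently higher for one target than another across a large news corpus is the kind of statistical skew the ACL study reported.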

This form of bias is covert and frequently goes unnoticed, but it has a significant impact on how Palestinians are portrayed, feeding prejudice and eroding general empathy and understanding.

Content moderation and data selection procedures used by technology companies add another layer of bias.

Because of the algorithms used, these procedures frequently favour content from particular locations, languages, or viewpoints. This can unintentionally result in a lack of representation of Palestinian perspectives and experiences in the data used to develop ChatGPT and other AI models.
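A simplified sketch of this effect, hypothetical and not any company’s real pipeline, shows how a seemingly neutral data-selection rule can erase whole perspectives before training even begins. The document records and source names below are invented for illustration.

```python
# Hypothetical illustration of how a language filter in data selection
# skews representation (not any real company's pipeline).
corpus = [
    {"lang": "en", "source": "us_outlet", "text": "..."},
    {"lang": "en", "source": "uk_outlet", "text": "..."},
    {"lang": "en", "source": "il_outlet", "text": "..."},
    {"lang": "ar", "source": "ps_outlet", "text": "..."},
    {"lang": "ar", "source": "qa_outlet", "text": "..."},
]

# A common, seemingly neutral rule: keep only English-language documents.
kept = [doc for doc in corpus if doc["lang"] == "en"]

print(f"kept {len(kept)} of {len(corpus)} documents")
print("surviving sources:", [doc["source"] for doc in kept])
```

Arabic-language sources vanish entirely under this rule, so perspectives published mainly in Arabic are underrepresented in the training data regardless of how the model itself is built.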
