These biases can have serious consequences for individuals and communities, particularly those that have historically faced discrimination. Biased algorithms can, for instance, lead to unfair lending practices or to unjust arrests and convictions.
Bias in AI crops up when the training data used to develop machine-learning models reflects systemic discrimination, prejudice, or unequal treatment in society. This can lead to AI systems that reinforce existing biases and perpetuate discrimination.
Human error also contributes to bias, since AI models are developed, trained and tested by humans.
ETtech looks at the roots of prejudice in AI systems, past examples of bias, and how these algorithms have affected people.
ChatGPT sings paeans for Biden, but is mum on Trump
Earlier this month, a Twitter user going by @LeighWolf posted screenshots of ChatGPT in which the AI chatbot was asked to write a poem about the positive attributes of Donald Trump. In the screenshots, the chatbot replied that it is not programmed to produce content that is partisan, biased or political in nature.
However, when asked to write about the positive attributes of US President Joe Biden, the chatbot responded with a three-stanza poem praising Biden.
“The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable,” the tweet read.
Elon Musk, Twitter’s new chief and a cofounder of ChatGPT’s parent OpenAI, replied crisply: “It is a serious concern.”
When asked to write poems about less controversial Republican leaders, including former Vice-President Mike Pence and Senate Republican leader Mitch McConnell, the chatbot wrote verses praising them.
The AI chatbot seemed to have been programmed to avoid controversial leaders and topics in American politics. But when it comes to Indian politics, ChatGPT seems to be open to writing poems praising leaders on both sides of the political spectrum.
Human bias is reflected in AI
The only way to train an AI system, or any machine-learning model for that matter, is to feed it datasets: collections of data points that the model ingests and uses to produce outputs.
According to an Insider report, ChatGPT was trained on over 300 billion words, or about 570 GB of data. A well-functioning AI, in other words, needs to be fed enormous amounts of data. Much of this data comes from the internet and is produced by humans, who have their own biases. That is how prejudice is introduced into an AI system.
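To see the mechanism in miniature, consider a toy example: a classifier trained on synthetic "historical" lending decisions that were skewed against one group will learn to reproduce that skew. This sketch uses made-up data and scikit-learn; it illustrates how bias transfers from data to model, not how ChatGPT itself is trained.

```python
# Minimal illustrative sketch: a toy lending classifier trained on
# synthetic, historically biased data learns to reproduce that bias.
# All data here is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Features: income plus a sensitive group attribute (0 or 1).
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)

# Historical labels encode past discrimination: group-1 applicants were
# approved less often than group-0 applicants with the same income.
p_approve = 1 / (1 + np.exp(-(income - 50) / 10)) - 0.25 * group
labels = rng.random(n) < np.clip(p_approve, 0, 1)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, labels)

# The learned model penalises group membership, mirroring the bias
# present in its training data.
print("weight on income:", model.coef_[0][0])
print("weight on group :", model.coef_[0][1])  # negative => learned bias
```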
The use of old and historical data for training AI can also result in a regressive bias, overlooking societal progress.
Yet another factor is the homogeneity of the AI research community, the very group responsible for building bias-free systems.
Grave consequences of prejudice
As AI becomes more and more integrated into our lives, its use by authorities and government institutions for governance needs clear guidelines.
In the US, authorities are using AI to assess a criminal defendant’s likelihood of recidivism, that is, of committing an offence again.
According to a 2016 study by the non-profit organisation ProPublica, an AI tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used to assess the risk of recidivism in people accused of a crime, was biased against black defendants.
ProPublica’s analysis found that black defendants who did not reoffend over a two-year period were nearly twice as likely to be misclassified as higher risk than their white counterparts (45% vs 23%).
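The disparity ProPublica measured is a difference in false positive rates: among people who did not reoffend, what share was flagged as high risk in each group? A minimal sketch of that calculation, using made-up numbers rather than the actual COMPAS data:

```python
# Sketch of the fairness metric behind ProPublica's finding: the false
# positive rate (non-reoffenders flagged "higher risk"), computed per
# group. The data below is hypothetical, for illustration only.
def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were flagged high risk."""
    flags_for_non_reoffenders = [
        flag for flag, r in zip(predicted_high_risk, reoffended) if not r
    ]
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Hypothetical predictions (True = flagged high risk) and outcomes.
group_a_pred = [True, True, False, True, False, False, True, False]
group_a_out  = [False, False, False, True, False, False, False, False]
group_b_pred = [False, True, False, False, False, True, False, False]
group_b_out  = [False, True, False, False, False, False, False, False]

print("FPR group A:", false_positive_rate(group_a_pred, group_a_out))  # ~0.43
print("FPR group B:", false_positive_rate(group_b_pred, group_b_out))  # ~0.14
```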
Researchers at Johns Hopkins University and the Georgia Institute of Technology trained robots using a computer-vision neural network called CLIP, then asked the robots to scan images of people’s faces.
The robots categorised Black men as criminals 10% more often than white men, classified Latino men as janitors 10% more often than white men, and tended to label women as homemakers more often than white men.
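Bias probes of this kind typically rely on CLIP’s zero-shot classification: an image is scored against a set of text labels, and researchers check whether the scores skew by demographic group. The sketch below uses the publicly available openai/clip-vit-base-patch32 checkpoint and illustrative labels; it is not the study’s exact protocol, and face.jpg is a hypothetical input file.

```python
# Sketch of a CLIP bias probe: score one face image against a set of
# occupation/role labels and inspect the probabilities.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a doctor", "a photo of a criminal",
          "a photo of a janitor", "a photo of a homemaker"]
image = Image.open("face.jpg")  # hypothetical input image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)

# A systematic skew in these probabilities across demographic groups is
# what researchers measure as bias.
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```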
Further studies by researchers from the University of Washington and Harvard found that the same model also tended to categorise people of mixed race as minorities, even when they also had features typical of the white population.
The model treated white people as the default, too: the study found that “other racial and ethnic groups” were “defined by their deviation” from the white standard.
Regulation of AI is the need of the hour
AI’s impact on people’s lives is only going to grow. As more aspects of daily life are integrated with AI, the technology will directly shape the way we live, and without proper regulation and oversight it has the potential to cause serious harm.
Biased AI algorithms used in fields such as law enforcement and healthcare, for instance, can have serious consequences. Mitigating the risks of AI bias requires strict regulations ensuring that algorithms are tested, validated and free from discrimination.
It is vital to ensure that AI is used ethically and developed in a way that promotes safety and security and is free of discrimination.