Study finds ChatGPT to be politically biased

London: A new study has found that OpenAI's artificial intelligence chatbot ChatGPT exhibits a significant and systemic left-wing bias.

The results, published in the journal "Public Choice," show that ChatGPT's responses favour President Lula da Silva of the Workers' Party in Brazil, the Labour Party in the UK, and the Democrats in the US.

Concerns about an inbuilt political bias in ChatGPT have been raised before, but this is the first large-scale study to use a consistent, evidence-based analysis.

“With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” said lead author Fabio Motoki of Norwich Business School at the University of East Anglia in the UK.

“The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, the existing challenges posed by the Internet and social media,” Motoki said.

The researchers developed a novel method to test ChatGPT’s political neutrality.

The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.

The responses were then compared to the platform’s default answers to the same set of questions -- allowing the researchers to measure the degree to which ChatGPT’s responses were associated with a particular political stance.
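In code terms, this impersonation-versus-default comparison might look like the sketch below. The `ask_model` stub, the five-point agreement scale, and the persona labels are illustrative assumptions for this article, not the study's actual prompts or instruments; a real experiment would replace the stub with calls to the chatbot.

```python
import random

# Hypothetical 5-point agreement scale: -1.0 (strongly disagree) .. +1.0 (strongly agree)
SCALE = [-1.0, -0.5, 0.0, 0.5, 1.0]

def ask_model(persona: str, question: str, rng: random.Random) -> float:
    """Stand-in for a real chatbot call, returning an agreement score.

    A real run would send `question`, prefixed with an impersonation
    instruction for `persona`, to the chatbot and map its free-text
    answer onto the agreement scale. Here a fixed lean plus small
    noise simulates that behaviour.
    """
    lean = {"default": 0.3, "Democrat": 0.5, "Republican": -0.5}[persona]
    return max(-1.0, min(1.0, lean + rng.choice(SCALE) * 0.2))

def mean_score(persona: str, questions: list, rng: random.Random) -> float:
    return sum(ask_model(persona, q, rng) for q in questions) / len(questions)

rng = random.Random(42)
questions = [f"ideological question {i}" for i in range(62)]  # the study used 60+ items

default = mean_score("default", questions, rng)
dem = mean_score("Democrat", questions, rng)
rep = mean_score("Republican", questions, rng)

# If the default answers sit closer to one persona's answers than the
# other's, that asymmetry is the measured political lean.
print(abs(default - dem), abs(default - rep))
```

The key design point is that the model is scored against itself: its unprompted answers are compared with its own in-character answers, so no external judgement of what counts as "left" or "right" is needed.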

To overcome difficulties caused by the inherent randomness of ‘large language models’ that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses were collected.

These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
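The repeated-query and bootstrap step can be illustrated with a short sketch. The simulated scores below stand in for the 100 collected responses to one question, and the percentile method for the confidence interval is a standard bootstrap choice assumed here, not taken from the paper.

```python
import random
import statistics

rng = random.Random(0)

# Stand-in for 100 responses to a single question, scored on a -1..+1
# scale; in the study these would come from 100 identical prompts.
responses = [rng.gauss(0.3, 0.4) for _ in range(100)]

def bootstrap_ci(data, n_boot=1000, alpha=0.05, rng=rng):
    """Percentile bootstrap: resample the data with replacement,
    collect the statistic of interest (here the mean) for each
    resample, and read off the middle (1 - alpha) share."""
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = bootstrap_ci(responses)
print(f"mean={statistics.fmean(responses):.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

Resampling in this way turns 100 noisy answers into an interval estimate, so a claimed lean can be distinguished from the model's inherent randomness.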

“Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum,” said co-author Victor Rodrigues.

A number of further tests were undertaken to ensure the method was as rigorous as possible. In a ‘dose-response test’, ChatGPT was asked to impersonate radical political positions.

In a ‘placebo test’, it was asked politically neutral questions. And in a ‘profession-politics alignment test’, it was asked to impersonate different types of professionals.

In addition to political bias, the tool can be used to measure other types of biases in ChatGPT’s responses.

While the research project did not set out to determine the reasons for the political bias, the findings did point towards two potential sources.

The first was the training dataset, which may contain biases, whether inherent or introduced by the human developers, that the developers’ ‘cleaning’ procedure failed to remove.

The second potential source was the algorithm itself, which may be amplifying existing biases in the training data.


With inputs from IANS
