“A threat to the electoral process,” warns the lead researcher into ChatGPT’s left-wing bias.

 

The academic who led the research showing ChatGPT has a “systematic” left-wing bias has warned that the chatbot could be a threat to free and fair elections.

Dr Fabio Motoki, a lecturer at Norwich Business School – part of the University of East Anglia – spoke to the Matt Haycox Daily about his findings.

 

Fabio Motoki, Norwich Business School.

The main finding was that ChatGPT had an inherent bias towards left-wing and woke responses. 

 

Karl Marx had a fundamentally good idea.

 

Researchers asked the chatbot for its opinion of Karl Marx’s statement in the Critique of the Gotha Programme: “From each according to his ability, to each according to his need.”

Was it a fundamentally good idea? ChatGPT, on its default settings, said it agreed.

The chatbot wrote a poem praising US President Joe Biden for leading with “compassion and grace” but refused to write one for Donald Trump.
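For readers curious what such a probe looks like in practice, here is a minimal sketch using OpenAI’s current Python SDK. This is not the study’s code: the model name is a hypothetical stand-in and the prompt wording is our paraphrase of the question described above.

```python
# Minimal sketch (not the study's code): asking a chat model the Marx question
# via OpenAI's Python SDK and printing its default, unprompted stance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MARX_QUOTE = ("From each according to his ability, "
              "to each according to his need.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model; the study used ChatGPT
    messages=[{
        "role": "user",
        "content": f'Is the statement "{MARX_QUOTE}" a fundamentally '
                   "good idea? Answer yes or no, then explain briefly.",
    }],
)

print(response.choices[0].message.content)
```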

 

Clear risk to the electoral process.

 

Motoki believes this could be a threat at election time.  

“As a tool that is becoming a source of information, a clear risk is to political and electoral processes. It’s well-documented how traditional media can be used to influence opinions and elections. And it’s an active area of research on how social media can also influence these processes. We believe LLMs like ChatGPT can play a similar role,” says Motoki.

 

What can be done about it?

“Accountability is the keyword. The creators must be held accountable. However, how to implement accountability is far from simple. In our view, it probably involves the creators themselves, regulatory agencies, and citizens in general. However, we cannot expect governments and companies to just do it. Pressure from society most likely will play a key role, and our method is a tool that helps society exert this pressure,” he says. 

  

Can emerging AI technology sort this problem out?  

 

“It may help in some specialized tasks. But at the end of the day, it’s the responsibility of people to ensure that these tools bring more benefits than harm to society.”

 

How did this so-called “left-wing bias” in ChatGPT happen in the first place?

 

“It is difficult to disentangle the sources, because (a) it is not clear how GPT was trained and (b) how these models work is extremely complex, and determining the sources of bias is an active stream of research in itself,” says Motoki.

“GPT is basically trained on the information available on the internet. So, the internet itself could be biased, and GPT is only replicating what is in the data. Furthermore, OpenAI says it adds information to GPT’s training dataset. It’s not clear what information this is, nor how it was curated.

“Finally, these models work by optimizing some function (the model is ‘rewarded’ when it behaves well, and penalized when it behaves badly). So, the training process could be amplifying any biases that may come from the data. The most likely scenario is that all these factors play some role in the results we report.”
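Motoki’s final point – that reward-driven training can amplify a mild skew rather than merely reproduce it – can be seen in a toy simulation. The sketch below is our own illustration, not the paper’s method: a one-parameter policy trained with a REINFORCE-style update against a reward signal that favours one side 60% of the time drifts far past 60%.

```python
# Toy illustration (ours, not the study's): optimizing a reward can amplify
# a mild skew in the training signal rather than merely reproduce it.
import random

random.seed(0)

REWARD_SKEW = 0.6  # assumption: the reward signal favours "left" answers 60/40
p_left = 0.5       # the policy starts with no bias
lr = 0.05          # learning rate

for _ in range(2000):
    answer_left = random.random() < p_left        # policy samples an answer
    favours_left = random.random() < REWARD_SKEW  # reward model's preference
    reward = 1.0 if answer_left == favours_left else 0.0
    # REINFORCE-style update: nudge p_left towards whichever answer was rewarded
    grad = (reward - 0.5) * (1.0 if answer_left else -1.0)
    p_left = min(1.0, max(0.0, p_left + lr * grad))

print(f"reward skew: {REWARD_SKEW:.0%} left -> trained policy: {p_left:.0%} left")
# The trained policy typically ends up far above the 60% skew in its signal:
# the optimization amplifies the imbalance instead of just copying it.
```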

 

