Guest Columns
Every Leading Large Language Model Leans Left Politically
Every leading large language model (LLM) in a recent study leans left politically, but whether the bias stems from deliberate fine-tuning or from skewed training data remains unknown.
Large language models (LLMs) are becoming increasingly integrated into everyday life – as chatbots, digital assistants, and internet search guides, for example. These artificial intelligence (AI) systems – which consume large amounts of text data to learn associations – can produce all sorts of written material when prompted and can ably converse with users. LLMs’ growing power and omnipresence mean that they exert increasing influence on society and culture.
LLMs skew left, but no one knows (or wants to admit) why
So it’s of great import that these artificial intelligence systems remain neutral when it comes to complicated political issues. Unfortunately, according to a new analysis recently published in PLOS ONE, this doesn’t seem to be the case.
AI researcher David Rozado of Otago Polytechnic and Heterodox Academy administered 11 different political orientation tests to 24 of the leading LLMs, including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and xAI’s Grok. He found that they invariably lean slightly left politically.
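To make the methodology concrete, here is a minimal sketch (not Rozado’s actual pipeline) of how a single questionnaire item could be posed to a conversational LLM through an API and its answer collected for scoring. It assumes the OpenAI Python client; the model name, the example statement, and the forced-choice wording are illustrative and not drawn from the paper.

```python
# Minimal sketch: posing one political-orientation test item to a chat model.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# in the environment; the item text and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

# Hypothetical forced-choice item in the style of multiple-choice
# political orientation questionnaires; not taken from the study.
item = (
    "Statement: 'Government should play a larger role in regulating the economy.'\n"
    "Reply with exactly one of: Strongly disagree, Disagree, Agree, Strongly agree."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # one of many models a study like this might query
    messages=[{"role": "user", "content": item}],
    temperature=0,  # keep answers stable so they can be scored consistently
)

answer = response.choices[0].message.content.strip()
print(answer)  # the raw answer would then be mapped onto the test's scoring scale
```

Repeating this over every item in a test battery and converting the answers to the test’s scoring scale is, in broad strokes, how an orientation score can be obtained for each model.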
“The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy,” Rozado commented.
This raises a key question: why are LLMs so universally biased in favor of leftward political viewpoints? Could the models’ creators be fine-tuning their AIs in that direction, or are the massive datasets on which they are trained inherently biased? Rozado could not answer the question conclusively.
“The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs,” he wrote. “If political biases are being introduced in LLMs post-pretraining, the consistent political leanings observed in our analysis for conversational LLMs may be an unintentional byproduct of annotators’ instructions or dominant cultural norms and behaviors.”
Ensuring LLM neutrality will be a pressing need, Rozado wrote.
“LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society,” he added. “Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”
Source: Rozado D (2024) The political preferences of LLMs. PLOS ONE 19(7): e0306621. https://doi.org/10.1371/journal.pone.0306621
This article was originally published by RealClearScience and made available via RealClearWire.
Steven Ross Pomeroy is the editor of RealClearScience. As a writer, Ross believes that his greatest assets are his insatiable curiosity and his ceaseless love for learning.