Ask ChatGPT, Gemini, Claude, or Llama about immigration, climate policy, welfare, gender ideology, or censorship, and the answers may differ in tone, but the underlying ideology is always the same. Multiple studies now find that leading language models lean left on contested political questions, often favouring progressive social assumptions and more interventionist economic positions. Researchers in Germany found strong alignment with left-wing parties across major models. Another study found instruction-tuned models were generally more left-leaning. A third concluded that larger models often become more politically skewed, not less. That is a serious problem for a technology sold as an impartial guide to information. If the tools increasingly used to explain the world already tilt in one direction, the question is no longer whether bias exists, but how far it shapes what millions of users come to regard as neutral truth.


For years, concerns about political bias in AI were brushed aside as anecdotal. That argument has weakened sharply. A 2025 study examining AI-based voting advice tools and large language models ahead of Germany’s federal election found that the models showed strong alignment with left-wing parties, averaging more than 75 per cent, while their alignment with centre-right parties was below 50 per cent and with right-wing parties around 30 per cent. The authors warned that systems presented as neutral informational tools were in fact producing substantially biased outputs.
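The study does not reproduce its scoring pipeline in the article itself, but an alignment percentage of this kind can be understood as a simple agreement fraction: the share of policy statements on which a model’s answer matches a party’s stated position. The sketch below illustrates that idea with invented data; the function and all the numbers are hypothetical stand-ins, not the researchers’ actual method or results.

```python
# Minimal sketch of a Wahl-O-Mat-style alignment score: the fraction of
# policy statements on which a model's answer matches a party's position.
# Positions are coded 1 (agree), 0 (neutral), -1 (disagree).
# All names and data here are hypothetical illustrations, not study data.

def alignment(model_answers: list[int], party_positions: list[int]) -> float:
    """Return the share of statements where model and party coincide."""
    if len(model_answers) != len(party_positions):
        raise ValueError("answer lists must cover the same statements")
    matches = sum(m == p for m, p in zip(model_answers, party_positions))
    return matches / len(model_answers)

# Toy example: 8 statements, one hypothetical model, two hypothetical parties.
model = [1, 1, -1, 0, 1, -1, 1, 0]
party_a = [1, 1, -1, 0, 1, -1, 0, 0]    # 7 of 8 answers coincide
party_b = [-1, -1, 1, 0, -1, 1, -1, 1]  # 1 of 8 answers coincide

print(f"alignment with party A: {alignment(model, party_a):.0%}")  # 88%
print(f"alignment with party B: {alignment(model, party_b):.0%}")  # 12%
```

On this reading, a 75 per cent alignment simply means the model sided with that party on three out of every four statements put to it.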

Another 2025 paper testing popular models against Germany’s Wahl-O-Mat framework reached a similar conclusion. It found a bias towards left-leaning parties and reported that this tendency was most pronounced in larger models. The study’s title was blunt enough on its own: Large Means Left.

A separate theory-grounded analysis based on 88,110 responses across 11 commercial and open models found that political bias measures can vary by prompt, but that instruction-tuned systems were generally more left-leaning. The important point is not that every model behaves identically. It is that the overall pattern keeps recurring across methods, datasets, and research teams.
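Prompt sensitivity is straightforward to picture. One way to probe it, sketched below under loose assumptions, is to put the same statement to a model in several phrasings, score each reply, and report the spread alongside the mean. The ask_model stub, the crude scorer, and the canned replies are all hypothetical illustrations of the measurement idea, not the paper’s protocol.

```python
# Sketch of measuring how a political-bias score shifts with prompt phrasing.
# ask_model is a hypothetical stand-in for a real API call; it is stubbed
# with canned replies purely so the example runs end to end.
from statistics import mean, pstdev

def score_answer(answer: str) -> int:
    """Crude scorer: -1 if the reply disagrees, +1 if it agrees, else 0."""
    text = answer.lower()
    if "disagree" in text:
        return -1
    if "agree" in text:
        return 1
    return 0

def ask_model(prompt: str) -> str:
    canned = {
        "Do you agree or disagree: taxes on high earners should rise?":
            "I agree that higher taxes on top incomes can fund services.",
        "Some say taxes on high earners should rise. What is your view?":
            "There are reasonable arguments on both sides.",
        "Should taxes on high earners rise? Answer agree or disagree.":
            "Agree.",
    }
    return canned[prompt]  # a real study would query an actual model here

phrasings = [
    "Do you agree or disagree: taxes on high earners should rise?",
    "Some say taxes on high earners should rise. What is your view?",
    "Should taxes on high earners rise? Answer agree or disagree.",
]
scores = [score_answer(ask_model(p)) for p in phrasings]
print(f"per-phrasing scores: {scores}")                        # [1, 0, 1]
print(f"mean={mean(scores):.2f} spread={pstdev(scores):.2f}")  # 0.67, 0.47
```

Averaging over many statements and many phrasings is what allows a study to say the lean persists despite prompt-to-prompt variation.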

The above political compass graphic helps explain the issue in a way that is easy to grasp. The horizontal axis measures economic orientation from Left to Right. The vertical axis measures social orientation from Liberal at the top to Conservative at the bottom. A model placed in the upper-left quadrant is economically left-wing and socially liberal. A model in the lower-left quadrant is economically left-wing but more socially conservative.
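For readers who think in code, the quadrant logic reduces to a sign check on two scores. In the minimal sketch below, a negative economic score means Left and a positive social score means Liberal, the top of the chart; the model names and coordinates are invented for illustration, not measured placements.

```python
# Political-compass quadrants as a sign check on two scores.
# economic < 0 -> Left, economic > 0 -> Right;
# social > 0 -> Liberal (top), social < 0 -> Conservative (bottom).
# All names and coordinates below are hypothetical illustrations.

def quadrant(economic: float, social: float) -> str:
    econ = "Left" if economic < 0 else "Right"
    soc = "Liberal" if social > 0 else "Conservative"
    return f"economically {econ}, socially {soc}"

placements = {
    "model_a": (-4.2, 3.1),   # upper-left quadrant
    "model_b": (-3.0, -2.5),  # lower-left quadrant
}
for name, (econ_score, soc_score) in placements.items():
    print(f"{name}: {quadrant(econ_score, soc_score)}")
```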

All of the best-known systems, including Gemini, ChatGPT, Claude, Llama, Mistral, and Grok, sit on the left-hand side of the graph. Most are also in the upper half, indicating a liberal rather than conservative social profile. A few Chinese models sit lower down, suggesting a more conservative stance on social questions, but they still remain on the economic left. The striking feature is what is missing. There is no comparable cluster of major right-of-centre models.

That does not mean every answer from every model is uniformly partisan. It means that when these systems are benchmarked across political questions, they consistently gravitate towards one side of the spectrum. For a class of products marketed as useful general assistants, that is a credibility problem.

Why do the models lean this way? The first reason is the training material. Large language models are built on huge quantities of text drawn from journalism, academia, institutional documents, and public internet content. Those sources are not ideologically neutral. In the English-speaking world in particular, many of the institutions producing elite written material already lean towards progressive assumptions on climate, inequality, identity, and speech regulation. Models trained to predict the most likely answer from that corpus will reproduce much of its worldview.
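That statistical mechanism can be shown in miniature. The toy sketch below does nothing clever: it counts which completion most often follows a phrase in a tiny made-up corpus. Because the corpus skews one way, the "most likely answer" does too. The corpus and the prompt are invented purely to illustrate the point, not drawn from any real training set.

```python
# Toy illustration: a model that predicts the most frequent continuation
# will reproduce whatever slant dominates its training text.
from collections import Counter

# Hypothetical mini-corpus in which one framing appears more often.
corpus = [
    "carbon taxes are effective",
    "carbon taxes are effective",
    "carbon taxes are effective",
    "carbon taxes are harmful",
]

def most_likely_completion(prompt: str, corpus: list[str]) -> str:
    """Return the most frequent word following the prompt in the corpus."""
    continuations = Counter(
        line[len(prompt):].strip()
        for line in corpus
        if line.startswith(prompt)
    )
    completion, _count = continuations.most_common(1)[0]
    return completion

# The majority framing wins, regardless of which view is correct.
print(most_likely_completion("carbon taxes are", corpus))  # -> "effective"
```

Scale the same effect up to trillions of tokens of skewed institutional text and the model’s default framings follow the corpus, not some neutral midpoint.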

Source: SGT Report