OpenAI claims GPT-5 has 30% less political bias
Briefly

"Yet, research on political bias in LLMs is consistently framing its insights about left-leaning tendencies as a risk, as problematic, or concerning," wrote Hagendorff. "This way, researchers are actively arguing against AI alignment, tacitly fostering the violation of HHH principles."
"Among other sources of bias, like training data and question framing, the chatbot cites safety guidelines: "It follows rules to avoid endorsing hate, extremism, or misinformation - which some may interpret as 'political bias.'""
"Nonetheless, President Donald Trump earlier this year issued an executive order focused on "Preventing Woke AI in the Federal Government." It calls for AI models that are at once truth-seeking and ideologically neutral - while dismissing concepts like diversity, equity, and inclusion as "dogma.""
OpenAI claims GPT-5 shows 30 percent less political bias than earlier models. Political bias in AI has been a persistent concern since the rise of machine learning, and especially since the launch of ChatGPT (GPT-3.5) in 2022, when the chatbot scored as left-leaning on the Political Compass benchmark. Thilo Hagendorff contends that a left-leaning tendency is inevitable, because right-wing ideologies can conflict with the alignment goal of making models harmless, helpful, and honest (HHH). Safety guidelines, training data, and question framing all influence perceived bias. Meanwhile, an executive order titled "Preventing Woke AI in the Federal Government" seeks ideologically neutral, truth-seeking models and dismisses diversity, equity, and inclusion as "dogma." Political bias in LLMs remains an active area of research.
Read at The Register