The reality: we’re not idiots, just human
The short answers: no one seems to know how many people use AI to generate demagoguery, and Pxy didn’t find any on-point data. But demagogue propagandists certainly could use AI for it, because AI does not “decide” to ignore facts and reason in any human sense. So when AI is asked to generate partisan demagoguery, it can and does produce lies, slander, and crackpot claims, because it is optimizing for plausible language, not for truth.
Again, AI is not sentient or conscious in any recognized way. It is software programmed to compute the statistical probability of what word follows another. Research on AI‑generated propaganda finds that models can produce convincing falsehoods and narratives. Those are often more detailed, emotionally loaded, and rhetorically polished than human disinfo, because the models “fill in” with whatever sounds coherent, even if it is fabricated. AI doesn’t ignore facts. It simply has no built‑in preference for facts unless it has been trained or instructed to prioritize facts and reason over a fluent, toxic MAGA screed.
As Pxy puts it: “The core architecture of large language models is optimized for fluent next‑word prediction, not for epistemic hygiene. Truth is an emergent property, not the primary objective.”
How nice, epistemic hygiene. Wonderful, AI has epistemic herpes!! /s
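To make the “next‑word prediction” point concrete, here is a minimal toy sketch, not any real model’s code: the word pairs and the probability numbers in it are made up for illustration. The only thing the loop asks is which word is statistically likely to come next; nothing in it ever asks whether the finished sentence is true.

```python
import random

# Toy illustration only, not a real language model: the word pairs and the
# probability numbers below are invented for this sketch. A trained model
# learns scores like these from text; it does not learn whether they are true.
next_word_probs = {
    ("the", "election"): {"was": 1.0},
    ("election", "was"): {"stolen": 0.40, "certified": 0.35, "close": 0.25},
}

def pick_next_word(prev_two):
    """Pick the next word purely by how plausible it is after the last two words."""
    candidates = next_word_probs[prev_two]
    words, weights = zip(*candidates.items())
    # The only question asked here is "how likely is this word to follow?"
    # There is no step that asks "is the resulting claim accurate?"
    return random.choices(words, weights=weights, k=1)[0]

text = ["the", "election"]
for _ in range(2):
    text.append(pick_next_word((text[-2], text[-1])))

# Prints e.g. "the election was stolen" or "the election was certified",
# depending only on the plausibility scores, never on the facts.
print(" ".join(text))
```

A real model does this over a vast vocabulary with learned probabilities, but the same blind spot applies: plausibility in, plausibility out.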
Why is AI allowed to generate demagoguery at all? It could be built to strongly avoid lies, slander, and crackpottery by imposing fact- and logic-checking rules. But that isn’t done, because facts and logic are bitterly disputed by bad people operating in what I call bad faith and malice. The bad people call fact-checking and anti-crackpot rules tyranny and censorship. A “truth basis” for AI would be attacked as biased or illegitimate by people and interests that are disadvantaged by facts, robust truths, and sound reasoning. Making a consumer AI that relentlessly privileges evidence over personal vibes or tribes, especially in polarized American politics, cuts directly against the core commercial and political incentives that put AI on the market in the first place.[1]
We are what we are, not anything more
But why are things like that? Pxy gave the expected answer. Paraphrasing, it’s the human condition, stupid!
In politics, human ignorance and unconscious biases such as motivated reasoning and flawed logic can be major factors, because that is how the human species actually evolved and works. Our democracy and institutions sit on top of messy human psychology and bias, not outside of it. Human political reasoning is mostly intuitive, emotional, biased, argumentative, and tribal by default. Given that, a good way to keep politics from degenerating into lies- and slander-larded Trump‑style filth is to build and enforce AI rules that support facts, robust truths, and sound reasoning.
Political cognition research shows that most people almost always form opinions based on emotion, biases, and identity first. After that, they apply conscious reasoning to rationalize those positions, not to question them. Confronting partisan believers with inconvenient facts or reasoning often just makes them more skilled at defending their false beliefs and flawed reasoning. At least in politics, most humans are mostly partisan arguers, not reasoned thinkers. Biases, loyalties, and identity are the human traits that bad faith or malicious authoritarian elites and demagogues exploit. Those factors drive in‑group / out‑group dynamics, and the normal human intolerance of uncertainty makes people more vulnerable to malicious demagogic narratives like Trumpism. American authoritarian demagogues are using AI to better exploit the human traits that lead to their wealth and power.
Footnote:
1. Any strong requirement that AI downgrade or refuse certain claims, e.g., “the 2020 election was stolen,” is instantly framed and smeared as political suppression by bad faith actors who need to rely on that lie. That makes robust guardrails for facts, truths, and sound reasoning politically and economically costly. Tech companies have economic incentives to avoid becoming explicit arbiters of political truth because they fear regulatory retaliation, user backlash, and loss of access in key markets, so they default to vague “community standards” rather than hard factual baselines.