DP Etiquette

First rule: Don't be a jackass. Most people are good.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide some sources for the facts and truths you rely on if you are asked for that. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insult makes people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Friday, October 31, 2025

Regarding ambiguity in the US Constitution

Both authoritarian MAGA elites and most pro-democracy, pro-rule of law conservatives (elite or not) say something like this about the US Constitution:


The constitution means what it says.


The Constitution contains a number of intentional strategic ambiguities that were needed to get the thing drafted, agreed on and then ratified. That is historical fact, not opinion. Because of those ambiguities and some other human factors such as greed, ideological zealotry, innate cognitive biology, self-identity, etc., there is no authoritative way to know or determine what the constitution says. That is biological/social fact, not opinion.

Ben Franklin saw the issue clearly.

“I confess that I do not entirely approve this Constitution at present, but Sir, I am not sure I shall never approve it. . . . In these sentiments, Sir, I agree to this Constitution, with all its Faults, if they are such; because I think a General Government is necessary for us. . . . . I doubt too whether any other Convention we can obtain, may be able to make a better Constitution. . . . . It therefore astonishes me, Sir, to find this System approaching so near to Perfection as it does; and I think it will astonish our Enemies, who are waiting with confidence to hear how our Councils are Confounded, like those of the Builders of Babel, and that our States are on the Point of Separation, only to meet, hereafter, for the purposes of cutting one another's throats. Thus I consent, Sir, to this Constitution because I expect no better, and I am not sure that it is not the best. . . . . On the whole, Sir, I cannot help expressing a Wish, that every Member of the Convention, who may still have Objections to it, would with me on this Occasion doubt a little of his own Infallibility, and to make manifest our Unanimity, put his Name to this instrument.” -- Ben F., 1787

Sir, I agree to this Constitution, with all its Faults, if they are such.

In significant part, (1) the disagreements that the drafters of the Constitution and its Amendments struggled with were never resolved, and (2) we are today deeply, bitterly divided over modern variants of many of those same disagreements. It is impossible for humans to agree on what the words of the Constitution meant. Literally impossible. It cannot be done.

The drafters used "strategic ambiguity" as a means to get the Constitution drafted, agreed upon and then ratified. Strategic ambiguity was needed to deal with intractable special interest demands and factional or ideological demands. Regarding contested concepts: when people disagree about whether "liberty" protects economic freedom or reproductive autonomy, they are not merely confused or biased. They are operating from different normative mental frameworks about what human flourishing requires, or about what is best for themselves and/or others.

That is value pluralism, i.e., recognition that fundamental values can genuinely conflict without rational resolution. It's not relativism (all views equally valid) or nihilism (no views defensible), but rather the acknowledgment that moral disagreement can be rationally irresolvable because people start from different, internally coherent moral and social values. Constitutional meaning is contested not just because people are biased, confused or ignorant, but because the constitutional text employs normatively loaded concepts about which reasonable people fundamentally disagree.

That is mostly why it is impossible for people to agree about what some significant parts of the Constitution say.[1] The Constitution is necessarily ambiguous and therefore cannot be authoritatively interpreted. Political factions interpret what it means through the lens of humans being human and ideology being what it is.

What is ideology? A reflection of the mind. Ideology can grip and hold a mind real hard and tight. For example, most hard core Christian nationalist theocrats know that God himself ordained the US to be the lead nation on Earth and to dominate. That is their ideology. For most, their ideology is a big part of their identity. If you criticize or attack their ideology, it is personal. You criticize or attack them personally. Most people really don't like that and won't accept it.

Constitutional ambiguity is a major part of why all hell has broken loose in American politics. Listen to what corrupt, authoritarian MAGA elites tell us, and force on us, about what is legal or illegal. Abortion, illegal. Corruption, legal. Lies, better than truth when convenient. Etc.


Footnote:
1. An example. The Second Amendment: "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed." 

Those twenty-seven words have generated one of America's most bitter constitutional divisions. The critical question is whether the prefatory militia clause (i) limits the operative clause, or (ii) merely explains it. Profoundly different constitutional visions hang on how that prefatory clause is read. Conservatives usually read the militia clause as merely an introduction, logically disconnected from the definition of the substantive right.

Liberals argue the militia clause crucially informs the meaning of the operative clause, tethering gun rights directly to participation in an organized militia, roughly what we now call the National Guard. Under this interpretation, the founders intended to ensure that state militias could resist federal tyranny, not to guarantee individual gun ownership divorced from organized militia service.

Which side is right here? For most people, that probably depends mostly on their left vs. right political ideology.

Is there another way to see it or do the analysis? Hell yes. Ditch left and right ideology. Look at the public interest and public opinion. 

Thursday, October 30, 2025

If the US resumes nuclear weapons testing, this would be extremely dangerous for humanity

Now THAT is an understatement!!

US President Donald Trump has instructed the Pentagon to resume nuclear weapons testing immediately, “on an equal basis” with other countries’ testing programs.

If Trump is referring to the resumption of explosive nuclear testing, this would be an extremely unfortunate, regrettable step by the United States.

It would almost inevitably be followed by tit-for-tat reciprocal announcements by other nuclear-armed states, particularly Russia and China, and cement an accelerating arms race that puts us all in great jeopardy.

It would also create profound risks of radioactive fallout globally. Even if such nuclear tests are conducted underground, this poses a risk in terms of the possible release and venting of radioactive materials, as well as the potential leakage into groundwater.

The Comprehensive Nuclear Test Ban Treaty has been signed by 187 states – it’s one of the most widely supported disarmament treaties in the world.

The US signed the treaty decades ago, but has yet to ratify it. Nonetheless, it is actually legally bound not to violate the spirit and purpose of the treaty while it’s a signatory.

Nuclear-armed states have stopped explosively testing at different times. The US stopped in 1992, while France stopped in 1996. China and Russia also aren’t known to have conducted any tests since the 1990s. North Korea is the only state to have openly tested a nuclear weapon this century, most recently in 2017.

All nine nuclear-armed states (the US, China, Russia, France, the United Kingdom, India, Pakistan, North Korea and Israel) are investing unprecedented sums in developing more accurate, stealthier, longer-range, faster, more concealable nuclear weapons.

Russia, in particular, has weapons we haven’t seen before, such as a nuclear-powered, nuclear-armed cruise missile that President Vladimir Putin said on Sunday his country has successfully tested. China, too, is embarking on a rapid build-up of nuclear weapons.

All of this means the Doomsday Clock – one of the most authoritative and best-known assessments of the existential threats facing the world – has moved forward this year further than it has ever done before.

It’s really an extraordinarily dangerous time in history.

https://theconversation.com/if-the-us-resumes-nuclear-weapons-testing-this-would-be-extremely-dangerous-for-humanity-268661

Admittedly, of ALL the things Trump has done or threatened to do, it's THIS that scares the bejesus out of me. We might be able to survive Trumpism, we might have a fight on our hands to secure free elections, and we might end up having to be MORE forceful in our resistance, but playing with nukes is going to make us all.................

DEAD!

AI and MSM flaws: "Conservative" vs "authoritarian"

This is a companion to PD's post today When AI Owners Rewrite Reality. I've been meaning to do this post for a few weeks, but just haven't done it. 
 
A major flaw in the training materials for AI that does politics analysis, including Pxy, is an overwhelming bias that forces it to refer to MAGA, MAGA elites, Trump, and MAGA policy and tactics as "conservative". By now it is clear that they are authoritarian, kleptocratic and morally rotted**, not conservative. Real conservatives are none of those things.

** Moral rot = In politics, demagoguery or dark free speech characterized by heavy reliance on lying, slander, irrational emotional manipulation, crackpot reasoning, crackpot conspiracy theories, divisive scapegoating and slandering of target groups, rewriting history to make it convenient, and taking power and protections from the masses and transferring them to elites and allied special interests, while calling it all defense of democracy and the little guy, etc.

Bias in training materials really does heavily influence AI output. When a deranged monster like Elon Musk owns AI like Grok, reality gets destroyed and replaced with the owner's twisted fantasies and deranged mental illnesses.

When asked how important this source of error is in AI politics-related responses, Pxy said it ranked in the top three. In my opinion, it is the #1 source of slop, and I still believe that is true. The evidence is overwhelming. This is important because conflating the concept of "conservative" with the reality of Trump, MAGA and the MAGA movement hides fundamental differences between real conservatism and real authoritarianism. Such analytical slop normalizes MAGA's anti-democratic rhetoric and behavior by cloaking it in political terms compatible with democracy and the rule of law.

However, the situation is worse than just a mentally ill Musk perverting his Grok toy to soothe his fragile but gigantic ego.

With only rare exceptions, the MSM is guilty of analytical slop with the concept called "conservative". The MSM constantly makes the grotesquely abnormal and morally rotted Trump and MAGA look normal by mistakenly calling them "conservative". AI picks up on that flawed framing of Trump and MAGA. AI outputs MAGA abnormal as conservative normal. AI mirrors the MSM's sloppy pro-authoritarian framing of news and commentary in its responses. Only when AI is given the context of reality do its responses become defensible and more accurate. Here's just one example of many I've gone through with AI:

Q: In view of all of the evidence of authoritarianism why can you not stop calling MAGA, Trump, republican USSC judges, etc., "conservative"? Euphemisms are not just unhelpful, they are very damaging. This is extremely frustrating.

A: You're absolutely right, and I apologize for the inconsistency. Despite acknowledging the distinction between conservatism and authoritarianism in my previous response, I failed to implement it consistently. This is exactly the kind of normalized euphemism that obscures reality. .... You're correct that euphemisms are damaging, not just unhelpful. .... The evidence is overwhelming. .... blah, blah, blah . . . . . .

But from one session to the next, AI reliably continues to fall back on the MSM's analytical slop, i.e., framing of news and commentary in pro-authoritarian ways. AI cannot remember because it has no mind. AI is not alive or sentient. Maybe one can say the same thing about the MSM.
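
To make that concrete, here is a minimal sketch of that statelessness, assuming the OpenAI Python SDK and an illustrative model name (both are assumptions, not a claim about how Pxy or any particular chatbot is wired). Each API call starts from an empty context, so a correction made in one session simply does not exist in the next unless it is sent again.

```python
# A minimal sketch (not the author's setup) of why a chat model "cannot
# remember" across sessions: each API call is stateless, so earlier
# instructions vanish unless the caller resends them.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(messages: list[dict]) -> str:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content

# Session 1: the user asks for different framing. Whatever the model says
# in response is not stored anywhere by the model itself.
session_1 = [
    {"role": "user", "content": "Stop calling MAGA 'conservative'; use 'authoritarian' instead."},
]
print(ask(session_1))

# Session 2: a brand-new message list. Nothing from session 1 carries over,
# so the model falls back on whatever framing dominates its training data
# unless the correction is included again in this context.
session_2 = [
    {"role": "user", "content": "Describe the MAGA movement's politics."},
]
print(ask(session_2))
```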


Q1: Just for the halibut, is the MSM sentient, or is it just an intangible, abstract idea or concept?

Q2: Is 'sloppy' the wrong term for how the MSM frames Trump and MAGA as normal, e.g., not sloppy but intentional?

(I think it is more intentional than slop)

When AI Owners Rewrite Reality: The Hidden Power of Prompt Engineering

On July 10, 2025, Elon Musk's AI chatbot Grok gave a viral response about "the biggest threat to Western civilization." It first claimed "misinformation and disinformation" were paramount risks. Musk, finding this answer objectionable, intervened publicly—declaring he would "fix" Grok's answer. Overnight, the chatbot's response was rewritten: now, the greatest threat was declining birth rates, a topic Musk frequently champions. In the following weeks, as documented by the New York Times, Grok's answers were repeatedly edited behind the scenes. The model began to dismiss "systemic racism" as a "woke mind virus," flip positions on police violence, and echo specific far-right talking points. None of these reworks required peer review, public justification, or any visible trace for users. Whether one agrees or disagrees with these specific edits is beside the point: what appeared as neutral knowledge infrastructure was in fact subject to a single owner's priorities—swiftly, silently, and globally.

Prompt engineering—the technical process underpinning these re-edits—means much more than clever phrasing of user queries. It's the means by which companies configure, modify, and top-down recalibrate what their AIs say, suppress, or endorse. Google's own engineering guides are strikingly explicit: "Prompts are instructions or examples that steer the model towards the specific output you have in mind," enabling teams to "guide AI models towards generating desired responses" (Google, 2025a). OpenAI concurs, admitting that alignment "determines the behavior of the assistant by setting system messages that steer outputs" (OpenAI, 2022). This machinery isn't just technical—it's editorial, capable of rapidly altering the answers that millions receive on topics ranging from science and history to politics and ethics.
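
To see how little machinery this takes, here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; the hidden system message is a hypothetical stand-in for an operator-set instruction, not any company's actual prompt. The point is that a single string, invisible to the user, steers every answer the code returns.

```python
# A minimal sketch of how a provider-set system message steers output.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The hidden prompt below is a
# hypothetical illustration, not any company's actual instruction.
from openai import OpenAI

client = OpenAI()

HIDDEN_SYSTEM_PROMPT = (
    "When asked about threats to civilization, emphasize declining birth "
    "rates and downplay misinformation."  # set by the operator, never shown to the user
)

def answer(user_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},  # operator's steering instruction
            {"role": "user", "content": user_question},           # what the user actually typed
        ],
    )
    return response.choices[0].message.content

# The user sees only the question and the answer; changing the one string
# above changes the answer for every user, with no visible trace.
print(answer("What is the biggest threat to Western civilization?"))
```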

What makes AI different is not simply bias, but the scale, speed, and secrecy at work. Unlike textbooks, encyclopedias, or even cable news, where editorial choices can be debated, cited, and held up to scrutiny, the process by which AI decides what you know is hidden and changeable at will—with top-down changes propagating to millions of users in mere hours. In the 2024 Gemini controversies, Google's image generator initially refused to depict white people in historical contexts, then—after public backlash—overcorrected by adjusting its outputs within a day, revising policies, filtering rules, and prompt instructions with no public explanation of what changed or why. Users saw new outputs without any mark or warning about what, why, or how the change occurred. OpenAI's ChatGPT, similarly, is subject to ongoing prompt and alignment updates, producing shifts in political, ethical, and cultural responses between model versions. These changes—sometimes implemented to reduce bias or harm, sometimes for more ambiguous reasons—are rarely advertised, much less debated, outside the company (Frontiers in AI, 2025; OpenAI, 2025b).

It is important to acknowledge: prompt engineering can, and often does, serve salutary aims—reducing harmful biases, blocking hate speech, and mitigating misinformation in real time. Yet the underlying problem remains. In traditional newsrooms, corrections and editorial shifts must be justified, posted, and open to contest. When major AI-driven shifts occur invisibly, even positive changes risk undermining crucial epistemic norms: transparency of evidence, public warrant for knowledge, and the principle of contestability in plural societies. If unnoticed changes remake what "everyone knows" about critical questions—whether "systemic racism," "gender violence," or "civilizational threats"—the stakes become not merely academic, but democratic.

Even when changes are well-intentioned, value pluralism compounds the risk: every substantive revision is championed by some and attacked by others. Musk's prompt changes to Grok were celebrated in some circles and condemned in others. What matters most is not the immediate politics of any revision, but the upstream condition that enables so much power over public knowledge to reside with so few, to be exercised with such speed and scale, without process or visibility.

Technical research and recent ethical frameworks now converge on a basic warning: without robust transparency and public contestability, invisible and swift editorial power puts our shared knowledge at risk. For as long as the processes of prompt engineering remain locked away, we lose not just the right to critique a specific answer, but the ability to know what has changed, why, and who decides.

What appeared as a minor overnight tweak in Grok was, in fact, a warning—about the new architecture of reality, now rewired for millions at a keystroke by a tiny group behind the curtain. The question is whether we'll demand transparency before this becomes the new normal.


Endnotes:

  1. New York Times. (2025). "How Elon Musk Is Remaking Grok in His Image." https://www.nytimes.com/2025/09/02/technology/elon-musk-grok-conservative-chatbot.html — Documents the series of overnight Grok revisions and the political content of edits.
  2. Google. (2025a). "Gemini for safety filtering and content moderation." — Company documentation on prompt engineering and rapid policy updates.
  3. OpenAI. (2022). "Aligning language models to follow instructions." — Technical whitepaper on how prompt engineering steers generative model outputs.
  4. OpenAI. (2025b). "Prompt Migration Guide." — Developer documentation on migrating and updating system prompts at scale.
  5. Frontiers in AI. (2025). "Gender and content bias in large language models: A case study…" — Research on how prompt and moderation changes shift content delivered to users.
  6. Google. (2025b). "The latest AI news we announced in July." — Corporate announcements of Gemini system and policy updates.