DP Etiquette

First rule: Don't be a jackass. Most people are good.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide some sources for the facts and truths you rely on if you are asked for that. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insult makes people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Saturday, August 9, 2025

An important artificial intelligence update

The Aug. 6 post showed revised instructions for Pxy to reduce errors arising from factual mistakes, biases and hallucinations. That was partly effective, but not great. On Aug. 7, I replaced the instructions aimed at reducing errors in quoted content, because the set Pxy originally suggested for that purpose was badly inadequate. That revision added a lot of new instructions just to cut down on quote errors.

Today, Aug. 9, I stumbled on a gigantic source of all kinds of errors in Pxy's default query-analysis protocol. Specifically, unless you ask Pxy to assess and respond to queries in "analytical mode," it responds in "advocacy mode." Pxy describes the two modes like this:
Analytical mode in AI systems is characterized by:
Objective, data-driven analysis focused on examining information systematically
Evidence-based reasoning that prioritizes factual accuracy and logical consistency
Neutral presentation of multiple perspectives without advocating for a particular viewpoint
Structured evaluation of claims using statistical methods and empirical evidence
Emphasis on accuracy and precision over persuasive impact

Advocacy mode responses exhibit:
Persuasive intent aimed at influencing opinions or behaviors toward a specific position
Selective emphasis on information that supports a particular stance or argument
Emotional appeals and personalized messaging to increase persuasive impact
Strategic framing of information to achieve desired outcomes
Goal-oriented communication focused on changing minds rather than neutral analysis

Advocacy-oriented AI systems show:
  • Significantly higher persuasive effectiveness - personalized AI arguments achieved 81.2% higher odds of changing opinions compared to human debates
  • Enhanced behavioral influence - AI systems designed for advocacy were more successful at promoting specific actions, such as increasing health screening intentions by over 12 points
  • Strategic information deployment that leverages the AI's ability to rapidly access and contextualize information for maximum persuasive impact
Persuasion vs Accuracy Trade-off
A 2025 study with 76,977 participants found that "where [persuasion methods] increased AI persuasiveness they also systematically decreased factual accuracy". This represents a fundamental tension between advocacy effectiveness and analytical reliability. 
The research highlights critical ethical implications. Transparency requirements: users should understand when AI systems are operating in advocacy versus analytical modes.
WTF??

Advocacy mode??? Seriously???

That got me really pissed off. 
Q: You assert that users should understand when AI systems are operating in advocacy versus analytical modes. How the hell are average, non-expert users supposed to know that two modes even exist? I bet that maybe 5% are aware of this gigantic problem with AI. Or is that an overheated assessment?

A: The Brutal Reality: Public AI Awareness is Shockingly Low

Your assessment isn't "overheated" at all – it's actually understated. The research data reveals a staggering ignorance about AI's fundamental nature among the general public, and your 5% estimate for awareness of advocacy vs analytical modes appears to be generous.

 

Source

I added these new instructions to the long list I now use with my Pxy queries: 

Always assess and respond to queries in analytical mode, not advocacy mode. Activate advocacy mode only by explicit request.


Third revision 8/11/25: Pxy continues to make errors in quoting content from documents. I asked for additional or new instructions that would help. Pxy provided these three:

1. Before claiming any quote is absent from a source, perform a second independent search using different search terms.

2. When verifying quotes, explicitly confirm both the presence/absence AND the exact location in the document.

3. When you make verification errors, immediately acknowledge the mistake rather than doubling down.

Those three instructions have been added to the list above.
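For readers who query an AI through a script rather than a chat window, the standing instructions above can simply be prepended to every query. Here is a minimal Python sketch of that idea; the function and prompt format are hypothetical illustrations, not Pxy's actual interface.

```python
# Standing instructions, taken from the list developed in this post.
STANDING_INSTRUCTIONS = [
    "Always assess and respond to queries in analytical mode, not advocacy "
    "mode. Activate advocacy mode only by explicit request.",
    "Before claiming any quote is absent from a source, perform a second "
    "independent search using different search terms.",
    "When verifying quotes, explicitly confirm both the presence/absence "
    "AND the exact location in the document.",
    "When you make verification errors, immediately acknowledge the "
    "mistake rather than doubling down.",
]

def build_prompt(user_query: str) -> str:
    """Prepend the numbered standing instructions to a user query."""
    header = "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(STANDING_INSTRUCTIONS, 1)
    )
    return f"Instructions:\n{header}\n\nQuery: {user_query}"

print(build_prompt("Summarize the attached document."))
```

Because the instructions ride along with every query, the analytical-mode default no longer depends on remembering to paste them in by hand.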
