DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with; engage with facts, logic and beliefs. Out of respect for others, provide sources for the facts and truths you rely on when asked. If emotion is getting out of hand, get it back in hand. To avoid dehumanizing people, don't call individuals or whole groups disrespectful names, e.g., stupid, dumb or liar; insults make people angry and defensive, and are counterproductive to rational discussion. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Sunday, June 4, 2023

A hypothesis about AI subverting elections: Clogger vs. Dogger

The Conversation published an article by Harvard professors Archon Fung (Professor of Citizenship and Self-Government, Harvard Kennedy School) and Lawrence Lessig (Professor of Law and Leadership, Harvard Law School) about a hypothetical artificial intelligence (AI)-driven political campaign:
How AI could take over elections – and undermine democracy

Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly more likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better in order to learn how to accomplish an objective. Machines that can play Go, Chess and many video games better than any human have used reinforcement learning.

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.

The messages that Clogger sends may or may not be political in content. The machine’s only goal is to maximize vote share, and it would likely devise strategies for achieving this goal that no human campaigner would have thought of.

One possibility is sending likely opponent voters information about nonpolitical passions that they have in sports or entertainment to bury the political messaging they receive. Another possibility is sending off-putting messages – for example incontinence advertisements – timed to coincide with opponents’ messaging. And another is manipulating voters’ social media friend groups to give the sense that their social circles support its candidate.

Clogger has no regard for truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine because its objective is to change your vote, not to provide accurate information.

If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought that these machines were effective, the presidential contest might well come down to Clogger vs. Dogger, and the winner would be the client of the more effective machine.

Political scientists and pundits would have much to say about why one or the other AI prevailed, but likely no one would really know. The president will have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.

In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all of the ordinary activities of democracy – the speeches, the ads, the messages, the voting and the counting of votes – will have occurred.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party ideas may have had little to do with why people voted the way that they did – Clogger and Dogger don’t care about policy views – the president’s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power.

It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents could hardly be expected to resist by disarming unilaterally.

The possibility of a system like Clogger shows that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people's many buttons.
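
The "reinforcement learning" the authors describe in the excerpt is, mechanically, just a trial-and-error feedback loop: try an action, observe a reward, shift toward what worked. As a rough illustration (mine, not the authors'; the function name, the abstract actions and the hidden success rates are all invented for the example), here is a toy epsilon-greedy bandit in Python that learns which of several options pays off best purely from feedback:

import random

# Toy epsilon-greedy bandit: repeatedly pick one of several actions,
# observe a reward, and drift toward whatever has worked best so far.
# Everything here is illustrative; the "actions" are abstract and the
# hidden success rates are invented for the example.
def run_bandit(true_rates, steps=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_rates)    # times each action was tried
    values = [0.0] * len(true_rates)  # running reward estimate per action

    for _ in range(steps):
        # Explore at random occasionally; otherwise exploit the best estimate.
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rates))
        else:
            action = max(range(len(true_rates)), key=lambda i: values[i])

        # Simulated feedback: success (1.0) with the action's hidden rate.
        reward = 1.0 if rng.random() < true_rates[action] else 0.0

        # Incremental average: nudge the estimate toward the observed reward.
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    return values

# Three abstract actions with hidden success rates of 2%, 5% and 11%;
# the loop learns which one pays off without ever being told the rates.
print(run_bandit([0.02, 0.05, 0.11]))

The point of the sketch is the property the article turns on: the loop never needs to know why an option works, or whether it is true or decent; it simply converges on whatever the feedback rewards.
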
This raises interesting questions. Will politicians and their campaigns embrace AI-driven campaigning? Hell yes. The temptation to use AI would be irresistible, not merely almost irresistible. That seems obvious.

The authors seem to imply that AI would be untethered from facts and truth. That's probably true for forces that are comfortable with deceit, lies, slander, etc. But I imagine that respect for truth can be built into honest AI, if that's what a campaign wants.

But then one can see a possible split. The AI of the side more accepting of and reliant on dark free speech (DFS) would have few or no limits on its rhetoric. DFS AI could claim that the 2020 election was stolen, and that if Trump loses in 2024, that election will have been rigged too. Can honest AI effectively counteract DFS AI?

Or would the temptation of DFS AI unleashed by one side simply overwhelm moral qualms and force the other side to reply in kind with its own DFS counter-AI? Can honest AI be just as effective as DFS AI in politics? These are fascinating questions. I wonder if any of this has been tested by AI researchers.

Moral qualms aside, we are going to see AI tried out in campaigns, honest or not. Another question: will we even know what is AI-generated and what isn't? Probably not, especially when it comes from the dishonest side.

Another question is this: How different is a DFS-driven AI campaign from the kind of DFS campaign that people like Trump routinely rely on? Trump and radical right propagandists already test rhetoric to see what works and what doesn't. If a certain lie or slander is most effective, that is what gets used. Truth has nothing to do with those tests; public response is what matters.

Maybe the ultimate questions are (i) how much better than humans can AI learn to win, and (ii) can honest AI match DFS AI, or are we doomed to right DFS AI vs. left DFS AI warfare? Or are we just doomed to a spin-driven dictatorship?

We live in interesting times.


Is it human or is it AI?
Or does it even matter?
