


DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide sources for the facts and truths you rely on if asked. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion; insults make people angry and defensive. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Monday, August 12, 2019

Essentially Contested Concepts: What is Hate?



Essentially contested concepts involve widespread agreement on a concept (e.g., hate, fairness, constitutional, legal, moral, good, evil, etc.), but not on its best realization or definition. They are concepts whose proper definition or use inevitably generates endless dispute among their users. These disputes cannot be settled by appeal to empirical evidence, linguistic usage, or the canons of logic alone. The disputes are unresolvable, but unfortunately quite common in politics. They cannot be resolved by anything other than compromise, an imperfect resolution, because the definitions are heavily influenced by personal cognitive and social factors such as morals, political ideology, and social- and self-identity.

A Washington Post article discusses whether the hate group list that the Southern Poverty Law Center (SPLC) has compiled is fair, dangerous or otherwise detrimental. The article opens with a member of the Family Research Council (FRC) pointing out the bullet holes in the group's lobby. The FRC, a conservative Christian anti-abortion and anti-same-sex marriage advocacy and political lobbying group, is listed by the SPLC as a hate group. A deranged man with a gun came to kill people at the FRC because the FRC was on the SPLC hate list.

Is it fair or safe to identify groups like the FRC with the same language, hate group, that is applied to the Ku Klux Klan? What is the definition of hate in the context of politics?

The WaPo writes: “‘Labeling people hate groups is an effort to hold them accountable for their rhetoric and the ideas they are pushing. Obviously the hate label is a blunt one,’ Cohen concedes when I ask whether advocates like the FRC, or proponents of less immigration like the Federation for American Immigration Reform (FAIR), or conservative legal stalwarts like the Alliance Defending Freedom (ADF), really have so much in common with neo-Nazis and the Klan that they belong in the same bucket of shame. ‘It’s one of the things that gives it power, and it’s one of the things that can make it controversial. Someone might say, ‘Oh, it’s without nuance.’ … But we’ve always thought that hate in the mainstream is much more dangerous than hate outside of it. The fact that a group like the FRC or a group like FAIR can have congressional allies and can testify before congressional committees, the fact that a group like ADF can get in front of the Supreme Court — to me that makes them more dangerous, not less so. … It’s the hate in the business suit that is a greater danger to our country than the hate in a Klan robe.’”

Context: The FRC operates ‘crisis pregnancy centers’, which are set up in poor neighborhoods. From the outside, they appear to be medical centers that provide professional medical access to abortion services. These centers have been called unethical for deceiving pregnant women with pressure tactics that range from lying about abortion options, e.g., falsely telling a woman that abortion is illegal or unavailable, to exerting intense psychological pressure to prevent a woman from having an abortion. These centers often seek to delay a woman's decision long enough that the law forces her to give birth. The people running crisis pregnancy centers typically have no formal medical training at all; they are Christian activists in white lab coats trying to prevent abortions by any means possible short of illegal actions such as threats of physical violence.

In view of the lies, deceit and misery that crisis pregnancy centers were inflicting on low-income women tricked into bearing a child, California passed a law “intended to compel crisis pregnancy centers (CPCs) to offer factual information about all options available to pregnant women and to disclose if a facility is unlicensed. . . . NPCC asserts that 91% of unlicensed CPCs provided defective medical information such as a false link between abortion and breast cancer or suicide.”

What is hate? Do deceit-driven tactics related to abortion, like those the FRC and other groups engage in, amount to hate? Do other activities, such as lobbying Congress and mounting legal challenges to abortion or same-sex marriage, amount to hate?

Hate (verb): to feel intense or passionate dislike for someone, a concept, e.g., the idea of abortion, or something.
Hate (noun): an intense or passionate dislike or loathing for someone, a concept or something.

Clearly, lobbying Congress and mounting legal challenges are legal political activities. Can legal activities amount to hate? If it isn't hate, what can it more reasonably be called? Aggressive conservative or Christian activism? Immorality or unethical behavior?

It appears that much or most of the activity of the groups on the SPLC’s hate list amounts to legal activism infused with a rigid unwillingness to compromise. If one believes that, for politics in a liberal democracy, compromise is a core moral value and necessary for democracy to function properly (a belief advocated here), then a refusal to compromise can be seen as immoral.

Is immorality the same as hate? If the definitions of hate given above are accepted as perhaps incomplete but accurate enough, then it would logically seem that a refusal to compromise will often or usually include a component of hate. Is that reasoning sound or flawed? Is compromise the only or best form of resolution for disputes over contested concepts?

The WaPo is right to raise this issue. A deranged man with a gun used the SPLC hate list to find a target for murder. That would seem to be no different from President Trump continually referring to journalists as ‘the enemy of the people’, thereby inciting a few people to act on that rhetoric against journalists. Is that hate?

If nothing else, one can see from the foregoing why essentially contested concepts lead to intractable disputes and how the disputed concepts can foster actions that lead to misery or even social conflict and outright murder. Essentially contested concepts can be dangerous because of the heavy cognitive (moral) and social (identity and social context) loads they carry. From that point of view, it is easy to see why (i) disagreements over essentially contested concepts are not resolvable, and (ii) compromise must necessarily be a pillar of peaceful, non-tyrant, democratic society.

B&B orig: 11/15/18

Propaganda, Social Media & A Weakening Union



Managing editor Mark Gimein’s essay in the November 16 issue of The Week is interesting.

“How you whip up hatred and distrust has never been much of a secret. More than 50 years ago, Jacques Ellul, in his landmark book Propaganda, wrote, ‘Those who read the press of their group and listen to the radio of their group are constantly reinforced in their allegiance. They learn more and more that their group is right, and that its actions are justified; thus their beliefs are strengthened.’ Substitute ‘tweets’ and ‘memes’ and you have social media today, in which an algorithm feeds you the information you are likely to click on -- because you have clicked or retweeted or reposted something just like it. The techniques that once worked on TV and radio have been supercharged by microtargeting. This is not merely an echo chamber: It’s a pinball machine, into which manipulators cynically drop memes -- the Black Panthers support the Democrats! -- to bounce around and amplify.

The government of the US was constructed, as James Madison wrote, ‘to break and control the violence of faction’. Now faction is ascendant, and it is the union that is breaking. There are no more big tents. Centrist Republicans such as Bob Corker and Jeff Flake have quit politics; centrist red-state Democrats Claire McCaskill and Heidi Heitkamp didn’t survive Tuesday’s vote. In Congress, it becomes harder for elected representatives to do anything but vote in lockstep with their parties. When Donald Trump was elected, it was said that he had ‘broken’ the Republican Party. The opposite is true: The parties are stronger than ever. Except now party loyalty is enforced by your own friends and acquaintances, who will make sure you don’t step out of line on Twitter or Facebook. That’s something that autocrats[1] and demagogues of the past could only dream of. How else can the dark powers of social media be manipulated and misused? In the coming two years of divided government, we will most likely find out.”

When Gimein asserts that how to whip up hatred and distrust is common knowledge, he seems to miss the mark. America has witnessed the whipping up of hatred and distrust to an amazing extent since President Trump came to power.[2] The minds now driven by hate and distrust do not know that they have been manipulated and used. They think that happened to the opposition, not themselves. Manipulators certainly know how to do it. But if everyone knew the trick, it would be harder for that manipulated mindset change to happen on such a large scale in such a short time.

This is an example of what can happen to a society whose people are untrained in defense against the dark arts. The American people are, for the most part, defenseless against manipulation by dark free speech** operating in ways that social media makes more effective than ever before.

**Dark free speech: Lies, deceit, misinformation, unwarranted opacity and truth hiding, unwarranted emotional manipulation, mostly fomenting fear, anger, hate, distrust, and/or disgust, bogus (partisan) logic, unwarranted character assassination, etc.

Gimein’s reference to Bob Corker and Jeff Flake as ‘centrist Republicans’ reflects the power of rhetoric to cloak unreasonable extremism in a reasonable-sounding label like centrist in the context of the republican party. By the standards of 25-30 years ago, Corker and Flake would have been seen within the republican party as far right conservatives on most issues. There is nothing centrist about them now. Sure, on a few occasions they squeaked feeble protests at their colleagues over something or another, but it didn't amount to a hill of beans.[3] They both voted the republican way about 84% of the time. No neutral observer could plausibly argue that there was no extremism in many of those votes.

That someone today refers to Corker and Flake as ‘centrist Republicans’ shows how extreme the republican party has become and how well the right and/or trapped minds have obscured that fact. Gimein is deceived and wrong. A better label for folks like Corker and Flake is something along the lines of far right republican, with the rest of the party being extreme right. The concept of centrism has no place in the republican party at present. Decades of RINO hunts have ensured a thorough ideological and moral cleansing.

Finally, Gimein asks a question with an interesting tell in it: “How else can the dark powers of social media be manipulated and misused?”

Mr. Gimein apparently disapproves of dark free speech being deployed on social media to deceive and manipulate the public; otherwise he would not see it as manipulation or misuse. Presumably, he sees the same tactics in all other media the same way. For better or worse, there is not a thing anyone can do about it. It is all constitutionally protected free speech, no matter how dark and deadly it is. Therein lies democracy’s greatest weakness.

Footnotes:
1. Gimein made a mistake by referring to autocrats and demagogues in the same breath. As we all know, that pairing seems discordant with Aristotle’s taxonomy of political regimes. Gimein probably meant either autocrats and monarchs or, more likely, he meant oligarchs and demagogues.



2. Yes, partisan hate and distrust had been building for decades, especially since influencers like Lee Atwater and Newt Gingrich injected their poison into politics a few decades ago. Trump brought the emotion to a whole new, more toxic level. It is reasonable to think that Trump was probably helped significantly by years of Russian propaganda fomenting hate and distrust among the American people. That said, it is fair and balanced to give Trump most of the credit for us being where we are today. As long as he is in power, the buck stops with Trump whether he likes it or not.

3. For example, both Corker and Flake voted for the nuclear option for Supreme Court nominees, thereby killing the filibuster for those nominations. That was not centrist, not even close.

B&B orig: 11/17/18

The Anti-Bias Ideology: A Simplified Explanation



CONTEXT: A reasonable belief holds that existing political ideologies are more bad than good for various reasons related to cognitive biology, social behavior and social influence. That is what B&B argues. Ideologies tend to foster in-group thinking and behaviors, which makes it easy to distort reality, facts, truths and thinking into beliefs that are unreasonably detached from reality. That makes politics more irrational than it has to be. One idea proposes that people simply adopt a science mindset to bring more rationality to politics.

In his blog post at Neurologica entitled Against Ideology, skeptic Steven Novella discusses some thinking about problems with existing political ideologies. Novella comments on problems with ideology and the exhilarating experience of walking away from one: “The skeptical movement has always struggled with some unavoidable ironies. We are like a group for people who don’t like to join groups. We actively tell our audience not to trust us (don’t trust any single source – verify with logic and evidence). Our belief is that you really should not have beliefs, only tentative conclusions. Essentially, our ideology is anti-ideology.

This approach is both empowering and freeing. One of the most common observations I hear from those who, after consuming skeptical media for a time, abandon some prior belief system or ideology, is that they feel as if a huge weight has been lifted from their shoulders. They feel free from the oppressive burden of having to support one side or ideology, even against evidence and reason. Now they are free to think whatever they want, whatever is supported by the evidence. They don’t have to carry water for their ‘team.’

At the same time, this is one of the greatest challenges for skeptical thinking, because it seems to run upstream against a strong current of human nature. We are tribal, we pick a side and defend it, especially if it gets wrapped up in our identity or world-view.”

That, in a nutshell, is one of the biggest problems with standard ideologies, all of which are fairly called ‘pro-bias’ ideologies. Existing ideologies are powerful motivators to distort reality, facts, truths, and reason whenever any of those things contradict or undercut the chosen ideology. Distortion and the ensuing irrationality are probably the norm, not the exception.

----------------------------------------------------------------------------------------------------------------------
The Anti-Bias Ideology: A Simplified Explanation 
Some years ago, it made sense to reject ideology as a framework for doing and thinking about politics. The science mindset of pragmatic, evidence-driven trial, error and course correction seemed to be the best approach. Then, after some years of looking into cognitive biology and social behavior, it became clear that one cannot eliminate emotion and morals from the process. That led to a science- and morals-based 'anti-bias' political ideology that focuses on the key sources of irrationality, incivility and failure. Four core moral principles seem to be the most anti-biasing: (i) fidelity to trying to see facts and truths with less bias, (ii) fidelity to applying less biased conscious reason to those facts and truths, (iii) service to the public interest (defined as a transparent competition of ideas among competing interests) based on the facts, truths and reason, and (iv) willingness to reasonably compromise according to what political, economic and environmental circumstances point to.

Looking at politics throughout human history, most or all bad leaders (tyrants, oligarchs, kleptocrats, etc.) seem to share four key traits that mirror those four morals. They generally disregard, deny or hide facts and truths when it is politically convenient to do so, which is most of the time. Bad leaders also routinely apply biased (bogus) reasoning to facts, fake or not, typically to foment unwarranted emotional responses such as fear, anger, bigotry, racism, distrust and hate toward out-groups or ‘the enemy’. All of that irrationality is put in service to a corrupt, self-serving conception of the public interest, and it is reinforced by a corrupt, self-serving unwillingness to compromise.

If one accepts that those four bad traits are real and the norm, then the four core moral values of a pragmatic, evidence-based anti-bias political ideology would seem to make sense for anyone who wants to fight the rise of bad leaders and their ability to gain power and then do bad things to people and societies.

The question is, would this ‘anti-bias’ mindset or ideology work? Maybe. Maybe not. The experiment appears not to have been tried in modern times, with modern means for mass communication of dark free speech (lies, deceit, unwarranted opacity, unwarranted emotional manipulation, mostly fomenting unwarranted fear, intolerance, anger, and hate, etc.). Testing an anti-bias ideology for success or failure is a multi-generational social engineering experiment. It would be great to see it tried. Even if it failed, the failure might shed enough light on the human condition and politics to reveal another, more civilized, sustainable and efficient way to do politics.

Anti-bias is not just the scientific method applied to politics: The anti-bias ideology isn't just the adoption of a scientific-method mindset. It expressly includes moral values and treats them as such. In science, there tends to be less outright lying and grossly bogus reasoning. Those things tend to get called out, and careers then tend to crash and burn if a course correction isn't made. In science, errors happen, but they are typically mistakes, not lies. Flawed reasoning in science tends to be honest support of a hypothesis, not sloppy thinking in defense of an indefensible ideological belief. In these regards, the anti-bias ideology directly accounts for human nature. Science tends to downplay human nature, trusting that fact and logic will quench errors to a reasonable extent. That may be generally true for science, but it is clearly not true for politics. Science and politics are simply not the same thing, at least not while existing pro-bias ideologies dominate.

White-faced whistling ducks guarding the waterfall

B&B orig: 12/8/18

The Slippery Business of Bias and the Media

Author: Maistriani the Machiavellian

Snow leopard, an endangered species due to uncontrolled poaching

Our brother Germaine has asked me to "try" to put out a post on evaluating media reporting with a systematic process. Mind you, I'm wordy, in case no one has noticed, and maybe lean a bit toward the esoteric in my writing style and word choices. He promised to beat me with multiple implements of cooked pasta, delete all my posts and ban my account if I didn't bring it down a notch, so I'll try. References are at the end, for those interested.

BIAS

Seeing as we are talking about media and the language they use, I'll go with a definition of writer's bias:
Bias occurs when a writer displays a partiality for or prejudice against someone, something, or some idea. Sometimes biases are readily identifiable in direct statements. Other times a writer's choice of words, selection of facts or examples, or tone of voice reveals his or her biases.

That's not bad, but I've been looking into language bias pretty solidly for about the last year, and what becomes apparent is that bias is quite a bit more involved. We generally tend to look at bias through the socio-cultural prisms of offensive or vilifying statements, propositions or words that directly indicate an attack on a person or social grouping by race, sex, gender identification, religion, etc.

That's the right-in-your-face kind, but what if there is more? Much more, and it is subliminal: even though we don't get the conscious "bump - hey, that's not right", there is a non-conscious affective outcome that we often feel as a specific emotion but are very unlikely to be able to trace to a cause. Here's a short list of what I would term "occult language modifiers":

1. Factive verbs: presuppose the truth of their complement clause.

2. Implicative verbs: imply the truth or untruth of their complement, depending on the polarity of the main predicate. (Polarity, also called valence, means a positive or negative word, statement, proposition, tone or word choice.)

3. Assertive verbs: have complement clauses that assert a proposition. The truth of the proposition is not presupposed, but its level of certainty depends on the asserting verb.

4. Hedges: reduce one's commitment to the truth of a proposition, evading any bold predictions.

5. Strong subjective intensifiers: adjectives or adverbs that add (subjective) force to the meaning of a phrase or proposition.

6. Degree modifiers: contextual cues (often adverbs such as extremely or slightly) that modify the intensity or degree of an action, an adjective or another adverb.

Now, just using those few markers, we could go through virtually any news media report and find bias that we likely never noticed before, probably passed over, and weren't aware was affecting both our thinking and our mood, especially toward the reported subject or issue.
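
To make that concrete, here is a minimal sketch of a marker scanner in Python. The mini-lexicons are invented stand-ins for the annotated word lists used in bias research, not validated resources, and a real system would lemmatize words rather than match surface forms:

```python
import re

# Illustrative mini-lexicons -- invented stand-ins, not validated word lists
MARKERS = {
    "factive verbs": {"admit", "admitted", "realize", "reveal", "acknowledge"},
    "implicative verbs": {"manage", "managed", "fail", "failed", "neglect"},
    "assertive verbs": {"claim", "claims", "insist", "allege", "suggest"},
    "hedges": {"perhaps", "possibly", "apparently", "reportedly"},
    "strong subjective intensifiers": {"outrageous", "stunning", "disastrous"},
    "degree modifiers": {"extremely", "slightly", "utterly", "barely"},
}

def flag_markers(text: str) -> dict:
    """Return the marker words from each category found in a passage."""
    words = re.findall(r"[a-z']+", text.lower())
    return {category: [w for w in words if w in lexicon]
            for category, lexicon in MARKERS.items()}

sentence = ("The senator finally admitted to the extremely controversial "
            "plan, which critics claim is a stunning failure.")
for category, hits in flag_markers(sentence).items():
    if hits:
        print(f"{category}: {hits}")
```

Run on that one invented sentence, the scanner flags a factive verb (admitted), an assertive verb (claim), an intensifier (stunning) and a degree modifier (extremely) — none of which most readers would consciously register as slant.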

All media will be found to be biased. Even if that bias leans toward our particular political, social or cultural preferences, it's still bias. Because bias affects thinking processes, emotions and moods, letting it pass unnoticed isn't making us better thinkers and problem solvers.

So what do we do? Change the media? I'll wish you the best of luck with that but don't think you'll manage very well. What we need to do is build better tools for better thinking as the best defense against these problems.

The best part about Biopolitics and Bionews is that the channel is founded on a requirement of strong ethics. Strong ethics are the foundation for creating one of our best tools: the rubric.

RUBRIC

Rubric: A rubric sets out clearly defined criteria and standards for assessing different levels of performance. Rubrics have often been used in education for grading student work, and in recent years have been applied in evaluation to make transparent the process of synthesising evidence into an overall evaluative judgement.

So we start with our ethical decision making and create a rubric built from our ethical values.

The great thing about a rubric that is built from ethical decision making is that it is easily modifiable to accommodate a myriad of different applications.

So, when we are reading a media news story, we can pull up our rubric if we keep it on our PC or Mac or, as I prefer because having it separate works better for me, print it out. Then we can dissect and analyze any media coverage by comparing its statements, facts and commentary to our rubric and asking ourselves, "does this meet my rubric?" Obviously, we can use this across media platforms and outlets, and even for those engaged through social media.
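
As a sketch of what such a rubric might look like in code, here is one kept as plain data so the criteria are easy to modify. The criteria, weights and ratings are illustrative placeholders, not a validated instrument:

```python
# Minimal sketch of an ethics-based media rubric with weighted criteria.
# All criteria and weights are illustrative placeholders.
RUBRIC = [
    # (criterion, weight)
    ("Claims are sourced and the sources are checkable", 3),
    ("Facts are separated from commentary and opinion", 2),
    ("Opposing viewpoints are represented fairly", 2),
    ("Emotional or loaded language is minimal", 2),
    ("Headlines match the substance of the story", 1),
]

def score_article(ratings: dict) -> float:
    """Return a 0-100 score from per-criterion ratings of 0 (fails) to 4 (exemplary)."""
    total_weight = sum(w for _, w in RUBRIC)
    earned = sum(ratings.get(criterion, 0) * w for criterion, w in RUBRIC)
    return 100.0 * earned / (4 * total_weight)

# Example ratings for a hypothetical story
ratings = {
    "Claims are sourced and the sources are checkable": 2,
    "Facts are separated from commentary and opinion": 1,
    "Opposing viewpoints are represented fairly": 3,
    "Emotional or loaded language is minimal": 1,
    "Headlines match the substance of the story": 4,
}
print(f"Rubric score: {score_article(ratings):.0f}/100")
```

Keeping the rubric as plain data is what makes it "easily modifiable": swap in different criteria or weights and the scoring function is unchanged.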

METRIC

For some, an extra step may be useful, although it isn't absolutely necessary. We can also apply a metric to gauge how well any media coverage has done with regard to facts, accuracy and ethical decisions. The extra component should check for and assess not only the presence of bias, but the actual level of bias.

There is an awesome application called VADER, which stands for Valence Aware Dictionary and sEntiment Reasoner. Sentiment analysis is becoming more and more important across domains; corporations are even using it to modify their application processes to achieve better outcomes on hiring targets.

Sentiment analysis measures not just the positivity or negativity of text but also biased wording, even in text containing slang, emojis, similes, metaphors, etc. It is a whole-language metric with a proven track record.[*see below] If it's not your style or too much technology is involved, the references contain sufficient information on the standard metrics used in bias and sentiment analysis.
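
For anyone who wants to try it, VADER ships as the vaderSentiment Python package. A minimal sketch (the example headlines are invented):

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Two invented headlines describing the same event in different language
headlines = [
    "Officials announce updated guidance on border crossings.",
    "Officials finally admit their disastrous failure on the border!!!",
]

for text in headlines:
    # polarity_scores() returns neg/neu/pos proportions plus a normalized
    # 'compound' score running from -1 (most negative) to +1 (most positive)
    scores = analyzer.polarity_scores(text)
    print(f"{scores['compound']:+.3f}  {text}")
```

The neutral wording scores near zero while the loaded wording scores strongly negative, which is exactly the "level of bias" signal the metric step is after.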

So this is my toolbox for Defense Against the Dark Arts of media bias, dark free speech and my own failures to catch bias in my own cognitive processing. If Germaine approves this post, I hope to see comments and will help with inquiries in any way possible.

REFERENCES
https://www.betterevaluation.org/en/evaluation-options/rubrics
https://blogs.ei.columbia.edu/2013/12/02/sustainability-ethics-and-metrics/
http://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-439
https://www.thinkmind.org/download.php?articleid=huso_2015_2_30_70077
https://research.ku.edu/sites/research.ku.edu/files/docs/EESE_EthicalDecisionmakingFramework.pdf
https://www.victoria.ac.nz/vbs/teaching/aol/rubrics-and-score-sheets/LO-4a-Rubric-for-Ethical-Perspectives.pdf
https://medium.com/analytics-vidhya/simplifying-social-media-sentiment-analysis-using-vader-in-python-f9e6ec6fc52f
[*downside] The only downside is that VADER is a Python package, so it needs a working Python environment; it runs on Windows and Mac as well as Linux, though I run it on Linux. For anyone interested, feel free to ask, and even if you are a long-time Windows or Mac user, there is a Debian distribution called DeepIn that I am confident most would find very comfortable and easy to use. It is also the most beautiful OS I have ever seen.

B&B orig: 1/18/19

Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics

Author: PD

The following article appeared in the Jan/Feb issue of Foreign Affairs. It is quite long, even in the abridged form below. I included most of the article because it's a fairly detailed discussion of a very important development, and I thought some would want to read it in its entirety. As far as I know, the article is not open to non-subscribers. In case I'm wrong, here's the link. If the link works, you can also listen to the article (there's an audio option). --PD


Stills of a deepfake video of Barack Obama created by researchers in 2017

________________________________________________________________________________

A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event. At a time when partisans can barely agree on facts, such persuasiveness might seem as if it could bring a welcome clarity. Audio and video recordings allow people to become firsthand witnesses of an event, sparing them the need to decide whether to trust someone else’s account of it. And thanks to smartphones, which make it easy to capture audio and video content, and social media platforms, which allow that content to be shared and consumed, people today can rely on their own eyes and ears to an unprecedented degree.

Therein lies a great danger. Imagine a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran. Or an audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq. Or a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement. Now imagine that these recordings could be faked using tools available to almost anyone with a laptop and access to the Internet—and that the resulting fakes are so convincing that they are impossible to distinguish from the real thing.

Advances in digital technology could soon make this nightmare a reality. Thanks to the rise of “deepfakes”—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did. Worse, the means to create deepfakes are likely to proliferate quickly, producing an ever-widening circle of actors capable of deploying them for political purposes. Disinformation is an ancient art, of course, and one with a renewed relevance today. But as deepfake technology develops and spreads, the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields.

DAWN OF THE DEEPFAKES

Deepfakes are the product of recent advances in a form of artificial intelligence known as “deep learning,” in which sets of algorithms called “neural networks” learn to infer rules and replicate patterns by sifting through large data sets. (Google, for instance, has used this technique to develop powerful image-classification algorithms for its search engine.) Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks,” or GANs. In a GAN, one algorithm, the “generator,” creates content modeled on source data (for instance, making artificial images of cats from a database of real cat pictures), while a second algorithm, the “discriminator,” tries to spot the artificial content (pick out the fake cat images). Since each algorithm is constantly training against the other, such pairings can lead to rapid improvement, allowing GANs to produce highly realistic yet fake audio and video content.
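
[An aside, not part of the Foreign Affairs article: the generator-discriminator game described above can be made concrete with a toy sketch. In this minimal, hypothetical PyTorch example the "real data" is just a 1-D Gaussian rather than cat pictures; every layer size and parameter is invented for illustration.]

```python
# Toy GAN: a generator learns to mimic a 1-D Gaussian "real data" source
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0    # "real" data: N(4.0, 1.5)
    fake = generator(torch.randn(64, 8))      # generator's forgeries

    # Discriminator step: label real 1 and fake 0, get better at telling them apart
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: get better at fooling the discriminator into outputting 1
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, 8)).detach()
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f} (target: 4.00, 1.50)")
```

As the two networks train against each other, the generated samples should drift toward the target distribution; scaling the same adversarial loop up to images, audio and video is what produces deepfakes.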

This technology has the potential to proliferate widely. Commercial and even free deepfake services have already appeared in the open market, and versions with alarmingly few safeguards are likely to emerge on the black market. The spread of these services will lower the barriers to entry, meaning that soon, the only practical constraint on one’s ability to produce a deepfake will be access to training materials—that is, audio and video of the person to be modeled—to feed the GAN. The capacity to create professional-grade forgeries will come within reach of nearly anyone with sufficient interest and the knowledge of where to go for help.

Deepfakes have a number of worthy applications. Modified audio or video of a historical figure, for example, could be created for the purpose of educating children. One company even claims that it can use the technology to restore speech to individuals who have lost their voice to disease. But deepfakes can and will be used for darker purposes, as well. Users have already employed deepfake technology to insert people’s faces into pornography without their consent or knowledge, and the growing ease of making fake audio and video content will create ample opportunities for blackmail, intimidation, and sabotage. The most frightening applications of deepfake technology, however, may well be in the realms of politics and international affairs. There, deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections. Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction. For much of the twentieth century, magazines, newspapers, and television broadcasters managed the flow of information to the public.

Journalists established rigorous professional standards to control the quality of news, and the relatively small number of mass media outlets meant that only a limited number of individuals and organizations could distribute information widely. Over the last decade, however, more and more people have begun to get their information from social media platforms, such as Facebook and Twitter, which depend on a vast array of users to generate relatively unfiltered content. Users tend to curate their experiences so that they mostly encounter perspectives they already agree with (a tendency heightened by the platforms’ algorithms), turning their social media feeds into echo chambers. These platforms are also susceptible to so-called information cascades, whereby people pass along information shared by others without bothering to check if it is true, making it appear more credible in the process. The end result is that falsehoods can spread faster than ever before. These dynamics will make social media fertile ground for circulating deepfakes, with potentially explosive implications for politics.

Russia’s attempt to influence the 2016 U.S. presidential election—spreading divisive and politically inflammatory messages on Facebook and Twitter—already demonstrated how easily disinformation can be injected into the social media bloodstream. The deepfakes of tomorrow will be more vivid and realistic and thus more shareable than the fake news of 2016. And because people are especially prone to sharing negative and novel information, the more salacious the deepfakes, the better.

DEMOCRATIZING FRAUD

The use of fraud, forgery, and other forms of deception to influence politics is nothing new, of course. When the USS Maine exploded in Havana Harbor in 1898, American tabloids used misleading accounts of the incident to incite the public toward war with Spain. The anti-Semitic tract Protocols of the Elders of Zion, which described a fictional Jewish conspiracy, circulated widely during the first half of the twentieth century. More recently, technologies such as Photoshop have made doctoring images as easy as forging text. What makes deepfakes unprecedented is their combination of quality, applicability to persuasive formats such as audio and video, and resistance to detection. And as deepfake technology spreads, an ever-increasing number of actors will be able to convincingly manipulate audio and video content in a way that once was restricted to Hollywood studios or the most well-funded intelligence agencies.

Deepfakes will be particularly useful to nonstate actors, such as insurgent groups and terrorist organizations, which have historically lacked the resources to make and disseminate fraudulent yet credible audio or video content. These groups will be able to depict their adversaries—including government officials—spouting inflammatory words or engaging in provocative actions, with the specific content carefully chosen to maximize the galvanizing impact on their target audiences. An affiliate of the Islamic State (or ISIS), for instance, could create a video depicting a U.S. soldier shooting civilians or discussing a plan to bomb a mosque, thereby aiding the terrorist group’s recruitment. Such videos will be especially difficult to debunk in cases where the target audience already distrusts the person shown in the deepfake. States can and no doubt will make parallel use of deepfakes to undermine their nonstate opponents.

Deepfakes will also exacerbate the disinformation wars that increasingly disrupt domestic politics in the United States and elsewhere.

In 2016, Russia’s state-sponsored disinformation operations were remarkably successful in deepening existing social cleavages in the United States. To cite just one example, fake Russian accounts on social media claiming to be affiliated with the Black Lives Matter movement shared inflammatory content purposely designed to stoke racial tensions. Next time, instead of tweets and Facebook posts, such disinformation could come in the form of a fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.

Perhaps the most acute threat associated with deepfakes is the possibility that a well-timed forgery could tip an election. In May 2017, Moscow attempted something along these lines. On the eve of the French election, Russian hackers tried to undermine the presidential campaign of Emmanuel Macron by releasing a cache of stolen documents, many of them doctored. That effort failed for a number of reasons, including the relatively boring nature of the documents and the effects of a French media law that prohibits election coverage in the 44 hours immediately before a vote. But in most countries, most of the time, there is no media blackout, and the nature of deepfakes means that damaging content can be guaranteed to be salacious or worse. A convincing video in which Macron appeared to admit to corruption, released on social media only 24 hours before the election, could have spread like wildfire and proved impossible to debunk in time.

Deepfakes may also erode democracy in other, less direct ways. The problem is not just that deepfakes can be used to stoke social and ideological divisions. They can create a “liar’s dividend”: as people become more aware of the existence of deepfakes, public figures caught in genuine recordings of misbehavior will find it easier to cast doubt on the evidence against them. (If deepfakes were prevalent during the 2016 U.S. presidential election, imagine how much easier it would have been for Donald Trump to have disputed the authenticity of the infamous audiotape in which he brags about groping women.) More broadly, as the public becomes sensitized to the threat of deepfakes, it may become less inclined to trust news in general. And journalists, for their part, may become more wary about relying on, let alone publishing, audio or video of fast-breaking events for fear that the evidence will turn out to have been faked.

DEEP FIX

There is no silver bullet for countering deepfakes. There are several legal and technological approaches—some already existing, others likely to emerge—that can help mitigate the threat. But none will overcome the problem altogether. Instead of full solutions, the rise of deepfakes calls for resilience.

Three technological approaches deserve special attention. The first relates to forensic technology, or the detection of forgeries through technical means. Just as researchers are putting a great deal of time and effort into creating credible fakes, so, too, are they developing methods of enhanced detection. In June 2018, computer scientists at Dartmouth and the University at Albany, SUNY, announced that they had created a program that detects deepfakes by looking for abnormal patterns of eyelid movement when the subject of a video blinks. In the deepfakes arms race, however, such advances serve only to inform the next wave of innovation. In the future, GANs will be fed training videos that include examples of normal blinking. And even if extremely capable detection algorithms emerge, the speed with which deepfakes can circulate on social media will make debunking them an uphill battle. By the time the forensic alarm bell rings, the damage may already be done.
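
[Another aside, not from the article, which gives no technical detail on the Dartmouth/Albany detector: one common ingredient in blink detection generally is the eye aspect ratio (EAR) computed from six facial landmarks around each eye (Soukupová and Čech, 2016). A hypothetical sketch, with a fabricated landmark trace; real forensic systems are far more sophisticated:]

```python
# EAR drops sharply when the eyelid closes, so a long video whose EAR
# series never dips is a crude red flag for a synthesized face.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical eyelid distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical eyelid distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2) -> int:
    """Count falling-edge crossings below the 'eye closed' threshold."""
    below = np.asarray(ear_series) < closed_thresh
    return int(np.sum(below[1:] & ~below[:-1]))

# Fabricated per-frame EAR trace; a genuine clip should contain dips (blinks)
ear_trace = [0.31, 0.30, 0.12, 0.09, 0.28, 0.31, 0.30, 0.29, 0.11, 0.30]
blinks = count_blinks(ear_trace)
if blinks == 0:
    print("No blinks detected: flag for closer forensic review")
else:
    print(f"{blinks} blink(s) detected")
```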

A second technological remedy involves authenticating content before it ever spreads—an approach sometimes referred to as a “digital provenance” solution. Companies such as Truepic are developing ways to digitally watermark audio, photo, and video content at the moment of its creation, using metadata that can be logged immutably on a distributed ledger, or blockchain. In other words, one could effectively stamp content with a record of authenticity that could be used later as a reference to compare to suspected fakes....
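
[A final aside, not from the article: the heart of any digital provenance scheme is hashing content at capture and logging the hash somewhere tamper-evident. A minimal sketch, with a local hash chain standing in for a real blockchain or notary service and all names invented:]

```python
# Register a content hash at capture time; later, any exact copy verifies
# and any altered copy fails. Each ledger entry chains to the previous one,
# so tampering with the log itself is also detectable.
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

ledger = []  # stand-in for a distributed, append-only ledger

def register(content: bytes, meta: dict) -> dict:
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {"content_hash": sha256(content), "meta": meta,
             "timestamp": time.time(), "prev": prev}
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
    ledger.append(entry)
    return entry

def verify(content: bytes) -> bool:
    """Does this exact content match any registered original?"""
    return any(e["content_hash"] == sha256(content) for e in ledger)

original = b"...raw video bytes captured by the camera app..."
register(original, {"device": "phone-123", "location": "example"})
print(verify(original))                 # True: matches the registered original
print(verify(original + b" tampered"))  # False: any alteration breaks the hash
```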

If these technological fixes have limited upsides, what about legal remedies? Depending on the circumstances, making or sharing a deepfake could constitute defamation, fraud, or misappropriation of a person’s likeness, among other civil and criminal violations. In theory, one could close any remaining gaps by criminalizing (or attaching civil liability to) specific acts—for instance, creating a deepfake of a real person with the intent to deceive a viewer or listener and with the expectation that this deception would cause some specific kind of harm. But it could be hard to make these claims or charges stick in practice. To begin with, it will likely prove very difficult to attribute the creation of a deepfake to a particular person or group. And even if perpetrators are identified, they may be beyond a court’s reach, as in the case of foreign individuals or governments....

In the meantime, democratic societies will have to learn resilience. On the one hand, this will mean accepting that audio and video content cannot be taken at face value; on the other, it will mean fighting the descent into a post-truth world, in which citizens retreat to their private information bubbles and regard as fact only that which flatters their own beliefs. In short, democracies will have to accept an uncomfortable truth: in order to survive the threat of deepfakes, they are going to have to learn how to live with lies.



B&B orig: 1/19/19

Fact-checking Technology Inches Forward

“But it cannot be the duty, because it is not the right, of the state to protect the public against false doctrine. The very purpose of the First Amendment is to foreclose public authority from assuming a guardianship of the public mind through regulating the press, speech, and religion. In this field, every person must be his own watchman for truth, because the forefathers did not trust any government to separate the true from the false for us.” U.S. Supreme Court in Thomas v. Collins, 323 U.S. 516, 545 (1945)

Researchers at Duke University are developing technology for near real-time TV political fact checking. Phys.org writes:
A Duke University team expects to have a product available for election year that will allow television networks to offer real-time fact checks onscreen when a politician makes a questionable claim during a speech or debate.

The mystery is whether any network will choose to use it.

The response to President Donald Trump's Jan. 8 speech on border security illustrated how fact-checking is likely to be an issue over the next two years. Networks briefly considered not airing Trump live and several analysts contested some of his statements afterward, but nobody questioned him while he was speaking.

Duke already offers an app, developed by professor and Politifact founder Bill Adair, that directs users to online fact checks during political events. A similar product has been tested for television, but is still not complete.

The TV product would call on a database of research from Politifact, Factcheck.org and The Washington Post to point out false or misleading statements onscreen. For instance, Trump's statement that 90 percent of the heroin that kills 300 Americans each week comes through the southern border would likely trigger an onscreen explanation that much of the drugs were smuggled through legal points of entry and wouldn't be affected by a wall.

The Duke Tech & Check Cooperative conducted a focus group test in October, showing viewers portions of State of the Union speeches by Trump and predecessor Barack Obama with fact checks inserted. It was a big hit, Adair said.

"People really want onscreen fact checks," he said. "There is a strong market for this and I think the TV networks will realize there's a brand advantage to it."

If that's the case, the networks aren't letting on. None of the broadcast or cable news divisions would discuss Duke's product when contacted by The Associated Press, or their own philosophies on fact checking.

Network executives are likely to tread very carefully, both because of technical concerns about how it would work, the risk of getting something wrong or the suspicion that some viewers might consider the messages a political attack.

"It's an incredibly difficult challenge," said Mark Lukasiewicz, longtime NBC News executive who recently became dean of Hofstra University's communications school.

This shows the complexity of trying to implement defenses against dark free speech (lies, deceit, deepfakes, unwarranted opacity, unwarranted emotional manipulation, etc.) in America. With a few exceptions, such as defamation, false advertising and child porn, American law gives lies and deceit as much protection as honest speech.

America needs to somehow harden its defenses against dark free speech without enabling authoritarians and liars to use those defenses as a weapon against the opposition or the public interest. It is going to be an extremely difficult fight, assuming it is possible to make significant headway. Maybe it is time for professional broadcast news outlets to stop real-time broadcasting of politicians' speeches and rhetoric, because after-the-fact fact-checking is far less effective than real-time fact-checking. And maybe it is time to begin a long fight to re-establish the old, now-abandoned fairness doctrine as a partial antidote to dark free speech.
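
The core engineering step in the Duke product, matching a spoken claim against a database of prior fact checks, can be sketched crudely. The following is a hypothetical toy version using simple word overlap; the database entries are invented, and real pipelines rely on speech-to-text plus trained claim-matching models:

```python
# Toy claim matcher: compare an incoming transcript sentence against stored
# fact checks using Jaccard similarity over content words.
STOPWORDS = {"the", "a", "of", "that", "and", "is", "are", "to", "in", "on"}

def tokens(text: str) -> set:
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented example database of (claim, onscreen explanation) pairs
FACT_CHECKS = [
    ("90 percent of heroin comes through the southern border",
     "Misleading: most is smuggled through legal ports of entry."),
    ("crime by undocumented immigrants is at record highs",
     "False: studies show lower crime rates than among native-born citizens."),
]

def match_claim(sentence: str, threshold: float = 0.3):
    """Return the best-matching fact check, or None if nothing is close."""
    sent = tokens(sentence)
    best = max(FACT_CHECKS, key=lambda fc: jaccard(sent, tokens(fc[0])))
    return best if jaccard(sent, tokens(best[0])) >= threshold else None

live_line = ("Ninety percent of the heroin killing Americans comes "
             "through our southern border.")
hit = match_claim(live_line)
print(hit[1] if hit else "No stored fact check matched.")
```

Even this crude sketch hints at why networks hesitate: the threshold trades false alarms against missed claims, and an on-air mistake in either direction carries real cost.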



B&B orig: 1/20/19