Pragmatic politics focused on the public interest for those uncomfortable with America's two-party system and its way of doing politics. Considering the interface of politics with psychology, cognitive science, social behavior, morality and history.
Etiquette
DP Etiquette
First rule: Don't be a jackass.
Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide sources for the facts and truths you rely on if you are asked for them. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion; insults make people angry and defensive. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.
Snow leopard, a species endangered by uncontrolled poaching
Our brother Germaine has asked me to "try" to put out a post on evaluating media reporting with a systematic process. Mind you, I'm wordy, in case no one has noticed, and I maybe lean a bit toward the esoteric in my writing style and word choices. He promised to beat me with multiple implements of cooked pasta, delete all my posts and ban my account if I didn't bring it down a notch, so I'll try. References are at the end, for those interested.
BIAS
Since we are talking about media and the language it uses, I'll start with a definition of writer's bias:
Bias occurs when a writer displays a partiality for or prejudice against someone, something, or some idea. Sometimes biases are readily identifiable in direct statements. Other times a writer's choice of words, selection of facts or examples, or tone of voice reveals his or her biases.
That's not bad, but I've been looking closely into language bias for about the last year, and what becomes apparent is that it is quite a bit more involved. We generally tend to look at bias through socio-cultural prisms: offensive or vilifying statements, propositions or words that directly indicate an attack on a person or social group by race, sex, gender identity, religion, etc.
That's the right-in-your-face kind, but what if there is more? Much more, and it is subliminal: even though we don't get the conscious bump of "hey, that's not right," there is a non-conscious affective outcome that we often feel as a specific emotion toward the subject, but are very unlikely to be able to trace to a cause. Here's a short list of what I would term "occult language modifiers":
1. Factive verbs: verbs that presuppose the truth of their complement clause.
2. Implicative verbs: verbs that imply the truth or untruth of their complement, depending on the polarity of the main predicate. (Polarity here is also called valence: the positive or negative character of a word, statement, proposition, tone or word choice.)
3. Assertive verbs: verbs whose complement clauses assert a proposition. The truth of the proposition is not presupposed, but its level of certainty depends on the asserting verb.
4. Hedges: words used to reduce one's commitment to the truth of a proposition, evading any bold predictions.
5. Strong subjective intensifiers: adjectives or adverbs that add (subjective) force to the meaning of a phrase or proposition.
6. Degree modifiers: contextual cues (often adverbs such as "extremely" or "slightly") that modify the intensity or degree of an action, an adjective or another adverb.
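A crude way to see how mechanical these markers are is to scan text against small word lists. Here is a minimal sketch of my own; the word lists are short illustrative samples I chose, not a real linguistic lexicon:

```python
# Minimal sketch: flag "occult language modifiers" in a passage of text.
# The word lists below are short, hand-picked samples for illustration only.
MARKERS = {
    "factive":     {"realize", "regret", "reveal", "admit"},
    "implicative": {"manage", "fail", "failed", "bother", "neglect"},
    "assertive":   {"claim", "insist", "suggest", "allege"},
    "hedge":       {"perhaps", "possibly", "apparently", "reportedly"},
    "intensifier": {"outrageous", "shocking", "remarkable"},
    "degree":      {"extremely", "slightly", "barely", "utterly"},
}

def flag_bias_markers(text):
    """Return {marker_type: [words found]} for one passage."""
    words = [w.strip(".,;:!?\"'").lower() for w in text.split()]
    hits = {}
    for kind, lexicon in MARKERS.items():
        found = [w for w in words if w in lexicon]
        if found:
            hits[kind] = found
    return hits

hits = flag_bias_markers(
    "The senator failed to reveal an extremely shocking conflict, "
    "critics claim.")
print(hits)
```

Running it on that one loaded sentence flags a factive ("reveal"), an implicative ("failed"), an assertive ("claim"), an intensifier ("shocking") and a degree modifier ("extremely") all at once, which is the point: a sentence can carry that much slant without a single overt slur.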
Using just those few markers, we could go through virtually any news media reporting and find bias we likely never noticed before, probably passed right over, and weren't aware was affecting both our thinking and our mood, especially toward the reported subject or issue.
All media will be found to be biased. Even if that bias leans toward our particular political, social or cultural preferences, it's still bias. Because bias affects thinking processes, emotions and moods, letting it go unnoticed isn't making us better thinkers and problem solvers.
So what do we do? Change the media? I'll wish you the best of luck with that but don't think you'll manage very well. What we need to do is build better tools for better thinking as the best defense against these problems.
The best part about Biopolitics and Bionews is that the channel is founded on a requirement of strong ethics. Strong ethics is the foundation for creating one of our best tools: the rubric.
RUBRIC
Rubric: A rubric sets out clearly defined criteria and standards for assessing different levels of performance. Rubrics have often been used in education for grading student work, and in recent years have been applied in evaluation to make transparent the process of synthesising evidence into an overall evaluative judgement.
The great thing about a rubric that is built from ethical decision making is that it is easily modifiable to accommodate a myriad of different applications.
So, when we are reading a media news story, we can pull up our rubric. Keep it on your PC or Mac, or, as I prefer because it works better for me, print it out. Then we can dissect and analyze any media coverage by comparing its statements, facts and commentary to our rubric and asking ourselves, "does this meet my rubric?" Obviously, we can use this across media platforms, media outlets and even for those who engage through social media.
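If you do keep the rubric on a computer, it can even be a tiny script. The criteria and weights below are illustrative examples of my own invention, not a standard rubric:

```python
# Illustrative sketch of a media-evaluation rubric as a weighted checklist.
# The criteria and weights are my own examples, not an established standard.
RUBRIC = [
    ("Sources are named and checkable",        3),
    ("Facts are separated from commentary",    3),
    ("Loaded or biased wording is absent",     2),
    ("Opposing viewpoints are acknowledged",   2),
    ("Headline matches the body of the story", 1),
]

def score_article(answers):
    """answers: list of True/False, one per rubric criterion, in order.
    Returns (points earned, points possible)."""
    earned = sum(w for (_, w), ok in zip(RUBRIC, answers) if ok)
    possible = sum(w for _, w in RUBRIC)
    return earned, possible

# Example: a story that fails only the "loaded wording" criterion.
earned, possible = score_article([True, True, False, True, True])
print(f"{earned}/{possible}")  # prints 9/11
```

The design point is that the rubric is just data: to adapt it for a different application, you edit the list of criteria, not the scoring logic.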
METRIC
For some, an extra step may be useful, although it isn't absolutely necessary: we can also apply a metric to gauge how well any media coverage has done with regard to facts, accuracy and ethical decisions. The added component should check not only for the presence of bias, but for its actual level.
There is an awesome application called VADER, which stands for Valence Aware Dictionary and sEntiment Reasoner. Sentiment analysis is becoming more and more important across domains; corporations are even using it to modify their application processes to achieve better outcomes on hiring targets.
Sentiment analysis measures not just the positivity or negativity of text but also biased wording, even in text containing slang, emojis, similes, metaphors, etc. It is a whole-language metric tool with a proven track record. [*see below] If that's not your style or too much technology is involved, the references contain sufficient information on standard metrics used in bias and sentiment analysis.
So this is my toolbox for Defense against the Dark Arts of media bias, dark free speech and failures at missing bias in my own cognitive processing. If Germaine approves this post, hope to see comments and will help with inquiries in any way possible.
REFERENCES
https://www.betterevaluation.org/en/evaluation-options/rubrics
https://blogs.ei.columbia.edu/2013/12/02/sustainability-ethics-and-metrics/
http://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-439
https://www.thinkmind.org/download.php?articleid=huso_2015_2_30_70077
https://research.ku.edu/sites/research.ku.edu/files/docs/EESE_EthicalDecisionmakingFramework.pdf
https://www.victoria.ac.nz/vbs/teaching/aol/rubrics-and-score-sheets/LO-4a-Rubric-for-Ethical-Perspectives.pdf
https://medium.com/analytics-vidhya/simplifying-social-media-sentiment-analysis-using-vader-in-python-f9e6ec6fc52f [*downside] The only downside is that VADER is a Python tool, so you need a working Python environment (Python runs on Windows and Mac as well as Linux). For anyone interested, feel free to ask. And even if you are a longtime Windows or Mac user, there is a Debian-based distribution called Deepin that I am confident most would find very comfortable and easy to use. It is also the most beautiful OS I have ever seen.
Author: PD
The following article appeared in the Jan/Feb issue of Foreign Affairs. It is quite long, even in the abridged form below. I included most of the article because it's a fairly detailed discussion of a very important development, and I thought some would want to read it in its entirety. As far as I know, the article is not open to non-subscribers. In case I'm wrong, here's the link. If the link works, you can also listen to the article (there's an audio option). --PD
Stills of a deepfake video of Barack Obama created by researchers in 2017
A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event. At a time when partisans can barely agree on facts, such persuasiveness might seem as if it could bring a welcome clarity. Audio and video recordings allow people to become firsthand witnesses of an event, sparing them the need to decide whether to trust someone else’s account of it. And thanks to smartphones, which make it easy to capture audio and video content, and social media platforms, which allow that content to be shared and consumed, people today can rely on their own eyes and ears to an unprecedented degree.
Therein lies a great danger. Imagine a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran. Or an audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq. Or a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement. Now imagine that these recordings could be faked using tools available to almost anyone with a laptop and access to the Internet—and that the resulting fakes are so convincing that they are impossible to distinguish from the real thing.
Advances in digital technology could soon make this nightmare a reality. Thanks to the rise of “deepfakes”—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did. Worse, the means to create deepfakes are likely to proliferate quickly, producing an ever-widening circle of actors capable of deploying them for political purposes. Disinformation is an ancient art, of course, and one with a renewed relevance today. But as deepfake technology develops and spreads, the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields.
DAWN OF THE DEEPFAKES
Deepfakes are the product of recent advances in a form of artificial intelligence known as “deep learning,” in which sets of algorithms called “neural networks” learn to infer rules and replicate patterns by sifting through large data sets. (Google, for instance, has used this technique to develop powerful image-classification algorithms for its search engine.) Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks,” or GANs. In a GAN, one algorithm, the “generator,” creates content modeled on source data (for instance, making artificial images of cats from a database of real cat pictures), while a second algorithm, the “discriminator,” tries to spot the artificial content (pick out the fake cat images). Since each algorithm is constantly training against the other, such pairings can lead to rapid improvement, allowing GANs to produce highly realistic yet fake audio and video content.
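For readers who want to see the adversarial loop concretely, here is a toy one-dimensional sketch of my own: a one-parameter generator learns to shift random noise toward "real" data centred at 3.0, while a logistic discriminator tries to tell real from fake. It illustrates only the training dynamic the article describes, not the image-scale neural networks real deepfakes use:

```python
import numpy as np

# Toy 1-D GAN: generator G(z) = z + b tries to make fake samples look like
# real data centred at 3.0; discriminator D(x) = sigmoid(w*x + c) tries to
# tell them apart. My own minimal illustration, not production code.
rng = np.random.default_rng(0)
REAL_MEAN, BATCH, LR, STEPS = 3.0, 64, 0.1, 2000
b = 0.0          # generator parameter (the learned shift)
w, c = 0.0, 0.0  # discriminator parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(STEPS):
    real = REAL_MEAN + 0.5 * rng.standard_normal(BATCH)
    fake = rng.standard_normal(BATCH) + b

    # Discriminator step: one gradient step of logistic regression
    # with labels real=1, fake=0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= LR * grad_w
    c -= LR * grad_c

    # Generator step: push fakes toward where D says "real"
    # (non-saturating loss: maximize log D(G(z))).
    fake = rng.standard_normal(BATCH) + b
    d_fake = sigmoid(w * fake + c)
    b += LR * np.mean((1 - d_fake) * w)

print(f"learned shift b = {b:.2f} (real data centred at {REAL_MEAN})")
```

As the two players train against each other, the generator's shift drifts toward the real data's centre, which is the "constant training against the other" improvement loop the article refers to.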
This technology has the potential to proliferate widely. Commercial and even free deepfake services have already appeared in the open market, and versions with alarmingly few safeguards are likely to emerge on the black market. The spread of these services will lower the barriers to entry, meaning that soon, the only practical constraint on one’s ability to produce a deepfake will be access to training materials—that is, audio and video of the person to be modeled—to feed the GAN. The capacity to create professional-grade forgeries will come within reach of nearly anyone with sufficient interest and the knowledge of where to go for help.

Deepfakes have a number of worthy applications. Modified audio or video of a historical figure, for example, could be created for the purpose of educating children. One company even claims that it can use the technology to restore speech to individuals who have lost their voice to disease. But deepfakes can and will be used for darker purposes, as well. Users have already employed deepfake technology to insert people’s faces into pornography without their consent or knowledge, and the growing ease of making fake audio and video content will create ample opportunities for blackmail, intimidation, and sabotage. The most frightening applications of deepfake technology, however, may well be in the realms of politics and international affairs. There, deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections.

Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction. For much of the twentieth century, magazines, newspapers, and television broadcasters managed the flow of information to the public.
Journalists established rigorous professional standards to control the quality of news, and the relatively small number of mass media outlets meant that only a limited number of individuals and organizations could distribute information widely. Over the last decade, however, more and more people have begun to get their information from social media platforms, such as Facebook and Twitter, which depend on a vast array of users to generate relatively unfiltered content. Users tend to curate their experiences so that they mostly encounter perspectives they already agree with (a tendency heightened by the platforms’ algorithms), turning their social media feeds into echo chambers. These platforms are also susceptible to so-called information cascades, whereby people pass along information shared by others without bothering to check if it is true, making it appear more credible in the process. The end result is that falsehoods can spread faster than ever before. These dynamics will make social media fertile ground for circulating deepfakes, with potentially explosive implications for politics.
Russia’s attempt to influence the 2016 U.S. presidential election—spreading divisive and politically inflammatory messages on Facebook and Twitter—already demonstrated how easily disinformation can be injected into the social media bloodstream. The deepfakes of tomorrow will be more vivid and realistic and thus more shareable than the fake news of 2016. And because people are especially prone to sharing negative and novel information, the more salacious the deepfakes, the better.
DEMOCRATIZING FRAUD
The use of fraud, forgery, and other forms of deception to influence politics is nothing new, of course. When the USS Maine exploded in Havana Harbor in 1898, American tabloids used misleading accounts of the incident to incite the public toward war with Spain. The anti-Semitic tract Protocols of the Elders of Zion, which described a fictional Jewish conspiracy, circulated widely during the first half of the twentieth century. More recently, technologies such as Photoshop have made doctoring images as easy as forging text. What makes deepfakes unprecedented is their combination of quality, applicability to persuasive formats such as audio and video, and resistance to detection. And as deepfake technology spreads, an ever-increasing number of actors will be able to convincingly manipulate audio and video content in a way that once was restricted to Hollywood studios or the most well-funded intelligence agencies.
Deepfakes will be particularly useful to nonstate actors, such as insurgent groups and terrorist organizations, which have historically lacked the resources to make and disseminate fraudulent yet credible audio or video content. These groups will be able to depict their adversaries—including government officials—spouting inflammatory words or engaging in provocative actions, with the specific content carefully chosen to maximize the galvanizing impact on their target audiences. An affiliate of the Islamic State (or ISIS), for instance, could create a video depicting a U.S. soldier shooting civilians or discussing a plan to bomb a mosque, thereby aiding the terrorist group’s recruitment. Such videos will be especially difficult to debunk in cases where the target audience already distrusts the person shown in the deepfake. States can and no doubt will make parallel use of deepfakes to undermine their nonstate opponents.

Deepfakes will also exacerbate the disinformation wars that increasingly disrupt domestic politics in the United States and elsewhere.
In 2016, Russia’s state-sponsored disinformation operations were remarkably successful in deepening existing social cleavages in the United States. To cite just one example, fake Russian accounts on social media claiming to be affiliated with the Black Lives Matter movement shared inflammatory content purposely designed to stoke racial tensions. Next time, instead of tweets and Facebook posts, such disinformation could come in the form of a fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.

Perhaps the most acute threat associated with deepfakes is the possibility that a well-timed forgery could tip an election. In May 2017, Moscow attempted something along these lines. On the eve of the French election, Russian hackers tried to undermine the presidential campaign of Emmanuel Macron by releasing a cache of stolen documents, many of them doctored. That effort failed for a number of reasons, including the relatively boring nature of the documents and the effects of a French media law that prohibits election coverage in the 44 hours immediately before a vote. But in most countries, most of the time, there is no media blackout, and the nature of deepfakes means that damaging content can be guaranteed to be salacious or worse. A convincing video in which Macron appeared to admit to corruption, released on social media only 24 hours before the election, could have spread like wildfire and proved impossible to debunk in time.
Deepfakes may also erode democracy in other, less direct ways. The problem is not just that deepfakes can be used to stoke social and ideological divisions. They can create a “liar’s dividend”: as people become more aware of the existence of deepfakes, public figures caught in genuine recordings of misbehavior will find it easier to cast doubt on the evidence against them. (If deepfakes were prevalent during the 2016 U.S. presidential election, imagine how much easier it would have been for Donald Trump to have disputed the authenticity of the infamous audiotape in which he brags about groping women.) More broadly, as the public becomes sensitized to the threat of deepfakes, it may become less inclined to trust news in general. And journalists, for their part, may become more wary about relying on, let alone publishing, audio or video of fast-breaking events for fear that the evidence will turn out to have been faked.
DEEP FIX
There is no silver bullet for countering deepfakes. There are several legal and technological approaches—some already existing, others likely to emerge—that can help mitigate the threat. But none will overcome the problem altogether. Instead of full solutions, the rise of deepfakes calls for resilience.
Three technological approaches deserve special attention. The first relates to forensic technology, or the detection of forgeries through technical means. Just as researchers are putting a great deal of time and effort into creating credible fakes, so, too, are they developing methods of enhanced detection. In June 2018, computer scientists at Dartmouth and the University at Albany, SUNY, announced that they had created a program that detects deepfakes by looking for abnormal patterns of eyelid movement when the subject of a video blinks. In the deepfakes arms race, however, such advances serve only to inform the next wave of innovation. In the future, GANs will be fed training videos that include examples of normal blinking. And even if extremely capable detection algorithms emerge, the speed with which deepfakes can circulate on social media will make debunking them an uphill battle. By the time the forensic alarm bell rings, the damage may already be done.
A second technological remedy involves authenticating content before it ever spreads—an approach sometimes referred to as a “digital provenance” solution. Companies such as Truepic are developing ways to digitally watermark audio, photo, and video content at the moment of its creation, using metadata that can be logged immutably on a distributed ledger, or blockchain. In other words, one could effectively stamp content with a record of authenticity that could be used later as a reference to compare to suspected fakes....
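The core idea of digital provenance can be sketched very simply: hash content at the moment of creation, record the hash in an append-only log, and check later copies against that log. The following is my own toy illustration; real systems such as Truepic involve secure capture hardware, cryptographic signatures and actual distributed ledgers:

```python
import hashlib
import time

# Toy sketch of the "digital provenance" idea: record a content hash at
# creation time in an append-only log, then verify later copies against it.
ledger = []  # stand-in for an immutable distributed ledger

def register(content: bytes) -> dict:
    """Log a fingerprint of newly created content."""
    entry = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": time.time(),
        # Chain each entry to the previous one so tampering is detectable.
        "prev": ledger[-1]["sha256"] if ledger else None,
    }
    ledger.append(entry)
    return entry

def verify(content: bytes) -> bool:
    """True if this exact content was registered at creation time."""
    digest = hashlib.sha256(content).hexdigest()
    return any(e["sha256"] == digest for e in ledger)

original = b"frame data of an authentic video"
register(original)
print(verify(original))                           # True: matches the record
print(verify(b"frame data of a doctored video"))  # False: no provenance entry
```

Because even a one-bit alteration changes the hash, a doctored copy fails verification, which is the "record of authenticity" the article describes.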
If these technological fixes have limited upsides, what about legal remedies? Depending on the circumstances, making or sharing a deepfake could constitute defamation, fraud, or misappropriation of a person’s likeness, among other civil and criminal violations. In theory, one could close any remaining gaps by criminalizing (or attaching civil liability to) specific acts—for instance, creating a deepfake of a real person with the intent to deceive a viewer or listener and with the expectation that this deception would cause some specific kind of harm. But it could be hard to make these claims or charges stick in practice. To begin with, it will likely prove very difficult to attribute the creation of a deepfake to a particular person or group. And even if perpetrators are identified, they may be beyond a court’s reach, as in the case of foreign individuals or governments....
In the meantime, democratic societies will have to learn resilience. On the one hand, this will mean accepting that audio and video content cannot be taken at face value; on the other, it will mean fighting the descent into a post-truth world, in which citizens retreat to their private information bubbles and regard as fact only that which flatters their own beliefs. In short, democracies will have to accept an uncomfortable truth: in order to survive the threat of deepfakes, they are going to have to learn how to live with lies.
“But it cannot be the duty, because it is not the right, of the state to protect the public against false doctrine. The very purpose of the First Amendment is to foreclose public authority from assuming a guardianship of the public mind through regulating the press, speech, and religion. In this field, every person must be his own watchman for truth, because the forefathers did not trust any government to separate the true from the false for us.” U.S. Supreme Court in Thomas v. Collins, 323 U.S. 516, 545 (1945)
Researchers at Duke University are developing technology for near real-time TV political fact checking. Phys.org writes:
A Duke University team expects to have a product available for election year that will allow television networks to offer real-time fact checks onscreen when a politician makes a questionable claim during a speech or debate.
The mystery is whether any network will choose to use it.
The response to President Donald Trump's Jan. 8 speech on border security illustrated how fact-checking is likely to be an issue over the next two years. Networks briefly considered not airing Trump live and several analysts contested some of his statements afterward, but nobody questioned him while he was speaking.
Duke already offers an app, developed by professor and Politifact founder Bill Adair, that directs users to online fact checks during political events. A similar product has been tested for television, but is still not complete.
The TV product would call on a database of research from Politifact, Factcheck.org and The Washington Post to point out false or misleading statements onscreen. For instance, Trump's statement that 90 percent of the heroin that kills 300 Americans each week comes through the southern border would likely trigger an onscreen explanation that much of the drugs were smuggled through legal points of entry and wouldn't be affected by a wall.
The Duke Tech & Check Cooperative conducted a focus group test in October, showing viewers portions of State of the Union speeches by Trump and predecessor Barack Obama with fact checks inserted. It was a big hit, Adair said.
"People really want onscreen fact checks," he said. "There is a strong market for this and I think the TV networks will realize there's a brand advantage to it."
If that's the case, the networks aren't letting on. None of the broadcast or cable news divisions would discuss Duke's product when contacted by The Associated Press, or their own philosophies on fact checking.
Network executives are likely to tread very carefully because of technical concerns about how it would work, the risk of getting something wrong, and the suspicion that some viewers might consider the messages a political attack.
"It's an incredibly difficult challenge," said Mark Lukasiewicz, longtime NBC News executive who recently became dean of Hofstra University's communications school.
This shows the complexity of trying to implement defenses against dark free speech (lies, deceit, deepfakes, unwarranted opacity, unwarranted emotional manipulation, etc.) in America. With a few exceptions such as defamation, false advertising and child porn, American law recognizes lies and deceit as deserving of as much protection as honest speech.
America needs to somehow harden its defenses against dark free speech without enabling authoritarians and liars to use those defenses as a weapon against the opposition or the public interest. It is going to be an extremely difficult fight, assuming significant headway is even possible. Maybe it is time for professional broadcast news outlets to stop real-time broadcasting of politicians' speeches and rhetoric; after-the-fact fact-checking is far less effective than real-time fact-checking. And maybe it is time to begin a long fight to re-establish the old, since-repealed fairness doctrine as a partial antidote to dark free speech.
A recent NPR broadcast segment produced by the This American Life program focused on an allegation of sexual misconduct by a woman against her anesthesiologist while she was in labor.
Not only were her allegations disbelieved by everyone, but the police detective assigned to her case lied to her. He never took her allegation seriously and falsely claimed he was doing all sorts of things to advance her case, when in fact he did nothing beyond talking to her from time to time.
The 10 minute broadcast segment: https://www.thisamericanlife.org/669/scrambling-to-get-off-the-ice/act-two-2
The transcript: https://www.thisamericanlife.org/669/transcript (starts at Act Two: Going Under)
The segment is about people in difficult situations who are trying to move and fix things. For a while they are running in place, trying one tactic after another, hoping something will work.
When Jessica Hopper is inappropriately groped by an anesthesiologist during labor, she tries to out him that same day, to a roomful of hospital staff who don’t believe her. That sets her on a years-long mission to get someone to take up her cause. She exhausts herself trying. And then finds out that at least one person had heard her: someone she hadn’t reached out to on her own.
On March 1, 2012, I was in the hospital delivering my son. And an anesthesiologist repeatedly groped me while administering my epidural. I've told the story of what he did to me again and again, dozens of times over the last seven years to the hospital, the police, the detectives, my attorney, the state medical licensing investigator, my victim advocate, a judge, a reporter, another detective, and eventually people close to me.
I pinpoint that moment as when I know something was off, how I read his name tag and took note of his name, where he stood, where the light was in the room, approximately what time of day. I detail how he put his hands on my breasts, cupped and held them. I describe the touch as sexual and not clinical. I describe how he was silent when I asked, what are you doing? And how he did it again. It did not stop until I said, what the [BLEEP] are you doing? I explain how he did not look at me and just left the room, how I told my husband immediately after he came back in the room.
About an hour and a half later, within a minute or two of delivering my son, I told the entire room what the doctor had done to me. But how I told them came out sarcastic and nervous, almost like a joke, saying that I was so happy to have an epidural that I almost didn't mind that the doctor had felt me up. I knew they heard me because the neonatal nurses weighing my son froze, and one locked eyes with me. My midwife told me, don't say that. That didn't happen. Don't say that.
The state's attorney declined to take her case due to lack of evidence: 'he said, she said' cases were impossible to prosecute, and one woman's allegations alone were insufficient. Jessica understood there was almost no chance of getting the doctor to face consequences. After the incident, Jessica stopped going to doctors and dentists because she did not want to be touched. Finally she gave up: "On Valentine's Day 2015, feeling deeply discouraged, I told my lawyer to drop my case. I didn't talk about it or tell friends or family because I just wanted this all to go away. I tried to forget, but I couldn't."
The situation changed for Jessica some years later only after another woman came forward and made the same allegations against the same doctor.
He [an investigator] asked me if I would be willing to testify against the doctor. I said yes. Finally, I was being believed because there were two of us. I got off the phone and involuntarily screamed over and over before collapsing on the floor, sobbing. I was furious there were now two of us. I was elated there were now two of us. We were not in this alone.
What I'd learned about the woman who'd come forward was that she was undocumented, a single mom. She did not speak much English, and she was testifying. I felt overcome with love and gratitude for her, this brave woman I didn't know, this woman who is taking a risk coming forward.
Jessica's doctor had his medical license suspended for a minimum of three years, and he was fined $15,000 for what he did to the other woman. He was not disciplined for anything relating to Jessica because there wasn't enough proof for her claims.
Al Franken - Was he treated fairly?
What about Brett Kavanaugh? What about politics?: Remember the sex misconduct allegations against Brett Kavanaugh? At least two credible women alleged sexual misconduct, but of different kinds. The conservative tribe (republicans, populists, Evangelical Christians, etc.) mostly rejected the claims of both women, sometimes as lies, sometimes as confusion, sometimes as insufficient. The FBI did not do a thorough investigation, so there was no serious effort to collect available evidence. The liberal tribe mostly accepted the claims of both women.
What that says about politics, its morals and how it works is simple: Men in the conservative political tribe can get away with stuff, but men in the liberal political tribe could have a harder time pulling it off. In broader society, two allegations are probably more likely to be taken seriously and investigated seriously. In politics they may not be taken or investigated seriously. It depends on the tribe you are in.
Brett Kavanaugh - Were his accusers treated fairly?
B&B orig: 3/5/19
Physicists have a gold standard for the burden of proof known as five sigma. A new analysis shows that even the most conservative climate data have passed this point.
Scientists released the analysis of satellite temperature data in Nature Climate Change to mark 40 years of observations. Until satellites came about in the 1970s, scientists had to rely on weather balloons that provided sparse, imperfect data on what was going on in the atmosphere. The satellites were transformative, allowing scientists to improve models, validate past theories about climate change, and generally showing that human ingenuity is astounding. We can measure temperature from space!
To commemorate this achievement, the new study looked at three major satellite temperature datasets that were created using slightly different methods. The researchers then used an analysis to tease out the signal of global warming from the noise of background variability, and charted its significance. For two of the datasets, they found the global warming signal emerged at the five sigma level in just 27 years. But by 40 years, even the most conservative dataset, kept at the University of Alabama, Huntsville, cleared the five sigma threshold. In everyday language, that means there's roughly a 1-in-3.5 million probability that the warming we've seen is due to random chance.
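For readers curious where the "1-in-3.5 million" number comes from: it is just the tail probability of a normal distribution at five standard deviations. Here is a quick sketch in Python (the function name is mine, not from the study, and this is a one-tailed calculation, the convention physicists typically use for five sigma):

```python
import math

def sigma_to_one_tailed_p(sigma: float) -> float:
    """One-tailed p-value for a given sigma level under a standard
    normal distribution, via the complementary error function."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = sigma_to_one_tailed_p(5.0)
# odds of about 1 in 3.5 million that the signal is random noise
print(f"p = {p:.3e}, i.e. roughly 1 in {1 / p:,.0f}")
```

Running this reproduces the roughly 1-in-3.5 million figure quoted in the coverage of the study.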
“Five sigma is a big deal for physicists,” Ben Santer, the lead author of the study and a climate scientist at Lawrence Livermore National Laboratory, told Earther. “It is the gold standard for discovery in physics. When the announcement was made for the Higgs boson, that was the big deal, and the detection of the particle was at a five sigma threshold. Crossing five sigma ain’t no minor warming.”
The new paper puts a nail in the coffin of the tired satellite data argument. That isn’t to say jabronis like Cruz will stop using it, but it is now clear, to the five sigma level, just how bad-faith the talking point is.
Reuters reports that a new analysis of temperature data has increased the level of certainty that humans are the main cause of global warming.
Evidence for man-made global warming has reached a “gold standard” level of certainty, adding pressure for cuts in greenhouse gases to limit rising temperatures, scientists said on Monday.
They said confidence that human activities were raising the heat at the Earth’s surface had reached a “five-sigma” level, a statistical gauge meaning there is only a one-in-a-million chance that the signal would appear if there was no warming.
Such a “gold standard” was applied in 2012, for instance, to confirm the discovery of the Higgs boson subatomic particle, a basic building block of the universe.
Benjamin Santer, lead author of Monday’s study at the Lawrence Livermore National Laboratory in California, said he hoped the findings would win over skeptics and spur action.
The confidence one can have that skeptics will be won over by this finding is a ten-sigma thing, about 1 in a gazillion chance. Dr. Santer, good soul that he is, needs to get his brains checked for anomalies, radiation leaks, photon eruptions, and other medical stuff. This will be used by climate change skeptics, a/k/a/ Russian Roulette players, as more evidence of a deep state conspiracy running false flag operations to destroy America and all that is good and decent.
Yesterday, NPR broadcast a deeply disturbing segment on how things are going to work or fail to work in congress. The segment is about a recent hearing in the House of Representatives. The hyper-partisanship, rage and mutual hate on display here is no less bitter than what Americans got to see in the Michael Cohen hearing last week. There is no obvious reason to think that this situation will change any time in the foreseeable future. The democratic-republican divide prevents any cooperation whatever, at least on 'political' matters such as investigating Trump. It is fair to see this situation as a test of the robustness of liberal democracy with our republican form of government versus an encroaching authoritarianism that envisions a different form of government.
The segment was produced by the This American Life program that NPR airs. The 34 minute segment, New Sheriffs in Town, described the planning and execution of the House Judiciary Committee's first public hearing on activities related to President Trump. Their first witness was intended to be acting Attorney General Matthew Whitaker.
The segment is here: https://www.thisamericanlife.org/669/scrambling-to-get-off-the-ice/act-one-2
The transcript is here: https://www.thisamericanlife.org/669/transcript
The open hearing was planned for weeks. Whitaker was told far in advance of the questions the committee wanted to raise with him. The democrats wanted the hearing to show Americans what they think was going on. They believe that Whitaker was a hatchet man to protect Trump from Special Counsel Mueller's investigation, which potentially amounts to obstruction of justice. On the other hand, republicans on the committee claimed the hearing was not warranted, and they wanted to hear from people they hate, including Deputy Attorney General Rod Rosenstein. Republicans believe Rosenstein is out to get Trump for no reason other than pure partisanship.
The last two years of republican inquiries have been focused mainly on investigating alleged wrongdoing by pro-democratic officials in the FBI and the Department of Justice. Their inquiries into activities by Trump, his campaign and Russia were insincere at best and almost non-existent at worst.
Republicans opposed and delayed the hearing as much as they possibly could. The republican strategy was to discredit the hearing and witness testimony so that half the country would believe it was a partisan sham of little or no importance, regardless of the evidence. The Department of Justice also acted to neuter or completely derail the hearing as much as possible. Shortly before the hearing, the DoJ told the House committee that Whitaker would not testify unless democrats pledged in writing not to issue a subpoena if they believed that Whitaker was refusing to answer questions during the hearing. That nearly caused the hearing to collapse.
Democrats caved in and made the pledge. They were desperate to get Whitaker's testimony before the American people. Republicans in the House and the DoJ were desperate to block Whitaker's testimony before the American people, and failing that, to limit it as much as possible to irrelevant fluff.
Cooperators in the House over time
Advantage republicans: Judiciary Committee chairman Jerry Nadler's strategy in questioning Whitaker was to raise questions with yes or no answers in an attempt to get at key points and to be clear about it. However, with no threat of a subpoena to force answers out of him, Whitaker's strategy was brilliant and as effective as it possibly could be under the circumstances.
The republicans won.
Here is how Whitaker did it. Instead of giving yes or no answers to questions, Whitaker deflected, obfuscated and spent as much precious time as possible not saying yes or no. Each committee member had only 5 minutes. That is so little time as to be ridiculous for a hearing like this with a hostile witness, but that is the stupid rule the House pretends to do its job under.
Here is the relevant transcript:
Narrator Zoe Chace: Everyone's vote on adjournment is painstakingly read out loud. That's four minutes less hearing, classic minority party stalling tactic. Also literally, Collins [republican ranking member] doesn't think we should be having this hearing. Whitaker opening statement, Nadler moves to questions. Right off the bat, Nadler's tactic of yes or no, Mr. Whitaker, gets mixed results.
Jerry Nadler: Well, it's our understanding that at least one briefing occurred in December before your decision not to recuse yourself on December 19 and Christmas Day. Is that correct?
Matthew Whitaker: What's the basis for that question, sir?
Nadler: Yes or no? Is it correct?
Whitaker: I mean, I--
Nadler: It is our understanding that at least one briefing occurred between your decision not to recuse yourself on December 19 and six days later, Christmas Day. Is that correct? Simple enough question, yes or no?
Whitaker: Mr. Chairman, again, what is the basis for your question? You're saying that it is your--
Nadler: Sir, I'm asking the questions. I only have five minutes, so please answer, yes or no.
Whitaker: No, Mr. Chairman. I'm going to-- you were asking me a question, it is your understanding-- can you tell me where you get the basis?
Nadler: No, I'm not going to tell you that. I don't have time to get into that. I'm just asking you if that's correct or not. Is it correct? Were you briefed in that time period between December 19 and Christmas Day? Simple question, yes or no?
Whitaker: Congressman, if every member here today asked questions based on their mere speculation--
Nadler: All right, never mind. At any point--
Whitaker: You don't have an actual basis for your questions.
Nadler: Yes or no.
Narrator Chace: Whitaker plods slowly through every answer, taking time to pull his tiny glasses on and off his face and regularly declining to answer. "Congressman, thank you for that question. Congressman, I know this is an important issue to you." And then even when he does answer, the answers are swaddled in weird, half non-answers.
Whitaker: Mr. Chairman, as I said earlier today in my opening remarks, I do not intend today to talk about my private conversations with the President of the United States. But to answer your question, I have not talked to the President of the United States about the special counsel's investigation.
Nadler: So the answer is no, thank you. To any other White House official?
Chace: The effect of this is that it's really hard to tell what really went on while Whitaker was AG, which part matters, and importantly, who's being unreasonable-- Democrats for yelling yes or no at him, or Whitaker for being obstinate? Does he know stuff and he's hiding it? Does he not know stuff, and they're berating him? I truly cannot tell.
In that exchange and the entire hearing, Nadler, the democrats and democracy clearly lost. It wasn't mixed results, as Chace called it. It was a total win for Trump, Whitaker, the republican party and populist authoritarianism. The American people and democracy got nothing much out of Whitaker or the hearing except a major self-inflicted wound (assuming the American people are at least partly responsible for this mess, and maybe they are).
Chace goes on to point out that the only way democrats could even make their points, so that the American people could understand why there was any hearing at all, was to use their precious time to describe what it was that Whitaker did and thus why they wanted him to answer their questions.
Democracy vs authoritarian kleptocracy: This is how federal governance is going to play out for the foreseeable future. The hyper-partisanship raises the question of whether democracy can survive or whether some form of corrupt Trump-populist authoritarianism will slowly engulf and destroy democracy. Given how this hearing played out, one can see the advantage that relentless authoritarianism, aided by tactics such as obstructionism, plausible deniability, doubt and dark free speech, including lies of omission (deflecting questions), has in view of weak, seemingly ineffective defenses of democracy. Time will tell how this war plays out. With any luck, the defenses will turn out to be stronger than they appear from this debacle.