DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide some sources for the facts and truths you rely on if you are asked for that. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insult makes people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Saturday, July 17, 2021

Just how human are we?

An AP article, Just 7% of our DNA is unique to modern humans, study shows, indicates that humans are a bit less human than maybe most of us thought. The AP writes:
Just 7% of our genome is uniquely shared with other humans, and not shared by other early ancestors, according to a study published Friday in the journal Science Advances.

“That’s a pretty small percentage,” said Nathan Schaefer, a University of California computational biologist and co-author of the new paper. “This kind of finding is why scientists are turning away from thinking that we humans are so vastly different from Neanderthals.”

The research draws upon DNA extracted from fossil remains of now-extinct Neanderthals and Denisovans dating back to around 40,000 or 50,000 years ago, as well as from 279 modern people from around the world.

Scientists already know that modern people share some DNA with Neanderthals, but different people share different parts of the genome. One goal of the new research was to identify the genes that are exclusive to modern humans.

It’s a difficult statistical problem, and the researchers “developed a valuable tool that takes account of missing data in the ancient genomes,” said John Hawks, a paleoanthropologist at the University of Wisconsin, Madison, who was not involved in the research.

The researchers also found that an even smaller fraction of our genome — just 1.5% — is both unique to our species and shared among all people alive today. Those slivers of DNA may hold the most significant clues as to what truly distinguishes modern human beings. 
“We can tell those regions of the genome are highly enriched for genes that have to do with neural development and brain function,” said University of California, Santa Cruz computational biologist Richard Green, a co-author of the paper.

In 2010, Green helped produce the first draft sequence of a Neanderthal genome. Four years later, geneticist Joshua Akey co-authored a paper showing that modern humans carry some remnants of Neanderthal DNA. Since then, scientists have continued to refine techniques to extract and analyze genetic material from fossils.


Chapter review: Noiseless Rules; Objective Ignorance; The Valley of the Normal

Context
This is a review of chapters 10-12 of the 2021 book, Noise: A Flaw in Human Judgment, by Nobel laureate Daniel Kahneman et al. These chapters mostly elaborate on the concepts raised in chapter 9, Judgments and Models, which was reviewed here yesterday.

Noise is written for a general audience, but it relies on core concepts in statistics to make its points. The book’s main point is that humans are surprisingly noisy in their judgments. In the context of judgment or prediction science, noise refers to being randomly wrong. That kind of error is different from being wrong due to bias, which is non-random and thus generally predictable error. Humans are both biased and noisy, and both flaws are hard to be self-aware about. People who do not understand that cannot understand the human condition generally or politics in particular.
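
To make the bias/noise distinction concrete, here is a minimal Python sketch of my own (the underwriters and premium numbers are invented for illustration, not taken from the book). Bias is how far the group’s average judgment sits from the correct value; noise is how scattered the individual judgments are around their own average.

import statistics

# Hypothetical example: ten underwriters quote the same policy.
# Assume the objectively correct premium is $1,000.
true_value = 1000
judgments = [850, 1200, 980, 1100, 700, 1300, 950, 1050, 900, 1250]

mean_judgment = statistics.mean(judgments)
bias = mean_judgment - true_value      # shared, predictable error
noise = statistics.pstdev(judgments)   # scatter of the judgments around their own mean

print(f"average judgment: {mean_judgment:.0f}")
print(f"bias:  {bias:+.0f}")
print(f"noise: {noise:.0f}")
# The book's error equation: overall error (MSE) = bias**2 + noise**2.

Reducing either term reduces overall error, which is why getting rid of noise helps even when bias remains.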

Some of the relevant statistics concepts are a bit subtle and/or counterintuitive, but they are explained clearly using limited technical language. Some modest understanding of relevant statistical concepts is necessary to understand why (i) most humans, especially self-proclaimed demagogues, political experts and ideologue blowhards, are usually surprisingly bad at making judgments, (ii) we are so confident in our judgments when there is no objective basis for confidence, and (iii) no machine, software or human can ever be perfect in making judgments. Regarding point (i), humans are so bad that most barely do better than random guessing most of the time, including most experts dealing with their own area of expertise. One leading prediction science expert, Philip Tetlock, summed it up like this in his 2005 book, Expert Political Judgment:
“The average expert was roughly as accurate as a dart-throwing chimpanzee.”
That statement was based on about 20 years of research and thousands of judgments made by hundreds of experts in various fields.[1]


Chapter 10, Noiseless Rules
Algorithms: Simple and complex models of judgment and behavior are all algorithms. The rules that algorithms are built on are noiseless: applying an algorithm to a decision applies the same analysis every time, free of noise. Algorithms can be wrong or flawed to a varying extent, ranging from barely flawed to useless, but at least the output is free of noise. The quality of an algorithm depends on how good the input rules are. If the rules are reasonably good, the algorithm output is reasonably good, usually better than human experts. If not, then not.

Algorithms do not have to be complicated. They can be based on just one or two rules, e.g., rank job candidates with high communication skills and/or a high level of motivation above other job candidates. One can run the numbers on that algorithm using the ancient fingers-and-toes technology for scoring people. It isn’t rocket science. It’s common sense. One just needs to understand the human condition regarding judgment and what algorithms are and can do.
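
To show how little machinery a noiseless rule needs, here is a minimal Python sketch (the candidates, scores and equal weights are invented for illustration, not taken from the book). The same inputs always produce the same ranking, so the judgment contains no noise.

# Hypothetical candidates scored 1-10 on the two rules from the example above.
candidates = {
    "Alice": {"communication": 8, "motivation": 6},
    "Bob": {"communication": 5, "motivation": 7},
    "Cara": {"communication": 9, "motivation": 8},
}

def score(c):
    # Equal weights: the fingers-and-toes version of a simple linear model.
    return c["communication"] + c["motivation"]

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked)  # ['Cara', 'Alice', 'Bob'], the same every time, noise free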

The machines rise up and overthrow humans (The Terminator): No, that is not going to happen. Kahneman is not arguing to replace human judgment with algorithms.[2] One can base decisions on input from both humans and algorithms. Sometimes humans know things an algorithm can’t, e.g., golly, this job candidate scores really high on communications skill and motivation, but jeez, he is Larry Kudlow[1] and the job requires excellent economic judgment -- let’s not go there -- bad algorithm, bad, bad algorithm. 

Kahneman calls this the ‘broken leg’ scenario. The broken leg rule: if someone breaks their leg during the day, they are very unlikely to go to the movies later that same day. If the human knows about the broken leg and the algorithm does not, the broken leg is the signal for the human to override the algorithm’s prediction.
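
Here is one way to picture the broken-leg idea in code, a toy sketch of my own (the rule, the probabilities and the override condition are all invented): the model's prediction stands unless the human knows a decisive fact the model never sees.

def model_prediction(weekly_moviegoer: bool) -> float:
    # The simple model knows only one thing about the person.
    return 0.9 if weekly_moviegoer else 0.2

def human_judgment(weekly_moviegoer: bool, broke_leg_today: bool) -> float:
    # The human defers to the model unless a decisive fact the model
    # never sees (the broken leg) justifies an override.
    if broke_leg_today:
        return 0.01
    return model_prediction(weekly_moviegoer)

print(human_judgment(True, broke_leg_today=False))  # 0.9, defer to the model
print(human_judgment(True, broke_leg_today=True))   # 0.01, override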

Artificial intelligence (AI): Kahneman points out that while humans can come up with good rules for simple but effective prediction algorithms, AI does something that humans cannot. Specifically, AI working with tons of data can spot, describe and evaluate patterns in the data that humans simply cannot see. AI can spot broken legs that humans cannot see. Some of those patterns or broken legs are useful as rules in prediction algorithms. The data here is consistent. In terms of prediction ability there is a ranked order:
Most experts and chimpanzees < most simple algorithms < most modestly more complicated algorithms (improper linear models) < significantly more complicated algorithms (linear regression models) < most machine learning algorithms (AI models based on huge data sets that contain hidden broken legs)

Chapter 11, Objective Ignorance
This chapter gets at some concepts that have been of intense personal interest ever since I read Tetlock’s 2005 book, Expert Political Judgment,[3] and his 2015 book, Superforecasting: The Art and Science of Prediction.[3] Specifically, what is the outer limit of human knowledge, how far into the future can human knowledge project, and what defines the limit? What defines the limit is lack of knowledge, in particular (i) unknown unknowns and (ii) self-inflicted ignorance, e.g., lack of the moral courage needed to honestly face reality. These factors limit human projection into the future to about 24 months or less, probably mostly to about 9-15 months. Predictions farther out in time tend to fade into random guessing or chimpanzee status.

The human element -- denial of ignorance: Objective ignorance is fostered by some unfortunate human traits. In particular, humans who have to make decisions and are successful tend to gain confidence in their ability to be right. They also tend to become expert at rationalizing failed predictions into successes or near successes. Kahneman comments:
“One review of intuition in managerial decision making defines it as ‘a judgment for a given course of action that comes to mind with an aura or conviction of rightness or plausibility, but without clearly articulated reasons or justifications -- essentially knowing but without knowing why.’ .... Confidence is no guarantee of accuracy, however, and many confident predictions turn out to be wrong. While both bias and noise contribute to prediction errors, the largest source of such errors is not the limit on how good predictive judgments are. It is the limit on how good they could be. This limit, which we call objective ignorance, is the focus of this chapter. .... In general, however, you can safely expect that people who engage in predictive tasks will underestimate their objective ignorance. Overconfidence is one of the best documented cognitive biases. .... wherever there is prediction, there is ignorance and more of it than you think.”
Kahneman goes on to point out that people who believe in the predictability of things that are simply not predictable hold an attitude that amounts to a denial of ignorance. And, he asserts that “the denial of ignorance is all the more tempting when ignorance is vast. .... human judgment will not be replaced. That is why it must be improved.”

Jeez, that’s not very comforting, especially when it is manifest in stubborn, self-centered, ignorant political and business leaders. Does flawed human judgment constitute an existential threat to modern civilization, and maybe even to the human species itself? I came to the conclusion and belief long ago that it does. Tetlock helped me see that possibility with clarity. And Kahneman reinforces it again. 


Chapter 12, The Valley of the Normal
Humans are lulled into overconfidence by routine life with few surprises. It’s a great big valley of normalcy, at least in rich countries in times of relative peace and stability. Kahneman raises a huge red flag about the limits of what the social sciences actually know, regardless of what experts say or think they know. Social science research results rarely do better than letting one predict how two variables move in tandem (concordance) about 56% of the time at most (correlation coefficient ~0.20). Randomness is 50% concordance (correlation coefficient ~0.00, i.e., no correlation or relationship beyond random chance between the measured variables). In other words, the social sciences understand much less about the world they study than physicists, who can see concordance at ~70% (correlation coefficient ~0.60) in their data.
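
For readers who want to see where those percentages come from: for two jointly normal variables, the percent-concordant figure can be computed from the correlation coefficient as PC = 0.5 + arcsin(r)/π, a standard statistical result. A quick sketch (mine, not the book's) reproduces the numbers above.

import math

def percent_concordant(r: float) -> float:
    # Probability that if case A ranks higher than case B on one variable,
    # it also ranks higher on the other (assuming jointly normal variables).
    return 0.5 + math.asin(r) / math.pi

for r in (0.0, 0.20, 0.60):
    print(f"correlation {r:.2f} -> concordance {percent_concordant(r):.0%}")
# correlation 0.00 -> concordance 50%
# correlation 0.20 -> concordance 56%
# correlation 0.60 -> concordance 70%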

Objective ignorance causes us to understand less because social science research data usually doesn't explain much about the hyper-complex, unpredictable real world. Once again, how the human mind deals with incomprehensible reality is front and center -- we routinely see causes in events for which there is no basis to know the cause:
“More broadly, our understanding of the world depends on our extraordinary ability to construct narratives that explain the events we observe. The search for causes is almost always successful because causes can be drawn from an unlimited reservoir of facts and beliefs about the world [we unconsciously apply hindsight to explain the inexplicable]. .... Genuine surprise occurs only when routine hindsight fails. This continuous interpretation of reality as it unfolds consists of the steady flow of hindsight in the valley of the normal. As we know from classic research on hindsight, even when subjective uncertainty does exist for a while, memories of it are largely erased when the uncertainty is resolved.”

Humans routinely apply causal thinking to the world and events that we experience. We do this to limit the cognitive load needed to make sense of the routine and to spot abnormalities, especially threats. This is part of the mostly unconscious clinical thinking that humans rely on most of the time, as discussed in the review of chapter 9. By contrast, statistical and mechanical (algorithmic) thinking requires discipline and a lot of conscious effort. Kahneman comments:
“Relying on causal thinking about a single case is a source of predictable errors. Taking the statistical view, which we will also call the outside view,[4] is a way to avoid these errors. .... The reliance on flawed explanations is perhaps inevitable, if the alternative is to give up on understanding the world. However, causal thinking and the illusion of understanding the past contribute to overconfident predictions of the future. .... the preference for causal thinking also contributes to the neglect of noise as a source of error, because noise is a fundamentally statistical notion.”
A final concept that Kahneman makes clear here is this: Correlation does not imply causation, but causation does imply correlation. The higher the correlation, the more we understand about what we observe. We often believe we understand events, but we generally cannot predict them, implying we do not really understand them.


Footnotes: 
1. In his 2015 book, Superforecasting: The Art and Science of Prediction, Tetlock singled out two self-professed experts (blowhards) as having exceptionally awful judgment. One was Larry Kudlow, the ex-president’s Director of the National Economic Council, and the other was his short-lived national security advisor, Lt. General Michael Flynn. Flynn is now a twice self-confessed and convicted (but pardoned!) felon who spews lies and slanders to gullible people for a living. Kudlow moved briskly on to host a Fox Business financial program, where he proudly and confidently spews bad advice at the poor people who listen to him mindlessly bloviate.

That is some real-world evidence that bad judgment includes not being able to pick competent people for really important jobs. That exemplifies just one of the non-trivial reasons the human species finds itself in the precarious situation it is in today, i.e., too often humans exercise bad judgment.

2. But really folks, given the data, sometimes humans should be replaced by algorithms. Some humans are real stinkers in terms of judgment ability, e.g., Larry Kudlow.

3. After reading Expert Political Judgment and letting what it taught finally sink in, I almost gave up on politics entirely. Humans aren't just chimpanzees; they are stubborn and arrogant chimpanzees who simply cannot handle the reality of their own deep flaws. Those cognitive and social flaws are inherent and unavoidable products of human evolution. What kept me going was Tetlock’s second book, Superforecasting, which showed that some humans can learn to rise above their evolutionary heritage and actually translate knowledge into at least modestly better political outcomes.

4. Kahneman’s outside view seems to get at what Thomas Nagel called the view from nowhere when he tried to envision reality without relying on the reality-distorting lens that the human mind and body constitute. What unfiltered reality actually looks like is a fascinating question.

Friday, July 16, 2021

An expert on tyranny and political violence makes a prediction

Fascist GOP narrative: peaceful tourists visiting the Capitol and taking selfies 
Reality: A coup attempt on 1/6/21


In a Washington Post opinion piece, Dana Milbank writes:
In September, I wrote that the United States faced a situation akin to the 1933 burning of Weimar Germany’s parliament, which Hitler used to seize power.

“America, this is our Reichstag moment,” the column said, citing the eminent Yale historian Timothy Snyder on the lessons of 20th-century authoritarianism. Snyder argued that President Donald Trump had “an authoritarian’s instinct” and was surrounding the election in “the authoritarian language of a coup d’etat.” Predicted Snyder: “It’s going to be messy.”

Trump enablers such as Sen. Lindsey Graham scoffed. “With all due respect to @Milbank, he’s in the bat$hit crazy phase of Trump Derangement Syndrome,” the South Carolina Republican tweeted, with a link to my column.

But now we know that 1933 was very much on the mind of the nation’s top soldier, Gen. Mark Milley, chairman of the Joint Chiefs of Staff. “This is a Reichstag moment,” Milley told aides of Trump’s “stomach-churning” lies about election fraud. “The gospel of the Führer,” Milley labeled Trump’s claims.

Milley, as reported in a forthcoming book by The Post’s Carol Leonnig and Philip Rucker, feared that people around Trump were seeking to “overturn the government,” saw that pro-Trump protesters would serve as “brownshirts in the streets” — and was determined that “the Nazis aren’t getting in” to block Joe Biden’s inauguration.

American democracy survived that coup attempt on Jan. 6. But the danger has not subsided. I called Snyder, who accurately predicted the insurrection, to ask how the history of European authoritarianism informs our current state.

“We’re looking almost certainly at an attempt in 2024 to take power without winning election,” he told me Thursday. Recent moves in Republican-controlled state legislatures to suppress the votes of people of color and to give the legislatures control over casting electoral votes “are all working toward the scenario in 2024 where they lose by 10 million votes but they still appoint their guy.”

History also warns of greater violence. “If people are excluded from voting rights, then naturally they’re going to start to think about other options, on the one side,” Snyder said. “But, on the other side, the people who are benefiting because their vote counts for more think of themselves as entitled — and when things don’t go their way, they’re also more likely to be violent.”

The extinguishing of our Reichstag fire on Jan. 6 made Trump’s failed coup less like 1933 Germany than 1923 Germany, when Hitler’s clownish Beer Hall Putsch failed. Historically, most coup attempts fail. “But a failed coup is practice for a successful coup,” Snyder said. This is what’s ominous about the Republicans’ determination to sabotage investigations that could help us learn from the Jan. 6 insurrection. Also ominous is the move in many Republican-controlled states to ban schools from teaching about systemic racism — “memory laws,” Snyder calls them — which “feeds into this authoritarian turn” by providing cover for the new attempts to disenfranchise more non-White voters. “They’re trying to ban the discussion of things like voter suppression, and it’s precisely the history of voter suppression which allows us to see it for what it is,” Snyder said.

Two things. Sen. Lindsey Graham is stupid, corrupt and a blind fascist in the bat$hit crazy phase of T**** Ignorance Syndrome. I'm not the only one who sees the 1/6 coup attempt as a coup attempt. Just because a bunch of corrupt fascists have not yet attained the power and single party rule that they are desperately fighting for right now in the guise of the corrupt fascist Republican Party, does not mean they are not corrupt fascists. It just means they have not achieved their goals yet. 

OK, that was three things. My mistake.

Fascist GOP narrative: peaceful tourists helping the Capitol police tidy up a bit 
Reality: A coup attempt on 1/6/21



Chapter review: Judgments and Models



Context
The 2021 book, Noise: A Flaw in Human Judgment, deals with the normal human illusion that the world is a mostly causally understandable and predictable place. The book was written by Nobel laureate Daniel Kahneman and two other authors. The main point of the book is to argue that noise in the real world is an undeniable and important but shockingly ignored factor in human judgment error. Judgment errors have two components, bias and noise. In the context of this book, noise is observed and measured as an innate human tendency to make errors that are not due to a known bias. With bias, one can generally predict the kinds of error people will tend to make. This aspect of the human condition is very well known and generally well accepted. A few people with the moral courage to face it even try to account for their personal biases to qualitatively and/or quantitatively reduce their errors.

On the other hand, noise in human judgment does not lead to predictable errors. That is not so well known and is hard for many people to accept. It manifests as random scatter in erroneous judgments with no apparent reason other than the uniqueness of the person making the mistakes.

Even various experts in social science, medicine and commerce, who all should know better, simply blow this aspect of reality off because it is psychologically uncomfortable and/or threatening. Human egos tend to be fragile and reflexively self-defensive in the face of threat. That helps mask unpredictability and sources of causation in real life. There is an awful lot of overconfidence going on among an awful lot of people, but it may be inevitable. Most humans just don’t easily learn and practice things outside the comfort zones of their cognitive capacity or their ego.

Most of us believe most events that have happened were causally understandable, predictable and easily explained. That is sometimes true in the physical sciences, but it usually isn’t true in the social sciences and human life generally, including politics. The reasons for overconfidence to the point of illusion are now reasonably understood. They are grounded in (i) how the human mind evolved to deal with reality with as little effort as possible, and (ii) an innate human inability to see and think about people and the world according to sets of rules, even very simple rules, that modestly but significantly reduce human error.

Based on the human error rates and the reasons for them discussed in Parts III, Noise in Predictive Judgments, and V, Improving Judgments, it seems reasonable to believe (my estimate) that if that content were mandatory subject matter in public schools and in post-high school education, society could maybe avoid about 10-15% of the error and waste inherent in how we do things. In a $20 trillion/year economy, that might translate to ~$2 trillion less economic and human waste per year, maybe more.

Chapter 9, like most of the rest of the book, relies heavily on statistics and basic statistical concepts. Because statistics is not how the human mind usually thinks, this review downplays statistical concepts and jargon as much as possible without rendering it incoherent or incorrect. 


Chapter 9: Judgments and Models
Chapter 9 focuses on why even models of reality that are so simple as to seem ridiculous are almost always better than most experts and non-experts in making real world predictive judgments. That means judgments about all kinds of things, from medical diagnoses and corporate hiring decisions to predictions of war and decisions whether to grant bail before trial to someone in jail. 

What models of reality do is take noise out of human judgments. That’s all they do. It is the only thing they do. Simple models can be flawed, but even then, they usually beat most everyone most of the time. Few people can beat models, even simple flawed ones. Why is this the case?

It is the case because humans usually apply a form of thinking that Kahneman calls “clinical judgment” to real world problems and decisions. Clinical judgment has noise in it because it is unique to the person applying it. Models do not have this variability. A model applies the same rules, uniformly, to every problem or decision of the same kind. Kahneman calls that kind of process “mechanical judgment.” It lacks noise because the rules are applied uniformly.

And that is where the difficulty in seeing and accepting the power of mechanical judgment compared to clinical judgment hits a brick wall. Imagine a successful, experienced doctor, or a researcher with a PhD and many peer-reviewed publications, or a high level business executive with years of training and experience being told their judgment is not as good as a simple model based on just two or three rules. It does not matter how solid the evidence is that the model is better most of the time. The model threatens ego and professional esteem.[1] Most people making judgments and predictions suffer from the illusion of validity, which is a false belief that their judgment is significantly better than it really is. One problem is that the future is usually quite uncertain and people just do not accept that reality. People believe they have the information they need to make the right judgment, but fail to understand how likely their decision is to be undone by fate. Another is that the mind tends to morph bad decisions into good ones over time by memory distorting processes. Kahneman summed it up like this:
“If you are as confident in your predictions as you are in your evaluation of cases, however, you are a victim of the illusion of validity. .... The reaction is easy to understand: Meehl’s[1] pattern contradicts the subjective experience of judgment, and most of us will trust our [often illusory] experience over a scholar’s claim.”
And, it gets even weirder. Researchers found that when they built a model of an individual professional, the model beat the professional most of the time. In other words, the model of you beats you. Models of judges beat judges, and that finding was based on 50 years’ worth of data. Kahneman comments:
“The model-of-the-judge studies reinforce Meehl's conclusion that the subtlety is largely wasted. Complexity and richness do not generally lead to more accurate predictions. .... In short, replacing you with a model of you does two things: it eliminates your subtlety and it eliminates your pattern (personal) noise.”
And, it gets even weirder than that. When researchers built and tested models of individuals with randomly chosen weights, not carefully fitted ones, the random models still generally did better than the experts:
“Their striking finding was that any linear model, when applied consistently to all cases, was likely to outdo human judges predicting an outcome from the same information. .... Or, to put it bluntly, it proved almost impossible in that study to generate a simple model that did worse than the experts did.”
Poor humans. What a hot mess. Professionals tend to dislike this line of research. No one likes their judgments being called noisy. Egos get banged up and shorts are in a twist. There is much consternation, huffing, puffing and criticisms of the research. So far, all the criticisms (over 20 by now) have been fully rebutted. The research and data are sound.
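
A tiny simulation makes it easier to see why even a crude, randomly weighted linear model tends to beat a noisy judge. This is just an illustrative toy of my own (synthetic cues, made-up weights and noise levels), not a reproduction of the studies Kahneman cites: the judge weighs the right cues but adds case-by-case noise, while the model applies imperfect weights identically to every case.

import random
import statistics

random.seed(0)

def simulate(n_cases=2000):
    true_weights = [0.5, 0.3, 0.2]  # how the outcome really depends on three cues
    # A 'random' linear model: arbitrary weights, fixed once, applied to every case.
    random_weights = [random.random() for _ in true_weights]
    judge_sq_err, model_sq_err = [], []
    for _ in range(n_cases):
        cues = [random.gauss(0, 1) for _ in true_weights]
        outcome = sum(w * c for w, c in zip(true_weights, cues)) + random.gauss(0, 0.5)
        # The judge uses the right weights but adds personal, case-by-case noise.
        judge = sum(w * c for w, c in zip(true_weights, cues)) + random.gauss(0, 0.8)
        # The model applies its imperfect weights the same way every time.
        model = sum(w * c for w, c in zip(random_weights, cues))
        judge_sq_err.append((judge - outcome) ** 2)
        model_sq_err.append((model - outcome) ** 2)
    print("judge mean squared error:", round(statistics.mean(judge_sq_err), 2))
    print("model mean squared error:", round(statistics.mean(model_sq_err), 2))

simulate()

With these made-up numbers the consistent model comes out ahead of the noisy judge, which is the qualitative pattern the chapter describes.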


Footnote: 
1. In 1954, psychology professor Paul Meehl published a book reviewing twenty studies that compared clinical judgment with mechanical (rule-based) prediction of the same outcomes. His analysis clearly indicated that mechanical beat clinical, but professionals really did not like that outcome. “Meehl discovered that clinicians and other professionals are distressingly weak in what they often see as their unique strength: the ability to integrate information.” Turns out that their perceived unique strength was their actual unique weakness.

A different kind of Cancel Culture

So, now Joe Biden wants to do outreach to isolated communities to inform them about Covid vaccines, and the Right is going apoplectic?


On top of all that, there are multiple sites on the internet telling us ways we can reach those who don't want to take the vaccine.

Then there is this:

Almost All U.S. COVID-19 Deaths Now in the Unvaccinated


Here is what I think:

There are two groups. The first is those in minority communities that distrust the government and the medical profession.

Then there are those who are just OBSTINATE, who refuse for religious reasons, conspiracy reasons, or just plain "I don't wanna" reasons.

We should ALL do all we can to do outreach to those who are fearful of vaccines and provide them with information and support, not ridicule them.

As for the second group, as nasty as it is for me to think this, maybe they just need to die out since they are going to be the ones dying. I would avoid them like the plague, make them unwelcome, and would go one step further: anyone who ends up spreading the virus because they went out in public unmasked and unvaccinated should be charged. That last one likely won't happen, but I wish it would.

Lastly, I would do NO outreach to that second group, NONE - ZIP. They won't change their minds anyway, so why waste an ounce of breath on them?

Time to Cancel Culture that group!

Thursday, July 15, 2021

Predicting economic collapse: 1972 predictions revisited

Wheeee!!! I bought a brand new Black Smoker!
Loud, proud, does not care about the environment
and wants everyone to know how he feels

In 1972, an MIT study generated some scenarios indicating that slowed economic growth would lead to significant societal collapse or reversals in the 21st century. Collapse meant that economic growth would slow, stop and maybe even reverse. That would be accompanied by a decreasing standard of living for most people. Presumably rich folks would be, as usual, just fine, happy and rolling in dough.

A recent reanalysis of that original study, done by a risk assessment wonk at KPMG (Gaya Herrington) using current data, indicates that according to two modeled scenarios the human race is on track to basically follow the 1972 predictions. The updated study is published in the Journal of Industrial Ecology. One scenario is ‘BAU2’, the business-as-usual scenario, and the other is ‘CT’, the comprehensive technology scenario. According to the new analysis, both predict that a collapse would start sometime in the next decade or two.


Gathering for a Black Smoker party!!

The study represents the first time a top analyst working within a mainstream global corporate entity has taken the ‘limits to growth’ [LtG] model seriously. Its author, Gaya Herrington, is Sustainability and Dynamic System Analysis Lead at KPMG in the United States. However, she decided to undertake the research as a personal project to understand how well the MIT model stood the test of time.

Titled ‘Update to limits to growth: Comparing the World3 model with empirical data’, the study attempts to assess how MIT’s ‘World3’ model stacks up against new empirical data. Previous studies that attempted to do this found that the model’s worst-case scenarios accurately reflected real-world developments. However, the last study of this nature was completed in 2014.

Herrington’s new analysis examines data across 10 key variables, namely population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint. She found that the latest data most closely aligns with two particular scenarios, ‘BAU2’ (business-as-usual) and ‘CT’ (comprehensive technology).

“BAU2 and CT scenarios show a halt in growth within a decade or so from now,” the study concludes. “Both scenarios thus indicate that continuing business as usual, that is, pursuing continuous growth, is not possible. Even when paired with unprecedented technological development and adoption, business as usual as modelled by LtG would inevitably lead to declines in industrial capital, agricultural output, and welfare levels within this century.”

Study author Gaya Herrington told Motherboard that in the MIT World3 models, collapse “does not mean that humanity will cease to exist,” but rather that “economic and industrial growth will stop, and then decline, which will hurt food production and standards of living… In terms of timing, the BAU2 scenario shows a steep decline to set in around 2040.”




Unfortunately, the scenario that was the least close fit to the latest empirical data happens to be the most optimistic pathway, known as ‘SW’ (stabilized world), in which civilization follows a sustainable path and experiences the smallest declines in economic growth—based on a combination of technological innovation and widespread investment in public health and education.



While focusing on the pursuit of continued economic growth for its own sake will be futile, the study finds that technological progress and increased investments in public services could not just avoid the risk of collapse, but lead to a new stable and prosperous civilization operating safely within planetary boundaries. But we really have only the next decade to change course.  
“The necessary changes will not be easy and pose transition challenges but a sustainable and inclusive future is still possible,” said Herrington.

The 1972 scenarios predicting bad outcomes tend to be better matches with reality in 2021 than the scenarios predicting better outcomes. Maybe this gives us a reasonable indication of what is to come if rich and powerful people, special interests and rigid ideologues keep opposing regulations to protect the environment and masses of people, just like they have been for decades.

No doubt, climate science deniers, government-hating political and Christian ideologues, crackpot conspiracy theorists and the carbon energy and chemicals sectors, e.g., ExxonMobil, Dow Chemical, Koch Industries, some or most transportation companies, etc., will reject any analysis like this as flawed, or whatever else they deem is needed to argue it into oblivion, keep profits flowing and/or keep the cognitive dissonance from reality-based ideological disturbances at bay. We wouldn’t want to upset anyone’s serene Feng Shui, would we?

Hm. Yup, at least some of us would love to see some upset Feng Shui among the elites, rabid ideologues and crackpots.

Anyway, the seeds of human self-destruction and long-term misery are hard-wired into our mostly irrational brains by evolution. Too bad we cannot learn from science or history. So let's just blindly blunder ahead into the hardship and misery that awaits the bottom ~99% of us. The misery, fun and games maybe start about 20 years from now, or thereabouts.


Real black smokers