DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide sources for the facts and truths you rely on if asked. If emotion is getting out of hand, get it back in hand. To avoid dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion; insults make people angry and defensive. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Tuesday, April 9, 2024

Regarding war machines without humans

In the last couple of days, PD has posted links to info about automated war machines and the use of artificial intelligence as a military tool for selecting targets to be destroyed. Here is some of what those links lead to.

This 2021 WaPo article (not behind a paywall) has a great 4:22 video about US thinking behind developing machines that can seek out human or non-human targets and destroy them. The thinking by US experts appears to be to develop these machines as fast as possible and make them as deadly as possible. The reasoning behind this course of action is that (1) other countries will build and use killer machines regardless of what the US does, and (2) trying to ban or control killer machines by international treaty would be very hard to monitor and enforce, so don't bother trying.

The WaPo article comments:
Picture a desert battlefield, scarred by years of warfare. A retreating army scrambles to escape as its enemy advances. [Over a desert battlefield] dozens of small drones, indistinguishable from the quadcopters used by hobbyists and filmmakers, come buzzing down from the sky, using cameras to scan the terrain and onboard computers to decide on their own what looks like a target. Suddenly they begin divebombing trucks and individual soldiers, exploding on contact.

[This is] a real scene that played out last spring as soldiers loyal to the Libyan strongman Khalifa Hifter retreated from the Turkish-backed forces of the United Nations-recognized Libyan government. According to a U.N. group of weapons and legal experts appointed to document the conflict, drones that can operate without human control “hunted down” Hifter’s soldiers as they fled.

Long the stuff of science fiction, autonomous weapons systems, known as “killer robots,” are poised to become a reality, thanks to the rapid development of artificial intelligence.

In response, international organizations have been intensifying calls for limits or even outright bans on their use. The U.N. General Assembly in November adopted the first-ever resolution on these weapons systems, which can select and attack targets without human intervention.
What exactly are killer robots? To what extent are they a reality?

Killer robots, or autonomous weapons systems to use the more technical term, are systems that choose a target and fire on it based on sensor inputs rather than human inputs. They have been under development for a while but are rapidly becoming a reality. We are increasingly concerned about them because weapons systems with significant autonomy over the use of force are already being used on the battlefield.
What are the ethical concerns posed by killer robots?

The ethical concerns are very serious. Delegating life-and-death decisions to machines crosses a red line for many people. It would dehumanize violence and boil down humans to numerical values.
A July 2023 article published by MIT News focuses on efforts to "democratize" access to machine learning by vastly reducing the time cost to set up and operate AI software focused on solving specific problems: 
“It would take many weeks of effort to figure out the appropriate model for our dataset, and this is a really prohibitive step for a lot of folks that want to use machine learning or biology,” says Jacqueline Valeri, a fifth-year PhD student of biological engineering in Collins’s lab who is first co-author of the paper.

BioAutoMATED is an automated machine-learning system that can select and build an appropriate model for a given dataset and even take care of the laborious task of data preprocessing, whittling down a months-long process to just a few hours. Automated machine-learning (AutoML) systems are still in a relatively nascent stage of development, with current usage primarily focused on image and text recognition, but largely unused in subfields of biology, points out first co-author and Jameel Clinic postdoc Luis Soenksen PhD '20.  
This work was supported, in part, by a Defense Threat Reduction Agency grant, the Defense Advance Research Projects Agency SD2 program, ....
The open question here is whether this kind of AutoML can be applied to the AI paired with killer war machines. In this case AI was applied to biology, not warfare. But even if it cannot, it is obvious that the US and other global militaries are willing to spend vast amounts of money on automating war and human slaughter. That is going to happen whether the dangers are carefully considered or not.
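To make the AutoML idea concrete, here is a toy sketch of the model-selection loop such systems automate. It is entirely invented for illustration: the candidate "models" and dataset below are trivial stand-ins, not BioAutoMATED's actual models or code.

```python
# Toy sketch of the model-selection step an AutoML system automates:
# try several candidate models on a dataset, score each one, keep the
# best. Models and data here are invented stand-ins for illustration.
import random

random.seed(0)

# Toy dataset: label is 1 when the mean of the features is positive.
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]
y = [1 if sum(row) / len(row) > 0 else 0 for row in X]

# Candidate "models": trivial classifiers standing in for real ones.
candidates = {
    "mean_sign": lambda row: 1 if sum(row) / len(row) > 0 else 0,
    "first_feature_sign": lambda row: 1 if row[0] > 0 else 0,
    "always_one": lambda row: 1,
}

def accuracy(model):
    """Fraction of examples the model labels correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(X)

# The selection loop an AutoML tool automates: score every candidate,
# keep the best -- the step that otherwise takes "many weeks of effort."
scores = {name: accuracy(model) for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # -> mean_sign (it matches the labeling rule exactly)
```

Real AutoML systems do the same thing at scale, also searching over hyperparameters and preprocessing steps, which is why they can compress a months-long manual process into hours.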

Wikipedia on failure to regulate killer machines (LAWS or lethal autonomous weapons systems):
The Campaign to Stop Killer Robots is a coalition of non-governmental organizations who seek to pre-emptively ban lethal autonomous weapons.

First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to issue policy to outlaw the development of lethal autonomous weapons systems, also known as LAWS. Several countries including Israel, Russia, South Korea, the United States, and the United Kingdom oppose the call for a preemptive ban, and believe that existing international humanitarian law is sufficient regulation for this area.

Some photos of existing LAWS that operate on land or in the air:

US Army training with a LAWS


A long, detailed 2017 article published by the US Army Press considers the moral implications of LAWS:
Pros and Cons of Autonomous Weapons Systems

Arguments in Support of Autonomous Weapons Systems

Support for autonomous weapons systems falls into two general categories. Some members of the defense community advocate autonomous weapons because of military advantages. Other supporters emphasize moral justifications for using them.

Military advantages. Those who call for further development and deployment of autonomous weapons systems generally point to several military advantages. First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. ....

Arguments Opposed to Autonomous Weapons Systems

While some support autonomous weapons systems with moral arguments, others base their opposition on moral grounds. Still others assert that moral arguments against autonomous weapons systems are misguided.

Opposition on moral grounds. In July 2015, an open letter calling for a ban on autonomous weapons was released at an international joint conference on artificial intelligence. The letter warns, “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

We note in passing that it is often unclear whether a weapon is offensive or defensive. Thus, many assume that an effective missile defense shield is strictly defensive, but it can be extremely destabilizing if it allows one nation to launch a nuclear strike against another without fear of retaliation.
It seems that the US military has done a lot of thinking about automated warfare. However, the US government and the public seem to have limited understanding or influence. The process of automating war is well underway in the military. Arguably, the federal government has reverted to its normal mode of inaction because it is busy with whatever else it is doing. What is government doing? Apparently, mostly continuing to blindly fund the US military, including funding for developing automated war machines, blithering and wasting time as far as I can tell.

Monday, April 8, 2024

Flying Ginsu knives: How the Gaza aid workers got killed

Lucian at the Lucian Truscott Newsletter reports:

Top-secret U.S. “Flying Ginsu” missile likely used in strike on 
World Central Kitchen vehicle in Gaza

This photograph was taken in 2017 when a U.S. drone strike killed al Qaeda deputy leader Abu Khayr al-Masri riding in a car in Syria.

This photograph was taken [on April 2, 2024] in Gaza when an Israeli missile killed seven World Central Kitchen workers riding in a small convoy in Gaza.
Notice that the roofs of both vehicles have nearly identical holes in nearly the same location and that there is no other apparent damage to the vehicle. The windshield of the World Central Kitchen vehicle isn’t damaged. The windshield of the al Qaeda vehicle in Syria is cracked, but both vehicles still have their windshield wipers intact. The doors of both vehicles can still be opened.
Notice there is very little damage to the doors of either vehicle, and yet everyone riding in both of them was killed immediately.

There is a very strong likelihood that the same sort of missile was used in both attacks. It is a modified version of the American Hellfire missile called the R9X. It does not carry an explosive warhead, but rather uses its 100 pound weight and the speed at which it is traveling to penetrate its target. Then it comes apart, deploying six steel blades that whirl around destroying and killing anything in their path. The nature of the blades allows them to penetrate soft material like cloth and flesh, while leaving the hard exterior of the vehicles nearly unscathed.
Israeli Prime Minister Benjamin Netanyahu called the strike in Gaza “unintentional” and said the incident will be “thoroughly investigated.” 

The U.S. State Department approved the transfer of more than two thousand bombs to Israel on the day the aid workers were killed in Gaza. A report in the Washington Post said that the weapons included “over 1,000 small diameter bombs.” The modified Hellfire known as the R9X is a little over five feet long and has a diameter of just seven inches, which could put it in the category of the “small diameter bombs” that are approved for transfer to Israel.
Humans are really good at finding interesting ways to do things like killing humans, making species go extinct, wrecking the environment and subverting democracies and replacing them with kleptocratic dictatorships, theocracies and/or plutocracies.

How AI elites spin the promise of AI and how the military can use it

The first video, by Jon Stewart, pieces together the reasoning and arguments that AI (artificial intelligence software) will be a good thing and there is nothing to worry about. The AI segment starts at about 3:15 into the video. The comedy in it is great. The second video is a news report about how the Israeli military appears to be using an AI program named Lavender to target and kill Hamas fighters along with their families. There is nothing funny in the second video, which was brought to my attention by PD in his post from yesterday.

Together, these two videos give us a feel for how AI is going to be employed, and for how its dark effects and dark uses will be propagandized and/or hidden to the extent that people in power can spin and hide what is going on. The Israeli government is likely going to deny either that Lavender exists or that it is used indiscriminately.

Why post these two videos together?
Because this is important information. People really need to know at least something about how AI is going to be used, whether we like it or not. And, enquiring minds just want to know. 

It is probable that our captured and broken federal government is incapable of dealing with AI responsibly. We will very likely (~97% chance in the next two years?) be mostly left to (1) the whims of people like the cynical and transparently mendacious AI billionaires that Stewart interviewed, and (2) the brutality of authoritarian governments like Israel. 

The WaPo reported about this a couple of days ago:
It’s hard to concoct a more airy sobriquet than this one. A new report published by +972 magazine and Local Call indicates that Israel has allegedly used an AI-powered database to select suspected Hamas and other militant targets in the besieged Gaza Strip. According to the report, the tool, trained by Israeli military data scientists, sifted through a huge trove of surveillance data and other information to generate targets for assassination. It may have played a major role particularly in the early stages of the current war, as Israel conducted relentless waves of airstrikes on the territory, flattening homes and whole neighborhoods. At present count, according to the Gaza Health Ministry, more than 33,000 Palestinians, the majority being women and children, have been killed in the territory.

The AI tool’s name? “Lavender.”   
“During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based,” Abraham wrote.
The use of AI technology is still only a small part of what has troubled human rights activists about Israel’s conduct in Gaza. But it points to a darker future. Lavender, observed Adil Haque, an expert on international law at Rutgers University, is “the nightmare of every international humanitarian lawyer come to life.”

Color coded targets that AI can choose to obliterate

Sunday, April 7, 2024

Israel used military AI program to select targets in Gaza, according to new exposé

 (WaPo: 4/7/24)

by Ishaan Tharoor

This week, Israeli journalist and filmmaker Yuval Abraham published a lengthy exposé on the existence of the Lavender program and its implementation in the Israeli campaign in Gaza that followed Hamas’s deadly Oct. 7 terrorist strike on southern Israel. Abraham’s reporting — which appeared in +972 magazine, a left-leaning Israeli English-language website, and Local Call, its sister Hebrew-language publication — drew on the testimony of six anonymous Israeli intelligence officers, all of whom served during the war and had “first-hand involvement” with the use of AI to select targets for elimination. According to Abraham, Lavender identified as many as 37,000 Palestinians — and their homes — for assassination. (The IDF denied to the reporter that such a “kill list” exists, and characterized the program as merely a database meant for cross-referencing intelligence sources.) White House national security spokesperson John Kirby told CNN on Thursday that the United States was looking into the media reports on the apparent AI tool.

“During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based,” Abraham wrote.

“One source stated that human personnel often served only as a ‘rubber stamp’ for the machine’s decisions, adding that, normally, they would personally devote only about ‘20 seconds’ to each target before authorizing a bombing — just to make sure the Lavender-marked target is male,” he added. “This was despite knowing that the system makes what are regarded as ‘errors’ in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.”
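Taking the quoted figures at face value, the scale they imply is easy to check with back-of-the-envelope arithmetic (the inputs below are only the numbers quoted in the reporting; the calculation itself is mine):

```python
# Back-of-the-envelope arithmetic using only figures quoted in the
# reporting: ~37,000 people marked by Lavender, a ~10 percent
# identification error rate, and ~20 seconds of human review per target.
marked = 37_000
error_rate = 0.10
seconds_per_review = 20

# Roughly how many people the system's own error rate implies
# were misidentified.
misidentified = int(marked * error_rate)
print(misidentified)  # -> 3700

# Total human review time implied by 20 seconds per target, in hours.
review_hours = marked * seconds_per_review / 3600
print(round(review_hours, 1))  # -> 205.6
```

In other words, the quoted error rate alone implies thousands of misidentified people, and the quoted review pace implies that the entire human check on 37,000 targets amounted to roughly two hundred hours in total.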

This may help explain the scale of destruction unleashed by Israel across Gaza as it seeks to punish Hamas, as well as the high casualty count. Earlier rounds of Israel-Hamas conflict saw the Israel Defense Forces go about a more protracted, human-driven process of selecting targets based on intelligence and other data. At a moment of profound Israeli anger and trauma in the wake of Hamas’s Oct. 7 attack, Lavender could have helped Israeli commanders come up with a rapid, sweeping program of retribution.

“We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer, in testimony published by Britain’s Guardian newspaper, which obtained access to the accounts first surfaced by +972.

Many of the munitions Israel dropped on targets allegedly selected by Lavender were “dumb” bombs — heavy, unguided weapons that inflicted significant damage and loss of civilian life. According to Abraham’s reporting, Israeli officials didn’t want to “waste” more expensive precision-guided munitions on the many junior-level Hamas “operatives” identified by the program. And they also showed little squeamishness about dropping those bombs on the buildings where the targets’ families slept, he wrote.

“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A, an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

Widespread concerns about Israel’s targeting strategies and methods have been voiced throughout the course of the war. “It is challenging in the best of circumstances to differentiate between valid military targets and civilians” there, Brian Castner, senior crisis adviser and weapons investigator at Amnesty International, told my colleagues in December. “And so just under basic rules of discretion, the Israeli military should be using the most precise weapons that it has available and be using the smallest weapon appropriate for the target.”

In reaction to the Lavender revelations, the IDF said in a statement that some of Abraham’s reporting was “baseless” and disputed the characterization of the AI program. It is “not a system, but simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations,” the IDF wrote in a response published in the Guardian.

“The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it added. “Information systems are merely tools for analysts in the target identification process.”

This week’s incident involving an Israeli drone strike on a convoy of vehicles belonging to World Central Kitchen, a prominent food aid group, killing seven of its workers, sharpened the spotlight on Israel’s conduct of the war. In a phone call with Israeli Prime Minister Benjamin Netanyahu on Thursday, President Biden reportedly called on Israel to change course and take demonstrable steps to better preserve civilian life and enable the flow of aid.

Separately, hundreds of prominent British lawyers and judges submitted a letter to their government, urging a suspension of arms sales to Israel to avert “complicity in grave breaches of international law.”

The use of AI technology is still only a small part of what has troubled human rights activists about Israel’s conduct in Gaza. But it points to a darker future. Lavender, observed Adil Haque, an expert on international law at Rutgers University, is “the nightmare of every international humanitarian lawyer come to life.”

 ______________________________________________________________________

Yuval Abraham wrote the following note on +972 Magazine, in which he summarizes the impact of his report as of 4/5/24, with lots of links for those who want to pursue the unfolding story in greater depth, tracking the responses of governments, journalists, NGOs, etc. I am pasting it below.

I broke a major story two days ago. Here’s what it has done so far

This week, we at +972 Magazine and Local Call published a huge story that I’ve been working on for a long time. Our new investigation reveals that the Israeli army has developed an artificial intelligence-based program called “Lavender,” which has been used to mark tens of thousands of Palestinians as suspected militants for potential assassination during the current Gaza war.

According to six whistleblowers interviewed for the article, Lavender has played a central role in the Israeli army’s unprecedented bombing of Palestinians since October 7, especially during the first weeks of the war. In fact, according to the sources, the army gave sweeping approval for soldiers to adopt Lavender’s kill lists with little oversight, and to treat the outputs of the AI machine “as if it were a human decision.” While the machine was designed to mark “low level” military operatives, it was known to make what were considered identification “errors” in roughly 10 percent of cases. This was accompanied by a systematic preference to strike Lavender-marked targets while they were in their family homes, along with an extremely permissive policy toward casualties, which led to the killings of entire Palestinian families.

At the political level, meanwhile, White House National Security spokesperson John Kirby noted that the United States was examining the contents of our investigation. Palestinian parliamentarian Aida Touma-Suleiman cited sections of our report in a speech at the Knesset. UN Secretary General António Guterres expressed that he was “deeply troubled” by our findings, adding “No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms.”

It has been meaningful to see so many readers praising our investigation as one of the most important works of journalism in the war. And we have much more we want to do.

I personally hope that this exposé will help bring us a step closer toward ending this terrible war and confronting the violent systems that enable injustice here in Israel-Palestine. I’m grateful to you for reading our investigation, and for supporting the work that journalists like myself are doing at +972 Magazine.

_____________________________________________

Here is a link to a video from Democracy Now that covers the story in some depth.

Brass knuckles capitalism at work in America's overpriced health care system

“Social responsibility is a fundamentally subversive doctrine in a free society, and have said that in such a society, there is one and only one social responsibility of business–to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” ― Nobel Prize laureate Milton Friedman, The Ethics of Competition and Other Essays, 1969

When the rules of the capitalism game exclude only unfree competition, deception and fraud, and businesses buy the legal definitions of unfree competition, deception and fraud from our corrupted pay-to-play system of politics and government, one should expect little to no concern for social responsibility from businesses. What little concern there may be amounts to a public relations propaganda problem. Such problems are almost always addressed by (1) deceiving, distracting and/or confusing the public as much as possible, and/or (2) buying more effective social responsibility-subverting laws from government. ― Blogger Germaine, Dissident Politics blog post, 2024  

The NYT reports (not paywalled for 30 days) about a quiet, obscure little data analytics company, MultiPlan. MultiPlan is shifting unknown billions of dollars in health care costs from insurers to consumers. The lead example is a woman who used an out-of-network doctor to treat a serious infection. Insurance paid the doctor $5,449.27, leaving the woman billed for more than $100,000. That was based on a “fair and independent analysis,” as MultiPlan and the insurance companies define the concepts of fair, independent and analysis. That amounts to a fig leaf over a very nasty thing to look at.
Insurers Reap Hidden Fees by Slashing Payments. 
You May Get the Bill.

A little-known data firm helps health insurers make more when less of an out-of-network claim gets paid. Patients can be on the hook for the difference.

The answer is a little-known data analytics firm called MultiPlan. It works with UnitedHealthcare, Cigna, Aetna and other big insurers to decide how much so-called out-of-network medical providers should be paid. It promises to help contain medical costs using fair and independent analysis.

But a New York Times investigation, based on interviews and confidential documents, shows that MultiPlan and the insurance companies have a large and mostly hidden financial incentive to cut those reimbursements as much as possible, even if it means saddling patients with large bills. The formula for MultiPlan and the insurance companies is simple: The smaller the reimbursement, the larger their fee.

But when employees see a provider outside the network, as Ms. Lawson did, many insurance companies consult with MultiPlan, which typically recommends that the employer pay less than the provider billed. The difference between the bill and the sum actually paid amounts to a savings for the employer. But, The Times found, it means big money for MultiPlan and the insurer, since both companies often charge the employer a percentage of the savings as a processing fee.

Can you see the incentive to shift
costs to the consumer?


Note the sentence in the NYT article: “It promises to help contain medical costs using fair and independent analysis.” Think about that for a moment. How does shifting already outrageously high medical costs from insurance companies and health care providers to powerless consumers contain medical costs? It doesn’t. Instead, it incentivizes increasing costs by shifting them to consumers who cannot do anything about it, because health care is usually a necessity, not an option. It is exactly like requiring all drivers to have car insurance. All the power is with the insurance companies, and that power incentivizes them to squeeze every possible penny out of every consumer.
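The incentive structure the Times describes, a processing fee charged as a percentage of the "savings," can be made concrete with a toy calculation. The 30 percent fee rate and the dollar figures below are invented for illustration; only the structure of the fee comes from the article.

```python
# Toy illustration of the fee structure the NYT describes: the
# processing fee is a percentage of the "savings" (the provider's bill
# minus what the insurer is advised to pay), so recommending a smaller
# payment earns a bigger fee. The 30% rate and amounts are invented.
def processing_fee(billed, recommended_payment, fee_pct=0.30):
    savings = billed - recommended_payment
    return savings * fee_pct

billed = 100_000
# The lower the recommended payment, the larger the fee:
print(round(processing_fee(billed, 50_000), 2))  # -> 15000.0
print(round(processing_fee(billed, 5_000), 2))   # -> 28500.0
```

Under this structure, cutting the recommended payment from $50,000 to $5,000 nearly doubles the fee, which is exactly the hidden incentive the investigation points to.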

The point of this post is to again point out that brass knuckles private sector capitalism, including health care, mostly operates with one and only one moral imperative: profit lust. There are some exceptions, but those tend to be smaller businesses. Incentives that increase profit are effective, regardless of harm or cost to consumers. The more risk and cost that gets shifted to consumers, the higher the profit. Nowhere in this does the strange concept of social responsibility or conscience come into play for brass knuckles capitalists.

With brass knuckles capitalism, the concept of social responsibility exists only as an evil thing to be minimized as much as possible. Only when government regulates for the benefit of consumers does social responsibility have some effect.

Saturday, April 6, 2024

Another Middle East war front opens?; The hopelessness of the Republican Party

Israel bombed part of the Iranian embassy in Damascus, Syria. The consular building, whatever that is, was leveled, but the main embassy remains intact. Experts expect that Iran will pick a place and time to retaliate in kind. That could lead to the US getting into some kind of proxy war, or worse, with Iran somewhere in the Middle East. The AP reported:
Israel has grown increasingly impatient with the daily exchanges of fire with Hezbollah, which have escalated in recent days, and warned of the possibility of a full-fledged war. Iranian-backed Houthi rebels in Yemen have also been launching long-range missiles toward Israel, including on Monday.

While Iran’s consular building was leveled in the attack, according to Syria’s state news agency, its main embassy building remained intact. Still, the Iranian ambassador’s residence was inside the consular building.

Iran’s ambassador, Hossein Akbari, vowed revenge for the strike “at the same magnitude and harshness.”

Hamas and Islamic Jihad — another Palestinian militant group backed by Iran — accused Israel of seeking to widen the conflict in Gaza.

Experts said there was no doubt that Iran would retaliate. The strike in Syria was a “major escalation,” Charles Lister, a Syria expert at the Middle East Institute in Washington, said on the social media platform X.
Another source commented (asking whether Israel's plan is to draw the US into a war with Iran): Iranian leaders will feel heavy pressure to respond forcefully. The extent of that pressure can be appreciated by imagining if the roles were reversed. If Iran had bombed an embassy of Israel or the United States, a violent and lethal response would be not just expected but demanded by politicians and publics alike. .... Speaking a day after the attack, Iranian Supreme Leader Ali Khamenei vowed revenge and said “Israel will be punished.” The Iranian representative at the United Nations Security Council asserted Iran’s right to a “decisive response to such reprehensible acts.”

Maybe it is time to move the nuclear Armageddon clock ahead a few seconds. The nuclear Armageddon experts opine:
It is still 90 seconds to midnight -- 2024 Doomsday Clock Statement 
The war in Ukraine and the widespread and growing reliance on nuclear weapons increase the risk of nuclear escalation. China, Russia, and the United States are all spending huge sums to expand or modernize their nuclear arsenals, adding to the ever-present danger of nuclear war through mistake or miscalculation. .... And the war in Gaza between Israel and Hamas has the potential to escalate into a wider Middle Eastern conflict that could pose unpredictable threats, regionally and globally.
If the US had forced Israel and the Palestinians into a peace agreement decades ago, we very likely would not be in this mess today. But here we are, like it or not.

Fire the Democratic and Republican Parties!
_________________________________________________________________
_________________________________________________________________

It is time to squarely face what has been clear for at least the last 7 years. Most of the GOP leadership is staunchly authoritarian, corrupt and completely morally rotted. (Most of the rank and file supports it all, so what does that make them? Patriots?) Nearly all of the pro-democracy, pro-facts Repub elites that were left in 2017 have either been RINO hunted out of power, radicalized into authoritarianism, or left the party.

A WaPo article (not paywalled off) about Russian influence on GOP elites exemplifies the hopelessness: 
When a top Republican says Russian propaganda has infected the GOP

House Foreign Affairs Committee Chairman Michael McCaul is the latest to point out such a problem in his party

[In 2019] former Trump national security aide Fiona Hill made an extraordinary plea. Seated in front of congressional Republicans, she implored them not to spread Russian propaganda.

“In the course of this investigation, I would ask that you please not promote politically driven falsehoods that so clearly advance Russian interests,” she told them. She was referring to comments they had made during her earlier deposition breathing life into a baseless, Trump-backed suggestion that Ukraine, rather than Russia, interfered in the 2016 U.S. election.

“These fictions are harmful even if they’re deployed for purely domestic political purposes,” she added.

Republicans on the [first Trump impeachment inquiry] committee blanched at the suggestion that they had served as conduits for Russian misinformation, but Hill refused to back down.

Five years later, Republicans are starting to grapple more publicly with the idea that this kind of thing is happening in their ranks.

The most striking example came this week. In an interview with Puck News’s Julia Ioffe, Rep. Michael McCaul (R-Tex.) — none other than the GOP chairman of the House Foreign Affairs Committee — flat-out said that Russian propaganda had “infected a good chunk of my party’s base.”

McCaul suggested conservative media was to blame.

“There are some more nighttime entertainment shows that seem to spin, like, I see the Russian propaganda in some of it — and it’s almost identical [to what they’re saying on Russian state television] — on our airwaves,” McCaul said.

He also cited “these people that read various conspiracy-theory outlets that are just not accurate, and they actually model Russian propaganda.”

Former congresswoman Liz Cheney (R-Wyo.) said there is now “a Putin wing of the Republican Party.”

In 2022, Sen. Mitt Romney (R-Utah) called the pro-Putin sentiments in some corners of his party “almost treasonous,” while allowing that perhaps his fellow Republicans were just attention-seekers.

Perhaps the most famous example came in 2016, when House GOP leaders privately joked about Trump being compromised by Russia, as later reported by The Washington Post.

The day after The Post broke the news that the Russians had hacked the Democratic National Committee, then-House Majority Leader Kevin McCarthy (R-Calif.) quipped that perhaps Russia had gotten Democrats’ opposition research about Trump.

“There’s two people, I think, Putin pays,” McCarthy added, “[Rep. Dana] Rohrabacher and Trump.” (Rohrabacher was an openly pro-Russian Republican from California.)

Then-House Speaker Paul D. Ryan (R-Wis.) quickly tried to steer the conversation in another direction and urged people to be discreet. [Yeah, God forbid House Republicans are honest with the American people, fools and dupes that they are -- Hey Speaker Ryan! Thank you for your service  /s]

Mitt exemplifies infantile, crackpot reasoning: “Almost treasonous?” Really? One can only wonder what Mitt considers to be actually treasonous. Elected politicians knowingly spouting Russian anti-American propaganda as mere attention-seeking is not treasonous? They have no moral responsibility to at least try to speak truth? That reasoning is roughly the same as saying that idiots who shoot off lots of fireworks in a tinder-dry forest on a very windy day are almost arsonists. The level of reasoning that most GOP elites usually bring to the table is grade-school level drivel.**

The dreadful reality: After all this time and all the lies, bile and hate that have come from GOP elites, it is not likely that they will back away from Russian lies and made-up conspiracy theories. They just have too much invested in staying the course, just as many or most stay the course on the stolen 2020 election lie. The odds of Republican elites changing course and being honest with the public in the next year seem low, maybe a 0.1% chance (1 in 1,000).

** Grade-school level drivel arguably is the norm, not the exception. From the book Democracy for Realists: Why Elections Do Not Produce Responsive Government:

“. . . . the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.”

GOP elites have that infantile analyzing and arguing thing down pat. Good job GOP elites, you cynical, lying, morally rotted pieces of authoritarian crap! You too Mitt.