
Tuesday, April 9, 2024

If you were asked to be on the jury at a Trump trial

 I don't need to rehash everything that has been said about how impossible it will be to find an impartial juror for any of Trump's trials. But since the NY hush money case is the first to be tried, and jury selection has begun, let's use that one as an example.

I also don't need to rehash all the worries that all it will take is one pro-Trumper answering all the questions put to him or her acceptably, only to turn out to be the one who hangs the jury.

Instead, I gave the following questions a lot of thought, prompted by a radio program hosted by Michael Smerconish that I was listening to this morning.

Let me ask the questions straight up without the political slant.

1. Do you have strong feelings one way or another about Trump's guilt or innocence on the charges against him, before hearing the evidence or taking part in the trial?

2. Do you have strong personal feelings about Trump, whether for or against, that would disqualify you from being an impartial juror?

3. Regardless of your personal feelings about the charges or about Trump, whether for or against, could you nevertheless render a verdict SOLELY based on the evidence presented at trial and the instructions given to you by the judge?

4. Are you impartial toward the charges and Trump himself? Are you neither for nor against the charges or Trump? Are you going into this trial with a totally impartial view, and thus untainted by personal feelings about the charges or about Trump?

Here is how I would have answered, to the best of my ability.

1. I think the hush money charges are rather weak, so I have a preconceived notion. But I am nevertheless willing to listen to the evidence.

2. I detest the man, but do not believe my detestation would disqualify me. I feel I could put my personal dislike of the man aside.

3. I believe I COULD render a verdict SOLELY based on the evidence and the judge's instructions, even if that verdict conflicts with my personal feelings. I believe I could follow the evidence and the law.

4. I am not impartial. I have strong feelings that may or may not guide how I react to the trial. So my answer here is NO, I am not impartial, but I hope I can perform my duty as a juror, as explained in my answer to #3.

I suspect those answers would get me disqualified. But if I were evaluating them, I would consider the person giving them to be at least honest, and I would be tempted to accept that person as a juror.

AND NO, I am not saying that to toot my own horn, but to raise the question: How would YOU answer those questions, and what answers would YOU find acceptable in a juror?


Global warming updates: Reconductoring the grid; Financial fizzle

A NYT article reports on the need to upgrade the electrical grid (not paywalled for 30 days):
A rarely used technique to upgrade old power lines could play a big role in fixing one of the largest obstacles facing clean energy, two reports found.

Replacing existing power lines with cables made from state-of-the-art materials could roughly double the capacity of the electric grid in many parts of the country, making room for much more wind and solar power.

This technique, known as “advanced reconductoring,” is widely used in other countries. But many U.S. utilities have been slow to embrace it because of their unfamiliarity with the technology as well as regulatory and bureaucratic hurdles, researchers found.

“We were pretty astonished by how big of an increase in capacity you can get by reconductoring,” said Amol Phadke, a senior scientist at the University of California, Berkeley, who contributed to one of the reports released Tuesday. Working with GridLab, a consulting firm, researchers from Berkeley looked at what would happen if advanced reconductoring were broadly adopted. 

Today, most power lines consist of steel cores surrounded by strands of aluminum, a design that’s been around for a century. In the 2000s, several companies developed cables that used smaller, lighter cores such as carbon fiber and that could hold more aluminum. These advanced cables can carry up to twice as much current as older models.


Experts broadly agree that the sluggish build-out of the electric grid is the Achilles’ heel of the transition to cleaner energy. The Energy Department estimates that the nation’s network of transmission lines may need to expand by two-thirds or more by 2035 to meet President Biden’s goals to power the country with clean energy.

But building transmission lines has become a brutal slog, and it can take a decade or more for developers to site a new line through multiple counties, receive permission from a patchwork of different agencies and address lawsuits about spoiled views or damage to ecosystems. Last year, the United States added just 251 miles of high-voltage transmission lines, a number that has been declining for a decade.

Countries like Belgium and the Netherlands have been widely deploying advanced conductors in order to integrate more wind and solar power, said Emilia Chojkiewicz, one of the authors of the Berkeley report.

“We talked with the transmission system planners over there and they all said this is a no-brainer,” Ms. Chojkiewicz said. “It’s often difficult to get new rights of way for lines, and reconductoring is much faster.”
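
The capacity claim is easy to sanity-check, because at a fixed voltage a line's power capacity scales linearly with the current its conductor can safely carry; doubling ampacity doubles capacity. Here is a minimal sketch of that arithmetic, where the voltage and ampacity figures are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope check on the "roughly double the capacity" claim.
# Voltage and ampacity values below are illustrative assumptions.
import math

def line_capacity_mw(voltage_kv, ampacity_a, power_factor=0.95):
    """Approximate three-phase line capacity: P = sqrt(3) * V_line * I * pf."""
    return math.sqrt(3) * (voltage_kv * 1e3) * ampacity_a * power_factor / 1e6

conventional = line_capacity_mw(230, 800)     # assumed steel-core (ACSR) rating
reconductored = line_capacity_mw(230, 1600)   # assumed carbon-fiber-core rating (~2x current)
print(f"Conventional: ~{conventional:.0f} MW")    # ~303 MW
print(f"Reconductored: ~{reconductored:.0f} MW")  # ~605 MW
```

The appeal is that nothing else about the line has to change: same towers, same right of way, a new conductor, and the thermal rating, and with it the capacity, roughly doubles.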

One can rationally consider this to be (1) another major failure of American governments to act, and (2) an example of how the private sector does things. We leave most everything to brass-knuckles capitalist markets running wild, free, butt naked and unaccountable. Meanwhile, neutered governments dither, blither and slither their way under rocks for protection from accountability. Everyone with power is asleep at the switch.


Another NYT article reports about how banks are not doing much to deal with global warming despite pledges to do something in 2021 (not paywalled):
Banks Made Big Climate Promises. A New Study Doubts They Work.

Using European Central Bank lending data, researchers said there was no evidence that voluntary commitments were effective in reducing emissions.

Hundreds of banks, insurers and asset managers vowed to plow $130 trillion in capital into reducing carbon emissions and financing the energy transition as they introduced the Glasgow Financial Alliance for Net Zero. But a recent study, published by the European Central Bank, disputed the effectiveness of those promises.

“Our results cast doubt on the efficacy of voluntary climate commitments for reducing financed emissions, whether through divestment or engagement,” wrote economists from the central bank, the Massachusetts Institute of Technology and Columbia Business School who analyzed lending by European banks that had signed on to the Net-Zero Banking Alliance, the banking group of the Glasgow initiative.

The researchers found that since 2018 the banks had reduced lending 20 percent to sectors they had targeted in their climate goals, such as oil and gas and transport. That seems like progress, but the researchers argued it was not sufficient because the decline was the same for banks that had not made the same commitment.

“It’s not OK for the net-zero bank to act exactly like the non-net-zero bank, because we need that to scale up financing,” said Parinitha Sastry, an assistant professor of finance at Columbia Business School and one of the paper’s authors. “We want there to be a behavioral change.”

Expectations for banks from policymakers and climate activists are high. Every year trillions of dollars need to be invested in clean energy if the world is to reach net-zero carbon emissions by 2050, according to the International Energy Agency. Most of that cost will need to be financed privately, and banks are the key facilitators in those deals.

Many banks clamored to make net-zero pledges around the summit in Glasgow, known as COP26. But as pressure builds to lower emissions, climate activists are concerned about waning commitments from banks because of mounting political pressure, demand for cheap energy and shifting geopolitical alliances.

GLS, a German bank, pulled out as a founding member of the Net-Zero Banking Alliance last year after a report by European nonprofit groups said the largest banks in the alliance had funneled $270 billion into fossil fuel expansions since they joined.

“What sense does it make to be in an alliance like that?” said Antje Tönnis, a spokeswoman for GLS. “Plus, it is a fair bit of work. Reporting is involved but doesn’t have any consequences.”

Climate science deniers deny the science

Regarding war machines without humans

In the last couple of days, PD has posted links to info about automated war machines and the use of artificial intelligence as a military tool for selecting targets to be destroyed. Here is some of what those links lead to.

This 2021 WaPo article (not behind a paywall) has a great 4:22 video about US thinking behind developing machines that can seek out human or non-human targets and destroy them. The thinking by US experts appears to be to develop these machines as fast as possible and make them as deadly as possible. The reasoning behind this course of action is that (1) other countries will build and use killer machines regardless of what the US does, and (2) trying to ban or control killer machines by international treaty would be very hard to monitor and enforce, so don't bother trying.

The WaPo article comments:
Picture a desert battlefield, scarred by years of warfare. A retreating army scrambles to escape as its enemy advances. [Over a desert battlefield] dozens of small drones, indistinguishable from the quadcopters used by hobbyists and filmmakers, come buzzing down from the sky, using cameras to scan the terrain and onboard computers to decide on their own what looks like a target. Suddenly they begin divebombing trucks and individual soldiers, exploding on contact.

[This is] a real scene that played out last spring as soldiers loyal to the Libyan strongman Khalifa Hifter retreated from the Turkish-backed forces of the United Nations-recognized Libyan government. According to a U.N. group of weapons and legal experts appointed to document the conflict, drones that can operate without human control “hunted down” Hifter’s soldiers as they fled.

Long the stuff of science fiction, autonomous weapons systems, known as “killer robots,” are poised to become a reality, thanks to the rapid development of artificial intelligence.

In response, international organizations have been intensifying calls for limits or even outright bans on their use. The U.N. General Assembly in November adopted the first-ever resolution on these weapons systems, which can select and attack targets without human intervention.
What exactly are killer robots? To what extent are they a reality?

Killer robots, or autonomous weapons systems to use the more technical term, are systems that choose a target and fire on it based on sensor inputs rather than human inputs. They have been under development for a while but are rapidly becoming a reality. We are increasingly concerned about them because weapons systems with significant autonomy over the use of force are already being used on the battlefield.
What are the ethical concerns posed by killer robots?

The ethical concerns are very serious. Delegating life-and-death decisions to machines crosses a red line for many people. It would dehumanize violence and boil down humans to numerical values.

A July 2023 article published by MIT News focuses on efforts to "democratize" access to machine learning by vastly reducing the time and cost of setting up and operating AI software focused on solving specific problems:
“It would take many weeks of effort to figure out the appropriate model for our dataset, and this is a really prohibitive step for a lot of folks that want to use machine learning or biology,” says Jacqueline Valeri, a fifth-year PhD student of biological engineering in Collins’s lab who is first co-author of the paper.

BioAutoMATED is an automated machine-learning system that can select and build an appropriate model for a given dataset and even take care of the laborious task of data preprocessing, whittling down a months-long process to just a few hours. Automated machine-learning (AutoML) systems are still in a relatively nascent stage of development, with current usage primarily focused on image and text recognition, but largely unused in subfields of biology, points out first co-author and Jameel Clinic postdoc Luis Soenksen PhD '20.  
This work was supported, in part, by a Defense Threat Reduction Agency grant, the Defense Advanced Research Projects Agency SD2 program, ....
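
For readers unfamiliar with the AutoML idea described above: the core loop is simply trying several candidate models on a dataset automatically and keeping the best performer. Here is a minimal sketch of that idea; to be clear, this is not BioAutoMATED's code, and it assumes scikit-learn and synthetic data purely for illustration.

```python
# Toy AutoML loop: automatically try candidate models on a dataset and
# keep the best one. Illustrative only; NOT BioAutoMATED's actual code.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

# Score each candidate with 5-fold cross-validation and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best model: {best} (cv accuracy {scores[best]:.3f})")
```

Real AutoML systems add automated preprocessing and far larger model search spaces, but the selection loop above is the essence of what gets "democratized."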

The open question here is whether this kind of automation can be applied to AI mated with killer war machines. In this case, AI was applied to biology, not warfare. But even if it cannot, it is obvious that the US and other militaries are willing to spend vast amounts of money on automating war and human slaughter. That is going to happen whether the dangers are carefully considered or not.

Wikipedia on failure to regulate killer machines (LAWS or lethal autonomous weapons systems):
The Campaign to Stop Killer Robots is a coalition of non-governmental organizations who seek to pre-emptively ban lethal autonomous weapons.

First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to issue policy to outlaw the development of lethal autonomous weapons systems, also known as LAWS. Several countries, including Israel, Russia, South Korea, the United States, and the United Kingdom, oppose the call for a preemptive ban, believing that existing international humanitarian law is sufficient regulation for this area.

Some photos of existing LAWS that operate on land or in the air:

[Photos omitted; caption: "US Army training with a LAWS"]


A long, detailed 2017 article published by the US Army Press considers the moral implications of LAWS:
Pros and Cons of Autonomous Weapons Systems

Arguments in Support of Autonomous Weapons Systems

Support for autonomous weapons systems falls into two general categories. Some members of the defense community advocate autonomous weapons because of military advantages. Other supporters emphasize moral justifications for using them.

Military advantages. Those who call for further development and deployment of autonomous weapons systems generally point to several military advantages. First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. ....

Arguments Opposed to Autonomous Weapons Systems

While some support autonomous weapons systems with moral arguments, others base their opposition on moral grounds. Still others assert that moral arguments against autonomous weapons systems are misguided.

Opposition on moral grounds. In July 2015, an open letter calling for a ban on autonomous weapons was released at an international joint conference on artificial intelligence. The letter warns, “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

We note in passing that it is often unclear whether a weapon is offensive or defensive. Thus, many assume that an effective missile defense shield is strictly defensive, but it can be extremely destabilizing if it allows one nation to launch a nuclear strike against another without fear of retaliation.
It seems that the US military has done a lot of thinking about automated warfare. However, the US government and the public seem to have limited understanding or influence. The process of automating war is well underway in the military. Arguably, the federal government has gone into its normal mode of inaction because it is busy with whatever else it is doing. What is government doing? Apparently, mostly continuing to blindly fund the US military, including funding for the development of automated war machines, while blithering and wasting time, as far as I can tell.

Monday, April 8, 2024

Flying Ginsu knives: How the Gaza aid workers got killed

Lucian at the Lucian Truscott Newsletter reports:

Top-secret U.S. “Flying Ginsu” missile likely used in strike on World Central Kitchen vehicle in Gaza

[Photo omitted] This photograph was taken in 2017 when a U.S. drone strike killed al Qaeda deputy leader Abu Khayr al-Masri riding in a car in Syria.

[Photo omitted] This photograph was taken [on April 2, 2024] in Gaza when an Israeli missile killed seven World Central Kitchen workers riding in a small convoy in Gaza.

Notice that the roofs of both vehicles have nearly identical holes in nearly the same location, and that there is no other apparent damage to the vehicles. The windshield of the World Central Kitchen vehicle isn’t damaged. The windshield of the al Qaeda vehicle in Syria is cracked, but both vehicles still have their windshield wipers intact. The doors of both vehicles can still be opened. Notice there is very little damage to the doors of either vehicle, and yet everyone riding in both of them was killed immediately.

There is a very strong likelihood that the same sort of missile was used in both attacks. It is a modified version of the American Hellfire missile called the R9X. It does not carry an explosive warhead, but rather uses its 100-pound weight and the speed at which it is traveling to penetrate its target. Then it comes apart, deploying six steel blades that whirl around, destroying and killing anything in their path. The nature of the blades allows them to penetrate soft material like cloth and flesh, while leaving the hard exterior of the vehicles nearly unscathed.
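
For a sense of why no warhead is needed: kinetic energy alone does a lot of work here. A quick back-of-envelope calculation, where the 100-pound weight comes from the report above but the impact speed is an assumed illustrative value:

```python
# Rough kinetic-energy estimate for an inert ~100 lb missile. The impact
# speed below is an assumption for illustration; the report gives only
# the weapon's weight, not its terminal velocity.
mass_kg = 100 * 0.4536       # ~100 lb converted to kilograms
speed_m_s = 450.0            # assumed near-sonic impact speed
energy_mj = 0.5 * mass_kg * speed_m_s**2 / 1e6
print(f"~{energy_mj:.1f} MJ of impact energy")  # ~4.6 MJ
```

At those assumed numbers, the impact alone delivers energy on the order of a kilogram of TNT, concentrated at a single point, which fits the neat roof holes and otherwise intact vehicles.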

Israeli Prime Minister Benjamin Netanyahu called the strike in Gaza “unintentional” and said the incident will be “thoroughly investigated.”

The U.S. State Department approved the transfer of more than two thousand bombs to Israel on the day the aid workers were killed in Gaza. A report in the Washington Post said that the weapons included “over 1,000 small diameter bombs.” The modified Hellfire known as the R9X is a little over five feet long and has a diameter of just seven inches, which could put it in the category of the “small diameter bombs” that are approved for transfer to Israel.
Humans are really good at finding interesting ways to do things like killing humans, making species go extinct, wrecking the environment and subverting democracies and replacing them with kleptocratic dictatorships, theocracies and/or plutocracies.

How AI elites spin the promise of AI and how the military can use it

The first video, by Jon Stewart, pieces together the reasoning and arguments that AI (artificial intelligence software) will be a good thing and there is nothing to worry about. The AI segment starts at about 3:15 into the video. The comedy in it is great. The second video is a news report about how the Israeli military appears to be using an AI program named Lavender to target and kill Hamas fighters along with their families. There is nothing funny in the second video. That video was brought to my attention by PD in his post from yesterday.

[Videos embedded in the original post]

Together, these two videos give us a feel for how AI is going to be employed, and how the dark effects and dark uses of AI will be propagandized and/or hidden to the extent that people in power can spin and hide what is going on. The Israeli government will likely deny either that Lavender exists or that it is used indiscriminately.

Why post these two videos together?
Because this is important information. People really need to know at least something about how AI is going to be used, whether we like it or not. And, enquiring minds just want to know. 

It is probable that our captured and broken federal government is incapable of dealing with AI responsibly. We will very likely (~97% chance in the next two years?) be mostly left to (1) the whims of people like the cynical and transparently mendacious AI billionaires that Stewart interviewed, and (2) the brutality of authoritarian governments like Israel. 

The WaPo reported about this a couple of days ago:
It’s hard to concoct a more airy sobriquet than this one. A new report published by +972 magazine and Local Call indicates that Israel has allegedly used an AI-powered database to select suspected Hamas and other militant targets in the besieged Gaza Strip. According to the report, the tool, trained by Israeli military data scientists, sifted through a huge trove of surveillance data and other information to generate targets for assassination. It may have played a major role particularly in the early stages of the current war, as Israel conducted relentless waves of airstrikes on the territory, flattening homes and whole neighborhoods. At present count, according to the Gaza Health Ministry, more than 33,000 Palestinians, the majority being women and children, have been killed in the territory.

The AI tool’s name? “Lavender.”

“During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based,” Abraham wrote.
The use of AI technology is still only a small part of what has troubled human rights activists about Israel’s conduct in Gaza. But it points to a darker future. Lavender, observed Adil Haque, an expert on international law at Rutgers University, is “the nightmare of every international humanitarian lawyer come to life.”

[Image omitted; caption: “Color-coded targets that AI can choose to obliterate”]

Sunday, April 7, 2024

Israel used military AI program to select targets in Gaza, according to new exposé

(WaPo, 4/7/24)

by Ishaan Tharoor

This week, Israeli journalist and filmmaker Yuval Abraham published a lengthy exposé on the existence of the Lavender program and its implementation in the Israeli campaign in Gaza that followed Hamas’s deadly Oct. 7 terrorist strike on southern Israel. Abraham’s reporting — which appeared in +972 magazine, a left-leaning Israeli English-language website, and Local Call, its sister Hebrew-language publication — drew on the testimony of six anonymous Israeli intelligence officers, all of whom served during the war and had “first-hand involvement” with the use of AI to select targets for elimination. According to Abraham, Lavender identified as many as 37,000 Palestinians — and their homes — for assassination. (The IDF denied to the reporter that such a “kill list” exists, and characterized the program as merely a database meant for cross-referencing intelligence sources.) White House national security spokesperson John Kirby told CNN on Thursday that the United States was looking into the media reports on the apparent AI tool.

“During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based,” Abraham wrote.

“One source stated that human personnel often served only as a ‘rubber stamp’ for the machine’s decisions, adding that, normally, they would personally devote only about ‘20 seconds’ to each target before authorizing a bombing — just to make sure the Lavender-marked target is male,” he added. “This was despite knowing that the system makes what are regarded as ‘errors’ in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.”
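
It is worth pausing on what those reported figures imply if taken at face value. A quick back-of-envelope calculation, using only the numbers quoted above:

```python
# Scale implied by the reported figures, taken at face value. All three
# inputs come from the reporting quoted above; nothing here is independent.
targets = 37_000      # people reportedly marked by Lavender
error_rate = 0.10     # reported approximate error rate
review_sec = 20       # reported typical human review time per target

print(f"~{targets * error_rate:,.0f} potentially misidentified people")    # ~3,700
print(f"~{targets * review_sec / 3600:,.0f} person-hours of review total") # ~206
```

If the reporting is accurate, that is roughly 3,700 people marked in error, with about 206 person-hours of human review spread across 37,000 life-and-death decisions.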

This may help explain the scale of destruction unleashed by Israel across Gaza as it seeks to punish Hamas, as well as the high casualty count. Earlier rounds of Israel-Hamas conflict saw the Israel Defense Forces go about a more protracted, human-driven process of selecting targets based on intelligence and other data. At a moment of profound Israeli anger and trauma in the wake of Hamas’s Oct. 7 attack, Lavender could have helped Israeli commanders come up with a rapid, sweeping program of retribution.

“We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer, in testimony published by Britain’s Guardian newspaper, which obtained access to the accounts first surfaced by +972.

Many of the munitions Israel dropped on targets allegedly selected by Lavender were “dumb” bombs — heavy, unguided weapons that inflicted significant damage and loss of civilian life. According to Abraham’s reporting, Israeli officials didn’t want to “waste” more expensive precision-guided munitions on the many junior-level Hamas “operatives” identified by the program. And they also showed little squeamishness about dropping those bombs on the buildings where the targets’ families slept, he wrote.

“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A, an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

Widespread concerns about Israel’s targeting strategies and methods have been voiced throughout the course of the war. “It is challenging in the best of circumstances to differentiate between valid military targets and civilians” there, Brian Castner, senior crisis adviser and weapons investigator at Amnesty International, told my colleagues in December. “And so just under basic rules of discretion, the Israeli military should be using the most precise weapons that it can that it has available and be using the smallest weapon appropriate for the target.”

In reaction to the Lavender revelations, the IDF said in a statement that some of Abraham’s reporting was “baseless” and disputed the characterization of the AI program. It is “not a system, but simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations,” the IDF wrote in a response published in the Guardian.

“The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it added. “Information systems are merely tools for analysts in the target identification process.”

This week’s incident involving an Israeli drone strike on a convoy of vehicles belonging to World Central Kitchen, a prominent food aid group, killing seven of its workers, sharpened the spotlight on Israel’s conduct of the war. In a phone call with Israeli Prime Minister Benjamin Netanyahu on Thursday, President Biden reportedly called on Israel to change course and take demonstrable steps to better preserve civilian life and enable the flow of aid.

Separately, hundreds of prominent British lawyers and judges submitted a letter to their government, urging a suspension of arms sales to Israel to avert “complicity in grave breaches of international law.”

The use of AI technology is still only a small part of what has troubled human rights activists about Israel’s conduct in Gaza. But it points to a darker future. Lavender, observed Adil Haque, an expert on international law at Rutgers University, is “the nightmare of every international humanitarian lawyer come to life.”

 ______________________________________________________________________

Yuval Abraham wrote the following note on +972 Magazine, in which he summarizes the impact of his report as of 4/5/24, with lots of links for those who want to pursue the unfolding story in greater depth, tracking the responses of governments, journalists, NGOs, etc. I am pasting it below.

I broke a major story two days ago. Here’s what it has done so far

This week, we at +972 Magazine and Local Call published a huge story that I’ve been working on for a long time. Our new investigation reveals that the Israeli army has developed an artificial intelligence-based program called “Lavender,” which has been used to mark tens of thousands of Palestinians as suspected militants for potential assassination during the current Gaza war.

According to six whistleblowers interviewed for the article, Lavender has played a central role in the Israeli army’s unprecedented bombing of Palestinians since October 7, especially during the first weeks of the war. In fact, according to the sources, the army gave sweeping approval for soldiers to adopt Lavender’s kill lists with little oversight, and to treat the outputs of the AI machine “as if it were a human decision.” While the machine was designed to mark “low level” military operatives, it was known to make what were considered identification “errors” in roughly 10 percent of cases. This was accompanied by a systematic preference to strike Lavender-marked targets while they were in their family homes, along with an extremely permissive policy toward casualties, which led to the killings of entire Palestinian families.

At the political level, meanwhile, White House National Security spokesperson John Kirby noted that the United States was examining the contents of our investigation. Palestinian parliamentarian Aida Touma-Suleiman cited sections of our report in a speech at the Knesset. UN Secretary General António Guterres expressed that he was “deeply troubled” by our findings, adding “No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms.”

It has been meaningful to see so many readers praising our investigation as one of the most important works of journalism in the war. And we have much more we want to do.

I personally hope that this exposé will help bring us a step closer toward ending this terrible war and confronting the violent systems that enable injustice here in Israel-Palestine. I’m grateful to you for reading our investigation, and for supporting the work that journalists like myself are doing at +972 Magazine.

 

_____________________________________________

Here is a link to a video from Democracy Now that covers the story in some depth.