DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide sources for the facts and truths you rely on if you are asked for them. If emotion is getting out of hand, get it back in hand. To avoid dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion; insults make people angry and defensive. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Thursday, May 18, 2023

Religious accommodations in the workplace; Trying to pin consciousness down

A cherished Christian nationalist goal is shifting the costs of religious practice and religious education off of the churches and onto secular society, moving the law away from secular principles and toward Christian Sharia. By doing so, the Christian religion would (i) save hundreds of billions/year in worship and child education costs (on top of the generous tens of billions/year in tax breaks that religion already enjoys at everyone's expense), and (ii) shift serious power and wealth to the Christian religion and Christian theocracy at the expense of secularism, civil liberties and democracy. The Conversation writes about an impending major step toward that cherished power and wealth goal:
Co-workers could bear costs of accommodating religious employees in the workplace if Supreme Court tosses out 46-year-old precedent

The Supreme Court may soon transform the role of faith in the workplace, which could have the effect of elevating the rights of religious workers at the expense of co-workers.

On April 18, 2023, the court heard oral arguments in Groff v. DeJoy, a case addressing an employer’s obligation to accommodate religious employees’ requests under federal law.

After listening to the oral arguments in the case, I believe it’s very likely the court will overturn the de minimis standard and require employers to accommodate more religious requests. As Justice Gorsuch stated, “I think there’s common ground that de minimis can’t be the test, in isolation at least, because Congress doesn’t pass civil rights legislation to have de minimis effect, right?”

In my view, as a scholar of employment discrimination, the only questions are how far the justices will go – and who will ultimately pay the price.

Employers are required to accommodate the religious needs of employees under Title VII of the Civil Rights Act of 1964, so long as they can do so without imposing an “undue hardship.”

Congress didn’t define what that term meant, and it took another dozen years for the U.S. Supreme Court to do so in Trans World Airlines v. Hardison. The court determined that Title VII does not require employers to bear more than a “de minimis” or minimal cost in accommodating religious employees.

Relying on this narrow decision, employees requesting religious accommodation in the workplace have generally fared poorly in the courts. Supporters of more religious accommodation in the workplace have tried many times to amend Title VII to redefine undue hardship as a “significant difficulty or expense.”
______________________________________________________________________
______________________________________________________________________

Question: There is a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner.

Answer: Place the book flat on a level surface to serve as the base. Arrange the eggs on top of the book in three rows with space between them, and make sure you don’t crack them. Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.

Is that consciousness? The answer came from the current generation of AGI (artificial general intelligence). As time passes and AGI gets better and better at mimicking human consciousness or mind, it will become increasingly difficult to distinguish the machine from the human. Humans being what they are, some will see consciousness or a mind in the machine. Humans have a tendency to anthropomorphize, i.e., attribute human traits to non-human things, e.g., Gods, robots, sex dolls, cars, dogs, trees, etc.

An advanced Google Scholar search for the phrase "human tendency anthropomorphize" gives 26,700 hits in the research literature. One of those hits, a 2018 research paper, comments:
At the core of anthropomorphism lies a false positive cognitive bias to over-attribute the pattern of the human body and/or mind. Anthropomorphism is independently discussed in various disciplines, is presumed to have deep biological roots, but its cognitive bases are rarely explored in an integrative way. .... The search for pertinent patterns in the world is ubiquitous among animals, is one of the main brain tasks and is crucial for survival and reproduction. However, it leads to the occurrence of false positives, known as patternicity: the general tendency to find meaningful/familiar patterns in meaningless noise or suggestive clusters (Shermer, 2008). Patternicity can be visual, auditory, tactile, olfactory, gustatory or purely psychological.
The researchers included Dr. Bubeck, a 38-year-old French expatriate and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof showing that there were infinite prime numbers and do it in a way that rhymed.

The technology’s poetic proof was so impressive — both mathematically and linguistically — that he found it hard to understand what he was chatting with. “At that point, I was like: What is going on?” he said in March during a seminar at the Massachusetts Institute of Technology.

For several months, he and his colleagues documented complex behavior exhibited by the system and believed it demonstrated a “deep and flexible understanding” of human concepts and skills. 
“All of the things I thought it wouldn’t be able to do? It was certainly able to do many of them — if not most of them,” Dr. Bubeck said.

When people use GPT-4, they are “amazed at its ability to generate text,” Dr. Lee said. “But it turns out to be way better at analyzing and synthesizing and evaluating and judging text than generating it.”
One can see the anthropomorphizing in Dr. Bubeck's comment that the AGI has a “deep and flexible understanding” of human concepts and skills. AGI “understands” nothing. It is a machine that has unpredicted emergent properties that look like a human brain-mind or consciousness, but it isn't conscious. 

This is probably going to cause a lot of mischief -- lots of people will be fooled, manipulated, ripped off and/or betrayed. This exemplifies the problem:

(Screenshot: an AI-generated dialog between Socrates and Aristotle)
An autoregressive language model is a machine learning model that uses statistical techniques to predict the next word in a sequence based on the words that came before it. Models of this kind are used for tasks such as natural language processing and machine translation. Ask yourself: how in hell did statistical language analysis come up with that Socrates-Aristotle dialog?
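To make the mechanism concrete, here is a minimal sketch of the autoregressive idea in Python. This is a toy bigram model, nothing like GPT-4's neural network; the tiny corpus and the helper names (next_word, generate) are invented for illustration. It only shows the core loop: sample the next word from a distribution conditioned on the preceding words, append it, and repeat.

import random
from collections import defaultdict, Counter

# A tiny made-up corpus, just enough to build word-follower statistics.
corpus = (
    "the machine predicts the next word "
    "the next word follows the previous word"
).split()

# Count how often each word follows each preceding word (a one-word context).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word given the previous word, weighted by frequency.
    counts = follow_counts.get(prev)
    if not counts:
        return random.choice(corpus)  # dead end: fall back to a random word
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(seed, length=8):
    # Autoregressive generation: each prediction is fed back in as context.
    out = [seed]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))

Scaled up by many orders of magnitude, with a neural network in place of this lookup table and thousands of words of context instead of one, that feedback loop is the statistical machinery the question above is pointing at.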

A major issue here is that experts have not yet figured out a way to distinguish machine-software output from human brain-mind output. As far as I know, experts still have not found a good way to measure human consciousness other than by observing the electrical signals a conscious brain generates. That's primitive. Other than by interacting directly, face-to-face, there may be no reliable way to distinguish a human brain-mind from a non-conscious machine-software AGI “mind.” 

Finally, there's this bit of unsettling news from the NYT -- to win, capitalism races to the bottom while our broken Congress fiddles, piddles, diddles and dithers as we all get scorched: 
In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted the individual.

Essentially, Meta was giving its A.I. technology away as open-source software — computer code that can be freely copied, modified and reused — providing outsiders with everything they needed to quickly build chatbots of their own.

“The platform that will win will be the open one,” Yann LeCun, Meta’s chief A.I. scientist, said in an interview. 
As a race to lead A.I. heats up across Silicon Valley, Meta is standing out from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future.
Humans really do sometimes blithely rush in where angels fear to tread. Let's all hope that Zuckerberg is right and things will turn out just fine and hunky dory. Meanwhile, I'm just gonna buy a scimitar and prepare for what's probably coming:

(Image: a scimitar model with a scimitar)
