
Tuesday, May 2, 2023

Science chunks: Advances in AI mind reading; A warning about AI

The NYT writes about the current state of the art in AI (artificial intelligence software) reading human minds. When I mentioned the existence of mind-reading research to a small group of people a few weeks ago, the universal reaction was that I was full of baloney and/or fibbing because such a thing is impossible. They flat-out denied that mind reading could ever be possible, and they could not be convinced otherwise. They were wrong. The NYT writes:
A.I. Is Getting Better at Mind-Reading

In a recent experiment, researchers used large language models to translate brain activity into words

[Image: a person's thoughts being translated into words by AI]

On Monday, scientists from the University of Texas, Austin, made another step [toward mind reading by machines]. In a study published in the journal Nature Neuroscience, the researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI [functional magnetic resonance imaging] scans, which measure the flow of blood to different regions in the brain.

Scientists recorded M.R.I. data from three participants as they listened to 16 hours of narrative stories to train the model to map between brain activity and semantic features that captured the meanings of certain phrases and the associated brain response. .... The researchers used a large language model to match patterns in brain activity to the words and phrases that the participants had heard.

Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write while just thinking of writing. But the new language decoder is one of the first to not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening onscreen.

Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, the models create maps indicating how words relate to one another. A few years ago, Dr. Huth [Alexander Huth, a neuroscientist who led the research at the University of Texas, Austin] noticed that particular pieces of these maps — so-called context embeddings, which capture the semantic features, or meanings, of phrases — could be used to predict how the brain lights up in response to language.

In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another A.I. to translate the participant’s fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings, then seeing how closely the translation matched the actual transcript.

Almost every word was out of place in the decoded script, but the meaning of the passage was regularly preserved. Essentially, the decoders were paraphrasing.
Original transcript: “I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead only finding darkness.” 
Decoded from brain activity: “I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.”
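
To make the technique the quoted passages describe a bit more concrete, here is a minimal Python sketch of the first, "encoding" step. This is not the study's actual code; every name, shape, and data array below is an illustrative assumption, with random numbers standing in for real recordings. The idea: fit a regularized linear model that predicts fMRI voxel responses from language-model context embeddings.

# A minimal sketch of an "encoding model" (not the study's code): learn a
# linear map from language-model context embeddings to fMRI voxel responses.
# All shapes, names, and the random stand-in data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data collected while a subject listens to stories:
#   embeddings: one context embedding per time point (n_timepoints x emb_dim)
#   voxels:     measured fMRI response at each time point (n_timepoints x n_voxels)
n_timepoints, emb_dim, n_voxels = 5000, 768, 1000
embeddings = rng.normal(size=(n_timepoints, emb_dim))
voxels = rng.normal(size=(n_timepoints, n_voxels))

# Ridge regression is a common choice for this kind of high-dimensional
# mapping because regularization keeps the many weights from overfitting.
encoder = Ridge(alpha=1.0)
encoder.fit(embeddings, voxels)

# Given the embedding of a new phrase, predict how the brain would "light up."
predicted_response = encoder.predict(embeddings[:1])
print(predicted_response.shape)  # (1, 1000): one predicted value per voxel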
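
And here is a sketch of the reversed, "decoding" step, continuing the code just shown: propose candidate word sequences (in the study, a large language model generated them; random embeddings stand in for their embeddings here), predict the brain response each candidate would evoke, and keep the candidate whose predicted response best matches the observed fMRI.

# Continuing the sketch above: decode by scoring candidate phrases against
# an observed fMRI frame. The real system used a large language model to
# propose candidates; random embeddings are hypothetical stand-ins here.
def score_candidates(encoder, candidate_embeddings, observed):
    """Correlate observed brain activity with the activity the encoder
    predicts for each candidate phrase; higher means a better match."""
    predicted = encoder.predict(candidate_embeddings)  # (n_candidates, n_voxels)
    obs = (observed - observed.mean()) / observed.std()
    mu = predicted.mean(axis=1, keepdims=True)
    sd = predicted.std(axis=1, keepdims=True)
    pred = (predicted - mu) / sd
    return pred @ obs / obs.size                       # one score per candidate

candidate_embeddings = rng.normal(size=(5, emb_dim))   # 5 hypothetical candidates
observed = voxels[0]                                   # one observed fMRI frame
scores = score_candidates(encoder, candidate_embeddings, observed)
print("best-matching candidate:", int(np.argmax(scores)))

Because candidates are matched by meaning rather than exact wording, a decoder built this way naturally paraphrases, which fits the transcript comparison above.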
What might be an inherent limit on how well AI can translate thoughts? Maybe the diversity in how human brains think about various concepts (discussed here a couple of days ago). Science seems to be getting fairly close, e.g., within ~20 years, to knowing whether fundamental barriers to deciphering minds exist or not.

________________________________________________________________
________________________________________________________________


The NYT writes about an expert warning about the potential for AI software to hurt people:

[Photo: AI expert Dr. Geoffrey Hinton]
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
As Larry Motuz commented a day or two ago, humans rush in where angels fear to tread. That seems about right. As a species, humans sometimes (not always) charge ahead with things and then either try to react to and soften the adverse consequences, or just let the bad stuff play itself out.


[Image: Dave: Oops!]
