DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide some sources for the facts and truths you rely on if you are asked for that. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insult makes people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Saturday, December 21, 2024

Science bits; Bird flu; Fun with OAEs; Extinctions science; Fun with AI -- getting more like brains

Baby capybara & mom

How the U.S. Lost Control of Bird Flu, Setting the Stage for Another Pandemic
As the bird flu virus moved into cows and people, sluggish federal action, deference to industry and neglect for worker safety put the country at risk

Experts say they have lost faith in the government’s ability to contain the outbreak.

“We are in a terrible situation and going into a worse situation,” said Angela Rasmussen, a virologist at the University of Saskatchewan in Canada. “I don’t know if the bird flu will become a pandemic, but if it does, we are screwed.”

This investigation revealed key problems, including deference to the farm industry, eroded public health budgets, neglect for the safety of agriculture workers, and the sluggish pace of federal interventions.

Case in point: The U.S. Department of Agriculture this month announced a federal order to test milk nationwide. Researchers welcomed the news but said it should have happened months ago — before the virus was so entrenched.
But, not to worry, DJT has a plan to deal with another pandemic. He's gonna let it spread through the population so we all get herd immunity. See, easy peasy, fun and squeazy.

Well, all of us who survive will get herd immunity, maybe. 👍 The dead ones will be dead. 👎
_________________________________________________________________
_________________________________________________________________

Cosmos Magazine writes about OAEs, which have caused mass extinctions in the past:
The Ocean Anoxic Event 1a (OAE 1a) was one of the largest disruptions of the global carbon cycle and climate system in the past 200 million years.

New research published in Science Advances has determined that it was triggered 119.5 million years ago and lasted for 1.1 million years, during the Early Cretaceous Period.

“Ocean anoxic events occur in part as a consequence of climatic warming in a greenhouse world,” says Brad Sageman, senior author of the study from Northwestern University in the US.
As usual, that raised a question or two. I asked Dr. Perplexity the obvious one:

Q: How close are we to repeating something like Ocean Anoxic Event 1a (OAE 1a) due to carbon dioxide (CO2) increases and global warming?

A: .... long answer .... In summary, while we are not currently experiencing an event like OAE 1a, the current trajectory of CO2 emissions and the resulting increase in atmospheric CO2 concentrations are moving us closer to conditions that could potentially trigger anoxic events in the oceans. The rate of CO2 rise is faster than what is required to meet the Paris Agreement's 1.5°C warming limit, indicating a need for more aggressive mitigation efforts to prevent such an outcome.
  
That was not a satisfying answer. In response to Q2, A2 was equally unsatisfying.
Q3: At the current rate of CO2 increase in the air, how long would it take to reach a CO2 level that could cause an OAE?
A3: Given the current atmospheric CO2 concentration of 422.5 ppm in 2024, and assuming the rate of increase remains constant at 2.84 ppm per year:

Time to reach 700 ppm is ≈ 98 years

However, this calculation assumes a linear increase in CO2 levels, which is an oversimplification. The actual trajectory could be influenced by various factors.
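
For anyone who wants to check the arithmetic, here's a minimal back-of-the-envelope sketch, assuming the figures Perplexity gave (422.5 ppm in 2024, a constant 2.84 ppm/year rise, and 700 ppm as the illustrative threshold) are the right inputs:

```python
# Linear extrapolation quoted above; all inputs come from the AI answer, and
# the 700 ppm threshold is illustrative, not a hard geochemical tipping point.
CURRENT_PPM = 422.5       # atmospheric CO2 in 2024
RATE_PPM_PER_YEAR = 2.84  # assumed constant annual increase
THRESHOLD_PPM = 700.0     # illustrative level used in the answer

years = (THRESHOLD_PPM - CURRENT_PPM) / RATE_PPM_PER_YEAR
print(f"Years to reach {THRESHOLD_PPM:.0f} ppm: {years:.0f}")  # ~98 years, i.e., around 2122
```
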
So, to be conservative, let's assume things get worse faster than projected, because that's what has been happening too often for comfort over the last ~20 years. That means we might get oceanic anoxia and a mass extinction in ~50 years. One can only wonder how close to 700 ppm humanity is willing to get. Guess we're gonna find out.
_________________________________________________________________
_________________________________________________________________


Climate change extinctions
We can expect, with increased certainty, that rising temperatures will lead to an increasing number of extinctions, with the highest emission scenario leading to extinction of nearly a third of the Earth’s species, especially those from particularly vulnerable taxa or regions.

Climate change is expected to cause irreversible changes to biodiversity, but predicting those risks remains uncertain. I synthesized 485 studies and more than 5 million projections to produce a quantitative global assessment of climate change extinctions. With increased certainty, this meta-analysis suggests that extinctions will accelerate rapidly if global temperatures exceed 1.5°C. The highest-emission scenario would threaten approximately one-third of species, globally. Amphibians; species from mountain, island, and freshwater ecosystems; and species inhabiting South America, Australia, and New Zealand face the greatest threats. In line with predictions, climate change has contributed to an increasing proportion of observed global extinctions since 1970. Besides limiting greenhouse gases, pinpointing which species to protect first will be critical for preserving biodiversity until anthropogenic climate change is halted and reversed.
_________________________________________________________________
_________________________________________________________________

Artificial intelligence (AI) systems tend to take on human biases and amplify them, causing people who use that AI to become more biased themselves, finds a new study by UCL researchers.

Human and AI biases can consequently create a feedback loop, with small initial biases increasing the risk of human error, according to the findings published in Nature Human Behavior.

"Here, we've found that people interacting with biased AI systems can then become even more biased themselves, creating a potential snowball effect wherein minute biases in original datasets become amplified by the AI, which increases the biases of the person using the AI."

LLMs are becoming more brain-like as they advance, researchers discover
Large language models (LLMs), the most renowned of which is ChatGPT, have become increasingly adept at processing and generating human language over the past few years.

Researchers at Columbia University and Feinstein Institutes for Medical Research Northwell Health recently carried out a study investigating the similarities between LLM representations and neural responses to language. Their findings, published in Nature Machine Intelligence, suggest that as LLMs become more advanced, they not only perform better but also become more brain-like.

"To estimate the similarity between these models and the brain, we tried to predict the recorded neural responses to words from the word embeddings. The ability to predict the brain responses from the word embeddings gives us a sense of how similar the two are."

"First, we found that as LLMs get more powerful (for example, as they get better at answering questions like ChatGPT), their embeddings become more similar to the brain's neural responses to language," said Mischler.

"More surprisingly, as LLM performance increases, their alignment with the brain's hierarchy also increases. This means that the amount and type of information extracted over successive brain regions during language processing aligns better with the information extracted by successive layers of the highest-performing LLMs than it does with low-performing LLMs."

The results gathered by this team of researchers suggest that the best-performing LLMs mirror brain responses associated with language processing more closely. Moreover, their better performance appears to be due to the greater efficiency of their earlier layers.

LLMs, like the human brain, develop internal representations of words known as embeddings. These embeddings capture semantic and syntactic relationships between words. As LLMs become more powerful, their embeddings increasingly resemble the neural responses recorded from the human brain when processing language.

The study found that the layers of LLMs correspond more closely to the hierarchical processing of language in the brain. In the brain, language processing involves a gradual build-up of representations from acoustic to phonetic and eventually to more abstract components. Similarly, the layers of high-performing LLMs extract information in a manner that aligns with this brain hierarchy.
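
Here is a hedged sketch of that hierarchy comparison, again with random placeholder data and a hypothetical layer/region layout rather than anything from the paper: for each brain region, find the LLM layer whose activations predict it best, and check whether that best layer gets deeper for regions later in the pathway.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Placeholder stand-ins: hidden states from each LLM layer and an averaged
# response per brain region, both random here just to show the bookkeeping.
n_words, emb_dim, n_layers, n_regions = 300, 256, 12, 6
layer_acts = np.random.randn(n_layers, n_words, emb_dim)
region_resp = np.random.randn(n_regions, n_words)

for r in range(n_regions):
    fits = [cross_val_score(Ridge(alpha=10.0), layer_acts[l], region_resp[r],
                            cv=5, scoring="r2").mean()
            for l in range(n_layers)]
    # In a strongly brain-aligned model, the best layer index should increase
    # as we move to regions later in the language-processing pathway.
    print(f"region {r}: best-fitting layer = {int(np.argmax(fits))}")
```
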

The research suggests that the better performance of advanced LLMs is partly due to the greater efficiency of their earlier layers. These layers are crucial in capturing the foundational aspects of language, much like the initial stages of language processing in the brain.

Hm, something I do not understand is going on here. 😕
