Tuesday, May 13, 2025

AI Update

On and off, I've been writing informally about AI here ever since I started this blog. There have been many advances recently, and, though I'm hardly an expert in this field, I think that a new picture is emerging, and this, in conjunction with some of my other observations, merits further discussion. Originally, I was thinking about Ray Kurzweil's ideas on AGI, and, later, I read Luke Dormehl, Max Tegmark and Stuart Russell. With the newer AI services now available, there seem to be actual social changes emerging for the first time. Because of my readings in biology and human evolution, I tend to view AI in that context.

When I observe people, I am now in the habit of examining their evolutionary past, which provides a framework for understanding their operating ideas. Generally, people believe that they are rational, and democratic governments are also based on that idea. The books that I've read by E.O. Wilson, Vinod Goel, Richard Thaler, Robert Plomin, Robert Sapolsky and Daniel Kahneman have convinced me that, to a significant degree, humans are biological automatons. Rather than being the rational agents that some economists apparently still think we are, we are the product of billions of years of evolution, and our current behavior is governed primarily by our evolutionary past. The problem is that our current environment bears little resemblance to that of our early human ancestors, who lived seventy thousand years ago. While we can collectively adapt to many of the current changes in our environment, we lack the capacity to change our basic human nature, which means to me that some of our behaviors are always going to be roughly equivalent to those of chimpanzees and bonobos. Chimpanzees and bonobos lack our verbal and conceptual abilities, but otherwise they're not much different from us socially.

As Jennifer Rubin said in my recent post, corporate news reporting has come to resemble stenography. I also notice this on PBS NewsHour, though some of the reporters there manage to squeeze in needed critiques of the political actions that are occurring now. I'm not sure that Amy Goodman is doing much better on Democracy Now. At the moment, PBS NewsHour and Democracy Now have slightly improved their reporting on Donald Trump, because it is readily apparent that he is incompetent in both economic policy and foreign affairs, and his general corruption is impossible to conceal. Even as he attempts to emulate the leading oligarchs, Vladimir Putin, Xi Jinping, Recep Tayyip Erdogan and Benjamin Netanyahu are running circles around him. Domestically, his illegal empowerment of Elon Musk could result in the earth becoming the galactic headquarters for Musk's personal reproductive program. Women beware! Participation may not be optional. For me, this situation has revived my ideas about how AI could be used to counter this foolishness. While there are always risks associated with the general release of AI tools, it is easy to imagine their use evolving to answer complex questions in formats similar to Google searches. Originally, I thought of this as a way to help people make informed voting decisions. That still applies. Fifty years ago, Americans would have been astounded to learn that in 2024 the worst president in American history would be reelected for a second term. They wouldn't have believed their eyes if they had seen Donald Trump being glorified on Fox News or his pompous performances with his lackeys at the White House.

Anyway, I think that we're almost at a point where individuals could make AI queries such as "Given my current situation and personal goals, which candidate for president is more likely to meet my needs?" Theoretically, this could benefit the entire world by reducing the future chances of a second term for foolish buffoons.

A newer aspect of AI has been coming up recently regarding how well humans actually think in comparison to various potential AI configurations. It appears to me that we are already reaching our cognitive limits in understanding both the structure of the universe and the subtleties in the functioning of organisms. I found this video on Gödel's incompleteness theorems rather amusing in that it shows a typical group of pundits demonstrating their ignorance of the topic. In this instance, the pundits generally don't know what they're talking about even though they seem to have the appropriate credentials. Sociologically, they are hardly any different from a group of pundits on Fox & Friends. Similarly, Sabine Hossenfelder suggests that the universe may be beyond our comprehension, and Robert Sapolsky suggests that biological processes may occur in a manner that we can barely comprehend. It is becoming apparent that we need all the help that we can get from AI, if it ever lives up to its potential. That help extends far beyond the creation of a new billionaire or trillionaire class.

I still think that the greatest risk associated with AI is that something resembling AGI could fall into the wrong hands. That would roughly be the equivalent of a red button that activates a nuclear war falling into the hands of a chimpanzee.
