Tuesday, May 13, 2025

AI Update

On and off, I've been writing informally about AI here ever since I started this blog. There have been many advances recently, and, though I'm hardly an expert in this field, I think that a new picture is emerging, and this, in conjunction with some of my other observations, merits further discussion. Originally, I was thinking about Ray Kurzweil's ideas on AGI, and, later, I read Luke Dormehl, Max Tegmark and Stuart Russell. With the newer AI services now available, there seem to be actual social changes emerging for the first time. Because of my readings in biology and human evolution, I tend to view AI in that context.

When I observe people, I am now in the habit of examining their evolutionary past, which provides a framework for understanding their operating ideas. Generally, people believe that they are rational, and democratic governments are also based on that idea. The books that I've read by E.O. Wilson, Vinod Goel, Richard Thaler, Robert Plomin, Robert Sapolsky and Daniel Kahneman have convinced me that, to a significant degree, humans are biological automatons, and, rather than being the rational agents that some economists apparently still think we are, we are the product of billions of years of evolution, and our current behavior is governed primarily by our evolutionary past. The problem is that our current environment bears little resemblance to that of our early human ancestors, who lived seventy thousand years ago. While we can collectively adapt to many of the current changes in our environment, we lack the capacity to change our basic human nature, which means to me that some of our behaviors are always going to be roughly equivalent to those of chimpanzees and bonobos. Chimpanzees and bonobos lack our verbal and conceptual abilities, but otherwise they're not much different from us socially.

As Jennifer Rubin says in my recent post, corporate news reporting has come to resemble stenography. I also notice this on PBS NewsHour. There, some of the reporters manage to squeeze in needed critiques of the political actions that are occurring now. I'm not sure that Amy Goodman is doing much better on Democracy Now. At the moment, PBS NewsHour and Democracy Now have slightly improved their reporting on Donald Trump, because it is readily apparent that he is incompetent in both economic policy and foreign affairs, and his general corruption is impossible to conceal. Even as he attempts to emulate the leading oligarchs, Vladimir Putin, Xi Jinping, Recep Tayyip Erdogan and Benjamin Netanyahu are running circles around him. Domestically, his illegal empowerment of Elon Musk could result in the earth becoming the galactic headquarters for Musk's personal reproductive program. Women beware! Participation may not be optional. This situation has revived my ideas about how AI could be used to counter this foolishness. While there are always risks associated with the general release of AI tools, it is easy to imagine their use evolving to answer complex questions in formats similar to Google searches. Originally, I thought of this as a way to help people make informed voting decisions. That still applies. Fifty years ago, Americans would have been astounded to learn that in 2024 the worst president in American history would be reelected for a second term. They wouldn't have believed their eyes if they had seen Donald Trump being glorified on Fox News or his pompous performances with his lackeys at the White House.

Anyway, I think that we're almost at a point where individuals could make AI queries such as "Given my current situation and personal goals, which candidate for president is more likely to meet my needs?" Theoretically, this could benefit the entire world by reducing future chances of a second term for foolish buffoons.
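
To make the idea concrete, here is a minimal sketch of what such a query might look like if it were issued programmatically rather than typed into a chat window. The endpoint, the ask_ai helper and the context fields are hypothetical placeholders for illustration, not any particular service's API.

    # Hypothetical sketch only: the URL and field names are placeholders,
    # not a real AI service's interface.
    import json
    import urllib.request

    def ask_ai(question, context):
        """Send a question plus personal context to a generic AI query service."""
        payload = json.dumps({"question": question, "context": context}).encode("utf-8")
        request = urllib.request.Request(
            "https://example.com/ai-query",          # placeholder endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["answer"]

    print(ask_ai(
        "Given my current situation and personal goals, which candidate for "
        "president is more likely to meet my needs?",
        {"priorities": ["healthcare costs", "retirement income"]},
    ))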

A newer question has been coming up recently: how well do humans actually think in comparison to various potential AI configurations? It appears to me that we are already reaching our cognitive limits in how well we can understand both the structure of the universe and the subtleties in the functioning of organisms. I found this video on Gödel's incompleteness theorems rather amusing in that it shows a typical group of pundits demonstrating their ignorance on the topic. In this instance, the pundits generally don't know what they're talking about even though they seem to have the appropriate credentials. Sociologically, they are hardly any different from a group of pundits on Fox & Friends. Similarly, Sabine Hossenfelder suggests that the universe may be beyond our comprehension, and Robert Sapolsky suggests that biological processes may occur in a manner that we can barely comprehend. It is becoming apparent that we need all the help that we can get from AI if it ever lives up to its potential. That help extends far beyond the creation of a new billionaire or trillionaire class.

I still think that the greatest risk associated with AI is that something resembling AGI could fall into the wrong hands. That would roughly be the equivalent of a red button that activates a nuclear war falling into the hands of a chimpanzee.

Friday, January 24, 2020

Human Compatible: Artificial Intelligence and the Problem of Control

I've just finished reading this book (actually, a cheaper advance uncorrected proof) by Stuart Russell, which continues my interest in AI, reflected in the books by Max Tegmark and Luke Dormehl that I discussed in 2017 and 2018. Russell is a prominent AI researcher at UC Berkeley, so his perspective is a little different from that of Tegmark, who is a physicist, and Dormehl, who is a journalist. I found Russell more focused on the important issues at hand, but some of the chapters weren't of much interest to me, and I consequently spent little time on them. I got the sense that academic specialization has impeded Russell's perspective a little (it usually does), though he conveyed the urgency better than Tegmark did in his book and much better than Dormehl did in his.

As the title suggests, the ascent of AI poses new problems for humans that are best addressed now rather than later. Russell uses the analogy that if we knew that an asteroid was going to strike the earth in a few years, we wouldn't wait for its arrival to start planning. The main question is how to prevent AI from becoming dangerous to mankind as technology approaches Artificial General Intelligence (AGI), or, more loosely, superintelligence. I am not a programmer and don't care about programming, so I didn't pay much attention to those chapters. However, Russell also traces the history of AI and compares it to developments that have taken place in other scientific fields. He mentions a story about the physicists Ernest Rutherford and Leo Szilard. In 1933, Szilard read an article by Rutherford stating that although it was known that matter contains a great deal of energy, there was no practical way to extract it. The same day, Szilard thought of the idea of a nuclear chain reaction, which led directly to the first man-made, self-sustaining nuclear chain reaction in 1942. Russell thinks that although the path to AGI is not immediately clear, a new idea that makes it possible could suddenly emerge at any time. So much AI research is occurring at the moment that it would not be surprising at all to have AGI appear within a few decades. Russell believes that the main pieces for its development are already in place. The necessary hardware exists, and the use of probabilistic, self-learning algorithms is currently producing useful results.

Most of the problems related to AGI that are addressed in this book concern control. Sections are devoted to finding ways of incorporating human values into algorithms. Russell casts a wide net, which I think is far wider than necessary. For example, he considers ideas from economics and philosophy that I don't think are relevant. I liked the fact that he disagreed with Steven Pinker, who thinks that AI doesn't pose a risk. The main risk is that AGI will be far more intelligent than any human ever could be and that if it has goals that do not appropriately value human well-being, the consequences could be disastrous. The problem consists mainly of incorporating the right human-friendly behavior into the algorithms and enforcing such procedures in national and international agreements. From a formal standpoint, the agreements would be similar to current international agreements on potentially dangerous gene-editing techniques such as CRISPR.

Although Russell covers his bases well and discusses what life might be like if a safe AGI arrives, that is not the main theme of the book. I think that by focusing on the risk to human existence, he doesn't devote enough space to changes that might occur in favorable circumstances. Even while stating that this would be the most significant event in human history, he doesn't speculate much on how we would react to it. He mentions Ray Kurzweil's idea of merging human brains with robotic bodies and expresses some skepticism, as I do. He also touches on some of the social changes that would be likely to occur. Perhaps everyone will spend most of their time in advanced virtual realities. More immediately, it would be a dramatic shock if AGI, in short order, stopped climate change, created a source of unlimited clean energy and replaced most human jobs along with all governments. While on the surface these would represent positive changes, they would also be traumatic. New ways would have to be found for people to occupy their time. Up to now, people have been able to distinguish themselves by working harder, having better ideas or being more productive than their peers, and those human-level talents would pale in comparison to AGI. The very idea of "genius" would instantly become obsolete, because all of the most important ideas that were originated by humans could perhaps be discovered and improved upon in seconds by AGI. How would people increase their social status under such circumstances? How would human interactions be affected by the introduction of realistic androids? Russell repeatedly refers to the gorilla analogy, in which gorillas were once a dominant primate species but became endangered when humans came along: the same may apply to us when AGI arrives.

I am also a little surprised that Russell has little to say about the potential abuse of AGI by humans. It is to be expected that individuals will attempt to control it in order to achieve their personal objectives rather than to support the welfare of mankind. What would happen, for example, if Vladimir Putin took control of an AGI developed in Russia? This would probably be just as disastrous an outcome as the rogue AGI to which Russell devotes so much of the book.

Wednesday, March 28, 2018

Life 3.0: Being Human in the Age of Artificial Intelligence II

The book on the whole provides a scattershot view of the future of AI. Tegmark seems to include snippets of just about everything he knows on the subject. While one does get exposure to many aspects of AI, there is a lack of focus throughout the book, and, in my opinion, Tegmark draws far too much from the wide range of science fiction that he apparently has read. Instead of the multiple scenarios that he brings up, I would have preferred more basic categories, such as 1. independent superintelligent AI acting benevolently toward humans; 2. independent superintelligent AI acting maliciously toward humans; 3. superintelligent AI controlled by humans and acting benevolently toward humans; and 4. superintelligent AI controlled by humans and acting maliciously toward humans. Since one of the underlying themes of the book is the existential risk associated with AI, I think these would have been a better starting point. He includes many speculative ideas from all sources and organizes them into groups without reaching any definitive conclusions. The book is supposed to be a conversation-starter for those who are interested in the topic, and, as such, leaves each topic too open for my liking. I would have found it more effective if he had restricted himself to probable scenarios, which would have reduced the length of the book considerably. Some chapters veer off into pie-in-the-sky futures that have little likelihood of ever materializing. However, the book warrants attention, since Tegmark is concerned about existential risk and is one of the founders of the Future of Life Institute, which is one of the very few organizations in the world that studies this important topic.

Tegmark says very little about what I think is one of the most likely scenarios: superintelligent AI controlled by some humans and acting maliciously toward other humans. He spends what I consider to be too much time on independent superintelligent AI destroying mankind. Where I seem to differ with him is in my understanding of life. Almost the entire book is framed within the context of goals, whether they are the goals of humans or of superintelligent AI. In my view, goals are a minor aspect of humanity. We are no different from other animals in that we are driven by DNA-encoded behavior which generally leads us to reach adulthood, engage in sex, have children and raise them. Goals do not play a role in this except in the sense that we happen to superimpose an intellectual schema on our behavior, but in reality we would most likely behave exactly the same way without any deliberate plans to raise families. Though it is true that some aspects of modern society, such as the availability of birth control, have changed the landscape a little, in a biological sense we are hardly any different from people who lived hundreds of years ago. Speaking for myself, I have never been goal-oriented, and it seems possible that Tegmark and his cohort, which includes Elon Musk, are goal-driven in the extreme, but are hardly representative of most people. They may also be ascribing their goal hysteria to inanimate objects such as superintelligent AI. In my view, the outcomes that we prefer have no meaning outside the human sphere, and it is folly to think that sophisticated computers would have comparable preferences. We only think that living is good and death is bad because we have a biological imperative, and that imperative would not be shared by superintelligent AI unless it were programmed into it. Being dead or alive makes no difference to non-organisms, and it may be that Tegmark is unwittingly engaging in anthropocentric conceit. Thus, I think that Tegmark is somewhat misguided in not focusing more attention on the possible abuse of superintelligent AI by an individual or group that doesn't represent the interests of mankind as a whole.

I did not find most of the book objectionable, but didn't pay close attention to much of it, because I was not interested in many of the subjects. The only section that I thought was completely incorrect was Tegmark's view on intelligent extraterrestrial life. He proposes an obscure statistical model which indicates a low probability of other intelligent life anywhere in the universe. On this front, I go with more mainstream thinking. If one assumes that there is no magical ingredient to the formation of life, and that the evolutionary processes on earth that led to our existence are not unusual, the obvious procedure is to determine how many sun-like stars there are in the universe and how many of those are likely to possess planetary systems like the solar system. The fact is that our sun isn't unusual, and many stars have planets. Thus, given that there are billions of galaxies that each contain billions of stars, it seems likely that earth-like conditions aren't all that rare. Furthermore, there is no reason to dismiss the possibility that life has emerged on planets orbiting stars unlike the sun. At one point, Tegmark refers to himself as crazy, and here I can see why. Another section that I could have done without is the chapter on consciousness. Tegmark remains neutral on the topic, but I find it mostly irrelevant. I think consciousness is simply a biological feature that amounts to little more than self-awareness. As I've said, there is a continuum between small mammals and humans, and there is not a marked difference between chipmunk-level consciousness and human-level consciousness. For mammals, consciousness seems to be a byproduct of how the brain operates, and, to me, higher consciousness simply refers to more sophisticated brain function. There is no need to think about consciousness in AI, since it would not exist unless self-awareness were programmed into the AI.

In a similar vein, there is what I think of as a conceptual misunderstanding among many AI futurists. They envision their futures as immortal cyborgs or digitized people who roam the universe and populate other regions for eternity. It seems to me that they are extrapolating from their current mental states to their future mental states without taking into consideration significant changes that might occur in the process. What if, with superintelligence, they soon know all that they can ever know about the universe? How might this affect their enthusiasm for exploration and discovery? What if, once they have merged with superintelligent entities, immortality suddenly loses its appeal? If they do in fact become immortal, what would the point of reproduction be? I don't think they have considered the ways in which their current thinking is skewed, as it can only be in living organisms, or how their outlook might change. As I said in an earlier post, advanced extraterrestrials that reached superintelligence may have opted for death over life.

One of Tegmark's primary purposes in writing this book and founding the Future of Life Institute has been to increase awareness of the situations that could develop as AI advances. My feeling is that if it advances slowly, in incremental steps, and different groups reach comparable technological levels in unison, it will be possible to enact various safeguards in a manner similar to the safeguards that were adopted for biological weaponry. However, in the event that AI research makes a sudden major advance that is available only to one group, there is a significant chance that all bets will be off. In that case, the risk of abuse of power would be significant, and there may not be enough time to enact any safeguards. This kind of thinking is so far from public and political awareness that we can only hope for the extremely slow and coordinated development of AGI in the coming years.

Thursday, March 22, 2018

Life 3.0: Being Human in the Age of Artificial Intelligence I

There aren't many good general interest books on AI, and I have avoided reading the best known one, Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, because it was written by a philosophy professor, which, in my experience, guarantees that it will contain needless diversions and complications. For the same reason, I have not read Darwin's Dangerous Idea: Evolution and the Meaning of Life, by Daniel Dennett, even though I received it as a gift, am interested in Darwinism, like Daniel Dennett and have attended one of his lectures: he is a philosopher. I thought I would give Life 3.0, by Max Tegmark, a try, since he is a physics professor and seems less likely to inundate the reader with excess baggage. His writing quality is not the best, and he uses gimmicks, such as the title. Life 1.0 includes life forms that are stuck in a stimulus-response mode, in which they react mechanically in all situations; Life 2.0 includes life forms that can think and modify their behavior, i.e., humans; Life 3.0 includes life forms that can change both their thinking and their physical form. Tegmark refers to thoughts as software and bodies as hardware.

The opening chapter is a science fiction short story set in the near future, in which a tech company assembles a crack team of researchers to work on AI. Their goal is to create artificial general intelligence, or AGI, which entails a machine that is able to perform a wide array of intelligent tasks at least as well as humans. Thus far, AI hardware and software have been able to exceed human capabilities only in narrowly focused areas and have been incapable of performing a wide range of tasks. The team succeeds in steps, and their AI module, called Prometheus, gradually increases its capabilities. The company immediately decides to use Prometheus to create the maximum profit possible. One of its first potential projects is computer games, a field it could easily dominate, but it rejects that option because games would provide Prometheus with a way to escape. Gradually the company moves into other fields and vanquishes the competition. It is able to make virtual films that are calibrated to exactly match human preferences, and it soon controls the entertainment industry. Often, shell companies are set up to disguise the dominance of the company. From a security standpoint, extreme measures are taken to prevent Prometheus from having direct access to the Internet. Because Prometheus is able to consistently create the best products at the lowest cost, non-AGI companies are unable to compete. Then the focus turns to politics, and Prometheus identifies the exact characteristics needed in politicians and how they should be presented if they are to be elected. Over time, the company is so profitable that it is able to absorb costs previously covered by government spending. The need for government services is reduced when the company successfully advocates massive privatization and then absorbs the costs of social services itself. Because of high efficiency and automation in the economy, there is widespread unemployment, and the company supports those who are unemployed by giving them jobs in community service. Finally, through its economic and technical strength, the company takes over the world.

Although this story isn't nuanced or detailed enough to be fully convincing, I think it does represent a plausible scenario for the future. In fact, the company roughly approximates Amazon.com, which is actively engaged in AI research. It is already noticeable that Amazon.com has expanded into unrelated businesses and is succeeding in them. In previous decades, companies that expanded this way often became unwieldy conglomerates, which eventually led to their breakup into separate companies because of their unmanageability. Even recently, RR Donnelley, the large printing company that I used to work for, was broken up into three companies, based on markets served. So far, Amazon.com is going in the opposite direction, and AI may already be playing a role in its management decisions and strategy. I recently noticed that Amazon.com may be expanding through shell companies. When I began to research pet food in 2016, I came across Reviews.com, which was the only site I could find that reviewed cat food that didn't have an obvious connection to pet food manufacturers. I was a little suspicious, because the recommended brands all had links to Amazon.com, but I didn't think about it much at the time, since the research seemed convincing. I didn't buy any cat food through Amazon.com, because other sites had the same products for less. Recently, I took another look at Reviews.com's cat food recommendations, and they were almost completely different; all of the new brands also had links to Amazon.com. There was no explanation as to why the brands that I had been buying disappeared. In the fine print, it is explained that, while all the endorsed brands are good, some of them are sponsored brands which provide the revenue to run the site. Reviews.com, unsurprisingly, is located in Seattle, where Amazon.com is headquartered. I would guess that nearly all of their research is based on data that is available in the public domain, and that they have very few employees. Their analysis is probably performed with software that other companies do not possess. Reviews.com is probably a cost-effective way for Amazon.com to boost its revenues.

Also, by coincidence, the influence on political campaigns by Cambridge Analytica, which recently came to light, mirrors the use of technology in the story. However, in the case of Cambridge Analytica, wealthy individuals such as Robert Mercer, rather than large corporations, seem to be focused only on political influence. If Mercer helped Donald Trump win the 2016 presidential election, he is unlikely to attain whatever goals he may have had, since Trump obviously was not the right person for the job; he has been unpopular since day one, doesn't seem to know what he's doing, probably won't be reelected and will be lucky if he remains in office until the end of his first term. And it seems unlikely that Cambridge Analytica uses sophisticated AI. More likely, they were able to devise an effective campaign strategy by mining data from Facebook, processing it a little and using well-worn propaganda techniques.

I've still got a long way to go in the book, but it looks as if it covers all of the topics I've brought up before on this blog about AI, so it should be quite informative. I think Tegmark has a genuine concern regarding the effects of AI on human destiny. His science fiction short story is probably not the best way to open a book of serious nonfiction, but it does demonstrate what could happen in a possible future. In that instance, do we want the world to be run by Jeff Bezos? There are other scenarios, in which, say, China develops AGI first, or different countries or organizations develop it simultaneously. Since I think that AGI is likely to be developed, possibly in my lifetime, I don't consider this idle speculation, and I'll have more to say.

Monday, April 10, 2017

Thinking Machines: The Quest for Artificial Intelligence and Where It's Taking Us Next II

In subsequent chapters, Dormehl describes new applications of AI, interviews various researchers and discusses issues that may come up in the future. One of the recent developments has been the arrival of software such as Apple's Siri, which acts as a personal assistant. There are several areas in which specific applications of AI have produced human-level results or better. Neural networks have been specifically designed to win at Jeopardy!, conduct drug research, drive cars and design equipment for NASA. We are currently surrounded by data mining on an enormous scale, and it seems as if companies such as Google and Facebook will soon understand people and their needs far better than we understand ourselves.

The most unsatisfactory chapter covers the effects that AI will have on employment. Although it is obvious that it will soon be able to perform most tasks better than humans, Dormehl paints a rather naïve scenario in which people are employed either by producing code or by working as artisans and selling their wares on the Internet. Like many young, tech-savvy writers, he glosses over the basic economic problems that are being caused by new technology, particularly the fact that AI is driving down costs in most industries and that many traditional careers are disappearing. If you take the optimistic position, it is possible to envision a utopian future in which AI makes life better for everyone and standards of living generally improve, but Dormehl says nothing about how this major transition would occur and seems blind to the actual political and economic environment in which everyone lives. We are evolving toward a "gig" economy in which few have permanent employment or job benefits, and without significant structural changes most people are en route to lower incomes and little or no job security, which would destabilize society.

For my needs, Dormehl seems to do a fairly good job of distinguishing the types of AI that exist or will exist. First, there is the old number-crunching version that works with brute force through all of the possibilities, such as the early IBM Deep Blue, which defeated Garry Kasparov in chess. Then there is the neural network type that roughly simulates the human brain and processes large amounts of data to arrive at solutions. The former is logical and mainly involves a human-made program processing more data than a person could. The latter finds solutions statistically, without a step-by-step process, and though it can come up with excellent solutions to specific problems, it may be impossible to understand the internal logic of the outcome, which detracts from confidence in its reliability. The next step in AI will be artificial general intelligence, or AGI, in which AI will be able to perform across a wide range of tasks like a human, rather than in the task-specific manner in which AI works now. The hypothetical singularity will occur when AGI surpasses human capabilities.
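
To make the contrast concrete, here is a toy sketch of the two styles; it is my own illustration, not an example from Dormehl's book. The first function exhaustively searches every line of play in a simplified take-away game (a stand-in for Deep Blue's brute-force search), while the second fits a simple model to noisy data by repeated small adjustments (a stand-in for the statistical, learn-from-examples style). The game, the function names and the numbers are made up for the example.

    import random

    # Brute-force style: decide whether the player to move can force a win in a
    # take-away game (remove 1-3 stones; whoever takes the last stone wins) by
    # exhaustively checking every possible continuation.
    def can_win(pile, take_options=(1, 2, 3)):
        return any(take <= pile and not can_win(pile - take, take_options)
                   for take in take_options)

    # Statistical style: fit y = w*x + b to noisy examples by gradient descent.
    # No rule is written down in advance; the answer is gradually learned from data.
    def fit_line(data, epochs=200, lr=0.05):
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, y in data:
                err = (w * x + b) - y
                w -= lr * err * x
                b -= lr * err
        return w, b

    samples = [(i / 10, 2 * (i / 10) + 1 + random.uniform(-0.1, 0.1)) for i in range(20)]
    print(can_win(7))         # True: the exhaustive search finds a forced win
    print(fit_line(samples))  # roughly (2.0, 1.0), recovered from the noisy samples

The first approach is fully transparent but explodes combinatorially on games like chess; the second scales to messier problems but, as the paragraph above notes, its internal logic can be hard to inspect.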

Then there is discussion of extreme futurists such as Ken Hayworth, who says "I absolutely believe that mind uploading is possible and I think it's something we should actively be working toward." Some futurists are obsessed with digitizing themselves and becoming immortal. This doesn't interest me at all: I'd rather die.

The most interesting chapter for me is the one on creativity in AI. It is already starting to occur and brings into question the nature of creativity itself. We are in the early stages of AI producing what is considered original, which had been the exclusive domain of humans. AI can already write rudimentary fiction, paint artistically and design new products. The capabilities of AI in these and other fields are sure to devalue what has been thought of as talent among humans. This is another area in which Dormehl seems oblivious to the effects of advances in AI. It is easy to imagine a future in which the supposed strokes of genius that have occurred throughout history are considered lucky stumbles by feeble brains. In the process of providing deeper insights into the world and new ways of expressing our humanity, AI will deflate a class of accomplishments that we have been using to assign social status among ourselves, because the importance of talent will be diminished once it becomes commonplace.

Dormehl also briefly covers the risks of AI and the moral and legal questions that are surfacing around it. However, the book is primarily a broad survey of the field and doesn't go to any great depth on the issues at hand. Nevertheless, I found it informative and useful for my rudimentary purposes.

Thursday, April 6, 2017

Thinking Machines: The Quest for Artificial Intelligence and Where It's Taking Us Next I

I am now reading this short book by Luke Dormehl to get a better handle on AI. With all the fuss I've made about the subject, I thought I ought to inform myself on it a little better. This is a readable, journalistic history of AI from its earliest days up to the present, with some speculation about the future at the end. As far as I've read, it has covered the early years of computational logic under Marvin Minsky and others, which proved to be far less fruitful than people had hoped. Now I am moving into neural networks and deep learning, which have transformed the field into its current state. It shouldn't take me long to finish the rest of the book, and I will probably have more to say about the later chapters.

My interest in AI is not at all technical and has more to do with the sociological and philosophical changes that it is precipitating. There are still skeptics around, but I think we are well on the way to superintelligence, and there is already a certain pressure to reevaluate our collective self-conceptions and modes of living. If you have a materialistic view of the world, there is no magic ingredient to human beings that can't be replicated and magnified or reworked into a more effective form. But even without superintelligence, radical changes are occurring, because businesses are requiring fewer and fewer employees with the new technologies available. Developed countries are going to have to rethink their public policies whether they like it or not, because unemployment is slowly becoming the norm. Those who point to formulas of the past, such as boosting economic growth to increase household incomes, are toying with concepts that are nearly obsolete and have no chance of solving the social problems to come. In particular, the American model of working hard and getting ahead financially is increasingly untenable for the majority of workers, because their skills simply are not needed. It seems to me that as the demand for human labor declines, sinecures, basic income, or perhaps even the elimination of currency will replace the current model. At the policy level, little is being done in preparation now, because the political system is reactive to the immediate perceptions of voters who have no idea what is in store for them.

Another aspect of AI, which, fortunately, is being examined at the Centre for the Study of Existential Risk, is that it may result in unexpected disasters unless it is controlled properly. Even if the intentions of AI developers are good, AI may go awry or it may fall into the wrong hands. At this point I am less worried about it going awry than about it falling into the wrong hands. The wrong hands could be those of anyone from amoral technocrats to egomaniacs to religious fundamentalists, the latter including both Islamic terrorists and Christians. This technology is becoming powerful, and power has inevitably been abused throughout history.

Perhaps it is the philosophical aspects of AI that interest me the most. As I've said, we're not as smart as we think we are, but we've never had to deal with anything that clearly exceeds our intellectual capabilities. I expect there to be a series of shocks and rude awakenings that may change how we think about ourselves and our relationship to the universe. One of the reasons why I like the work of E.O. Wilson is that he was the first scientist to suggest that humans are eusocial creatures, like ants. This is simply an extension of Darwinism that, to me, provides the best framework for understanding our moral tendencies. AI researchers are currently a little stumped by the problem of making AI people-friendly, and that seems natural, because AI did not come into existence through a biological, evolutionary process in which morality became a key ingredient of survival. In fact, AI has no survival, reproductive or moral imperatives at all unless we build them into it. What we are about to find out is that most, if not all, of the "values" that we hold dear are mere evolutionary accidents that steered our behavior in a direction that allowed our species to survive up to the present. AI will not inherently possess any superstitions and will not be able to understand ours the way we do. I am wondering whether we will be able to understand the thinking processes of autonomous AI, because ultimately it will be self-teaching and will use methods that it develops on its own. I also think that there will be limitations to interfacing humans with AI, because our little brains have limited capacities. Eventually, assuming no disasters occur, AI will become the new God, but without the religious mumbo jumbo. My preference would be for it to become the keeper of our habitat, and I have no desire to expand the capabilities of my brain or to become immortal. That, in effect, would be death, because I would no longer be who I am now.

Monday, May 5, 2014

The Singularity

I'm about a third of the way through Capital and will eventually make at least one more post on it. Today I'll write about the singularity.

In earlier posts I suggested that government could be automated and that capitalism could be brought to an end if the right technology existed and were put into effect. This view generally fits within a gradualist framework under which new technology becomes incorporated into our lives without any major shocks, setbacks or unexpected turns of events. Alternatively, there are scenarios in which technological developments could suddenly cause radical shifts either for better or for worse.

The singularity, if you haven't heard of it, is the theoretical moment when artificial intelligence surpasses human intelligence, with unpredictable effects that could change Homo sapiens forever. Some proponents, such as inventor Ray Kurzweil, take this seriously and are planning their lives accordingly. Kurzweil is hoping to remain healthy and live as long as possible in order to benefit from coming technology that will make him immortal. Some experts think that Kurzweil is unrealistic regarding the biology of human longevity and the capabilities of technology. Others, such as physicist Max Tegmark, are more cautious regarding the singularity. His approach is that we don't know for certain that it will occur or what would happen if it did. Tegmark suggests that we discuss it ahead of time rather than wait and see.

I am not well-informed on artificial intelligence or biology, but I think a singularity is likely to occur, though I'm not sure when or what the results will be. Since I don't believe that there is anything special about human intelligence, I see no reason why supercomputers couldn't dramatically outthink humans in the not-too-distant future. They are already better at chess and Jeopardy. Certainly computers could have much larger memories and far greater processing capabilities than any human. Once they can be taught to learn, which does not seem to be an insurmountable task and already goes on at a rudimentary level, why couldn't they outperform us?
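
As a toy illustration of that rudimentary level (my own sketch, not drawn from any particular system), the short program below improves its choices purely from trial-and-error feedback, with no person telling it which option is best; the payout numbers are invented for the example.

    import random

    true_payouts = [0.2, 0.5, 0.8]      # hidden quality of three options
    estimates = [0.0, 0.0, 0.0]         # the program's current guesses
    counts = [0, 0, 0]

    for trial in range(1000):
        if random.random() < 0.1:       # occasionally try something at random
            choice = random.randrange(3)
        else:                           # otherwise pick the option that looks best so far
            choice = max(range(3), key=lambda i: estimates[i])
        reward = 1.0 if random.random() < true_payouts[choice] else 0.0
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print(estimates)                    # after many trials, roughly matches the hidden payouts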

Among the positive potential outcomes, humans might live much as they do now, but without having to work, and with increased longevity. There could be a benign merger between humans and machines that would create a new species without eradicating what we now think of as human nature. Conflicts might be resolved peaceably, the ecosystem could be managed better, and in theory everyone could be happy.

One negative outcome would be an uncontrolled rampage by supercomputers that don't act in the interests of humans. This has been a subject of science fiction for many years. It could probably be prevented but would require advance planning.

Another negative outcome, and perhaps more likely, would be the use of supercomputers to benefit one group of people but not others. Under this scenario, a small group of wealthy technocrats might rule the world, neutralizing or eliminating their opponents and accelerating their own evolution while excluding others. Or this could occur at a national level, in which case the supercomputers would simply represent the most advanced weaponry.

At present this may all appear too speculative, but I think one of these outcomes is possible. Keep in mind that the type of supercomputer I'm talking about here might be capable of making improved versions of itself, anticipating all human behavior, developing new energy sources that we have been unable to, designing and making weapons beyond our comprehension and obviating the need for human labor of any description. It might even write better novels, short stories, poems - and blog posts - than humans.