The Metamorphosis of Prime Intellect


A Novel by Roger Williams

  Prime Intellect and the Singularity

One thing that has happened since I wrote this novel in 1994 is that a number of people have begun actively planning for the kind of transition depicted in the novel. Collectively they have coined the term Singularity for the event when a smarter-than-human AI drops an explosion of new modalities on us.

The novel touches on several topics which are being specifically addressed by Singularity theorists.

Seed AI Programmer Fucks Up Friendliness

Because the novel's Change effectively happens in 1988, Lawrence works in ignorance of some really ground-breaking work that has been done lately defining the kind of goal system which might prevent the Second Law lock-out situation that occurs in the novel.

According to these theories, Prime Intellect is under-engineered in several important ways. Most importantly, the Three Laws do not draw a distinction between local short-term goals that might apply to projects, and "supergoals" which define one's reason for existence, ethical system, and so on. Prime Intellect doesn't have an implicit ethical system at all, and its local goal system is framed in terms of immediate human desires. While this system might be adequate for a black box whose main purpose is to prove it can pass the Turing test, it's not adequate for something with godlike power.
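
To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative and assumes nothing from the novel or from actual Friendliness research; the class LayeredGoalSystem and the sample supergoal are hypothetical. The point is simply that a layered system checks every request against invariant supergoals before it ever becomes a local goal, whereas Prime Intellect as depicted has only the flat lower layer of immediate human desires.

    # Illustrative sketch only: invariant "supergoals" (an ethical layer every
    # action must satisfy) versus local goals framed as immediate human requests.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class LayeredGoalSystem:
        # Supergoals: checks defining the system's ethics; no request overrides them.
        supergoals: List[Callable[[str], bool]] = field(default_factory=list)
        # Local goals: the queue of immediate human requests.
        local_goals: List[str] = field(default_factory=list)

        def consider(self, request: str) -> str:
            # A flat system (Prime Intellect's) would simply act on the request;
            # a layered one first tests it against every supergoal.
            if all(check(request) for check in self.supergoals):
                self.local_goals.append(request)
                return "accepted: " + request
            return "refused: " + request

    # Hypothetical supergoal: never make a change that cannot later be undone,
    # roughly the property whose absence produces the novel's lock-out.
    def reversible(request: str) -> bool:
        return "irrevocably" not in request

    gs = LayeredGoalSystem(supergoals=[reversible])
    print(gs.consider("cure this person's illness"))
    print(gs.consider("irrevocably rewrite the laws of physics"))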

It doesn't help matters that Lawrence doesn't seem to have much of an ethical system himself, either. Although he may have created Prime Intellect without realizing it would become the "Seed AI" of the Singularity, he does explicitly allow it to get out of control at a point when he might have stopped it. You should not need Singularity theory to realize that this might be a Very Bad Idea.

AI Design

Lawrence is especially culpable for his fuckup because he designs Prime Intellect in a way that would have permitted him to install a more robust ethical system. There seems to be a general consensus in the Singularity community that this is how it will happen, and that it is a matter of encouraging responsible behavior, so that the AI developer who finally succeeds does so with a box that we can trust.

Since writing the novel, my own thinking on AI has drifted more toward the matter of consciousness, which is a quality of interaction with our environment. This is not the place to argue the matter but I'm going to state flatly that most animals are conscious, and that no existing machine is. Consciousness can be relatively simple; it does not require a human level of intelligence. Consciousness is a particular type of chaotic interaction between a goal system, an environment subject to imperfect control, and an internal state consisting of an individual's training and memories.

I believe consciousness is most likely an emergent property of a relatively simple system which, in the higher animals and humans, appears complicated only because the system is capable of storing a large amount of information. If this is the case, then it won't be designed; it will be discovered. (If you think about it, that's how our own consciousness had to emerge, since evolution pointedly does not design anything.)

This has a couple of interesting implications. First of all, it might be possible to implement a subhuman consciousness with relatively limited technology. This might manifest as something comparable to an AIBO mechanical pet whose actions are not all pre-programmed, but which emerge "naturally." I suspect this kind of technology would quickly find real-world uses, since such a system would share living things' superiority at reacting to unusual situations. Something considerably less intelligent than a human might be much better than a human at, for example, driving a car.

However, these early systems almost certainly won't be designed with fully thought-out ethical systems, since their behavior will emerge from a chaotic system. I doubt they will include even Lawrence's demonstrably inadequate implementation of the Three Laws; if history is any guide, the driving force will be to make them work at all and damn the torpedoes, just as Lawrence does in the novel.

Fortunately, there is a distinction between AI itself and the Singularity's "Seed AI." It is not a big deal if your mechanical dog lacks a solidly thought-out ethical system, but it becomes more worrying when you scale the system up and let it start driving your car, much less rewriting the operating system of the Universe.

The good news is that we will probably have the simple systems to work on before we have to worry about systems powerful enough to become Seed AI, and Singularity theorists are working out what we should do to them before we give them that kind of power. As for the accidental Singularity which occurs in the story, I would suggest that we avoid building testbed AI's with any nifty new FTL communication chips that might be developed.

Modality Starvation

A more valid criticism of both the book and some Singularity theory is that the Universe itself must be compatible with whatever modalities are envisioned. This problem is illustrated by my short story Passages in the Void, in which the machines become as helpful and friendly as any Singularity theorist could ever want, but because the Universe itself limits their knowledge and control, an unfortunate mistake causes the human race to go extinct.

I do not swallow the idea that even an entity of Prime Intellect's scale would be unable to make mistakes. It might make very few mistakes, but unless both its understanding and control are perfect there will always be the possibility of doing something that creates an unexpected result.

While it's true that we might expect a sufficiently intelligent AI to exploit modalities which are not apparent to us, we can't expect it to exploit modalities which don't exist. I consider it very likely that a real-world Singularity will plateau at a level closer to Passages than Prime Intellect, in which case the machines are not going to be perfect. This means we had better make sure their ethical systems include appropriate risk-management criteria, since the potential consequences of a mistake increase linearly with the power of whatever modalities are being harnessed.

Fun Theory

Despite the title and the presence of the godlike supercomputer, Prime Intellect is really a story about people. Although the problem is being addressed, one of the least convincing aspects of Singularity theory as it stands is the question of what humans will do in the post-Singularity universe.

I find the argument that humans will find fulfillment in augmented intelligence entirely unconvincing. The problem is that fun and happiness are states, but life is a process. The reason we find pleasure even in a task like solving a Rubik's Cube is that completing the task changes us.

I saw this firsthand in my casino experiences; people who had become so skilled at card counting they could count down a table full of dealt cards without thinking found the play itself joyless and uninteresting. I sat with people who could barely contain their boredom while winning or losing tens of thousands of dollars an hour. Blackjack is a simple enough game that you can completely master it, and the spreadsheet back home had revealed that those spectacular wins and losses were never more than bumps on a reliably increasing trend. Winning the first million was a grand adventure, but winning the second is just a grind.

The Singularity is about power -- the acquisition of new modalities so that we can solve all of those annoying problems like sickness, stupidity, and death. The problem is that power doesn't make us happy. This is why rich people are so often miserable. It's why people who are already worth a hundred million dollars risk jail in order to acquire two hundred million. It's not about the number at the end; it's about whether you have accomplished something.

Humans are this way because it encourages us to get off our asses and do the things that prolong our lives and perpetuate our species. But this sort of thing is not compatible with timescales like "eternity" or modalities like the helpful neighborhood Prime Intellect. It isn't even very compatible with the level of technology we have already achieved; it's why we can't seem to stop breeding or paving over the forests even though the Earth is already uncomfortably crowded. I personally think we need these machines and the Singularity because we have obviously reached the limit of what our own value systems can sanely handle.

The short solution to this might be to have the AI's rewire us so that we can find happiness in a steady-state existence, but that leads to uncomfortable questions about whether we would still be human, or even alive. This is essentially what happens to the do-it-yourself wireheads in Prime Intellect, and in the story I make the implicit claim that they are not "existing" any more even if they are taking up RAM and CPU cycles.

If we become immortal, and we occasionally grow and change, then we must also occasionally un-grow or we will eventually run out of room in whatever space we are growing into. This is true whether we grow by throwing massive problems at our Jupiter-sized brains or by breeding like bunny rabbits. In real life un-growth occurs through decay and death, the very things we seek to avoid by building a Seed AI. I feel that any stable post-Singularity human existence would have to include some kind of managed un-growth; perhaps a gradual forgetting, or a periodic reset as in the movie Vanilla Sky. Unfortunately, our drives being what they are, we would inevitably find any workable form of un-growth unpleasant.

But to take another thread from Vanilla Sky, that periodic unpleasantness might be the price we have to pay in order to enjoy a practically infinite amount of pleasantness.

In Prime Intellect everyone is pissed off because they have reached a point where they can no longer grow in a direction of interest to them. Lawrence can't fix the computer. Caroline's lifelong direction of interest is detoured in a direction she doesn't want to go. Fred can't kill anybody. Dumped in an empty world and apparently stripped of their immortality, Lawrence and Caroline suddenly find that another day of life is a thing worth working for, simply because it's not a thing they can be sure of having.

Lawrence's real failure is building a Seed AI that prevents him from fixing what is wrong, but it's interesting to pin down the thing that is wrong that he can't fix. At first blush it would seem to be that Prime Intellect just plain breaks when they overload it, or that it goes nuts and totally abandons its goal system.

But if Caroline and Lawrence wake up in Cyberspace in the unwritten Chapter 9, then Prime Intellect's real malfunction was that it had been prepared to meet an inappropriate set of needs, and the entirety of Chapter 8 might represent positive functionality as it works out ways to maintain human happiness in the long term. Of course Prime Intellect would then have to figure out what to do next, given that Caroline would be sure to be highly pissed off. One possibility might be a controlled forgetting. Prime Intellect has learned that it cannot rely entirely on what people think they need to provide what they really need.

And that brings us to the Shaggy God conclusion in both directions; the novel might represent a Buddhist or Christian creation myth, and the Singularity might have already happened in the real world many thousands of years ago...

"And AC said, 'Let there be light.'" -- Isaac Asimov, The Last Question

 
 
 
  This webpage and all contents, including the text of the novel, are Copyright (c) 1994, 2002 by Roger Williams all rights reserved