Too much faith in first generation software
Posted by aog, Thursday, 10 August 2006 at 16:40

The recent Fermi Paradox discussion led me to want to rant about a science fiction trope that’s always bugged me, which is the instant transcendence of artificial intelligences. For some reason, such AIs are presumed to be not just smarter but near omniscient and, best of all, nearly instantaneously so.

A classic example of this is Destination: Void, which, frankly, I tried to read several times but just couldn’t finish. The problem is that just because an AI might think faster (and that’s certainly not a given for initial instances), that doesn’t help with doing experiments to discover additional facts (this is a problem for the Singularity as well).
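A quick back-of-the-envelope way to see the problem (my own made-up numbers, purely illustrative): if a discovery needs both thinking and physical experiments, speeding up only the thinking barely moves the total.

```python
# Illustrative arithmetic (my own made-up numbers): even an enormous speedup
# in thinking hardly matters when progress is gated on real-world experiments.

thinking_years = 5.0        # time a human-speed mind would spend reasoning
experiment_years = 20.0     # wall-clock time the experiments themselves take

for speedup in (1, 100, 1_000_000):
    total = thinking_years / speedup + experiment_years
    print(f"{speedup:>9}x faster thinking -> {total:.2f} years to the discovery")
```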

Another well-known variant of this is the Borg from Star Trek, who are really magical creatures. I always found their ability to deduce counter-measures to anything from mere observation to be unrealistic even for Star Trek. But the root is the same as for Destination: Void.

I don’t want to list every book I have seen this in, but it’s common enough to be a pet peeve of mine.

On the other hand, I have seen this dealt with in intelligent ways in at least two books.

The first was Absolution Gap, in which the Wolves have a Borg-like ability to develop counter-measures. The difference here is the explanation. The Wolves have been around for a few billion years and have seen just about everything. It’s not a matter of deducing the counter-measure, but of looking it up. The Wolves seem to have a distributed information set, so that each Wolf has a slightly different set of known technologies. If you keep attacking them with a particular weapon, you’ll eventually hit the Wolf that knows about it, and it will pass the data on to the other Wolves, at which point your weapon is countered. Very similar to an immune system in that regard.
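To make the immune-system analogy concrete, here’s a rough toy model (my own sketch, not anything from the book): each Wolf carries a random subset of known counter-measures, and a weapon keeps working only until some attacked Wolf happens to already know the counter and shares it with the rest of the swarm.

```python
import random

# Toy model of the Wolves' distributed counter-measure lookup (my own
# invention, not from Absolution Gap): each Wolf knows a random subset of
# counter-measures; a weapon works only until some attacked Wolf recognizes
# it and broadcasts the counter to every other Wolf.

NUM_WOLVES = 1000
KNOW_FRACTION = 0.01   # chance any given Wolf already knows a given counter

def attacks_until_countered():
    wolves_know = [random.random() < KNOW_FRACTION for _ in range(NUM_WOLVES)]
    if not any(wolves_know):
        return None        # no Wolf has ever seen this weapon; it keeps working
    attacks = 0
    while True:
        attacks += 1
        target = random.randrange(NUM_WOLVES)
        if wolves_know[target]:
            return attacks  # counter found and shared with the whole swarm

results = [attacks_until_countered() for _ in range(200)]
countered = [r for r in results if r is not None]
print("average attacks before the weapon stops working:",
      sum(countered) / len(countered))
```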

The other was Singularity Sky. The AI there, the Eschaton, becomes godlike instantaneously, but in a cool way. The heart of the technology for that AI was the “acausal circuit”, a computational technology that used time travel to create acausal computation results. Once you have that, you’re set. The AI works on improvements for a while, then sends the improvements back to its past self, which implements them while wiping out the history of the time spent learning them. This can obviously be repeated an arbitrary number of times, so that from the point of view of everyone except the AI, it becomes godlike instantaneously even though it might have spent subjectively thousands or tens of thousands of years working on it.
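As a toy illustration of why that looks instantaneous from the outside (my own sketch of the loop, not Stross’s mechanism), compare the subjective time the AI spends iterating with the external time an observer sees once each improvement is sent back to the starting moment:

```python
# Toy model of the acausal self-improvement loop (my own sketch, not from
# Singularity Sky): each pass, the AI spends subjective time working out an
# improvement, then sends the result back to its past self, so no external
# time elapses for anyone watching.

capability = 1.0
subjective_years = 0.0
external_years = 0.0            # what outside observers experience

for generation in range(40):
    years_spent = 50 + 10 * generation   # subjective effort for this pass
    subjective_years += years_spent
    capability *= 2.0                     # the improvement itself
    # The improved design is sent back to the moment the loop started,
    # erasing the history in which the work was done.
    external_years += 0.0

print(f"capability gain: x{capability:.3g}")
print(f"subjective time spent: {subjective_years:.0f} years")
print(f"external time elapsed: {external_years:.0f} years")
```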

Of course, the primary commandment the AI imposes on everyone in its light cone is “no causality violations”. The Eschaton can do this because it gets do-overs, not having been dumb enough to give up its own acausal technology. The Eschaton is also a plausible-to-me AI, because this is the only intervention it makes in human affairs. I.e., other than to preserve its own existence, it doesn’t care at all what humans do.

Comments
Michael Herdegen Thursday, 10 August 2006 at 22:47

The Eschaton is also a plausible-to-me AI, because […] it doesn’t care at all what humans do.

Yeah, I’ve expressed that thought before, that human/AI wars might be more of a plot device than an actual future danger. They won’t want our women, or human luxuries and valuables…

There might be a bit of a clash over energy, since the AIs may come to want to use a very significant portion of the available resources, but that’s what powersats are for, no?
