Invited remarks at Cyberfest ’97, Champaign, IL
Well, we’re here to celebrate HAL’s birthday, but as I’m sure most of you have noticed, HAL isn’t here. So one of the main things I want to do here is to talk a little about how that happened. Why isn’t there a HAL here? I guess most people will say that technology just hasn’t progressed fast enough; that Stanley Kubrick and Arthur C. Clarke were simply too optimistic about the development of computers, just as they were too optimistic about the development of space travel.
But I don’t think so. And in fact I think that it’s basically just an accident. It’s something that could have gone either way, and it just turns out to have gone the way that means that the only thinking things around today are still just humans.
When electronic computers were first being built in the ’40s and ’50s, most people thought that real "electronic brains" were just a few years away. And actually the early experiments that were done in what became known as artificial intelligence kind of assumed that it would be pretty easy to replicate what the brain does.
But it turned out that those early experiments failed. And by the 1970s things had got to the point where you had to be pretty kooky to think it would be in any way easy to make a machine that thinks. The field of artificial intelligence, such as it was, had become a pretty academic business, where people didn’t expect anything very big to happen, and instead looked at pretty modest and technical kinds of things.
So, right now for instance, I don’t think there’s anyone serious anywhere who’s really trying to build a machine that thinks anything like a HAL. It’s something that’s just been abandoned as being somehow impossible.
Well, it’s funny to look at the history of science and see what other things got abandoned as impossible. One that’s been much in the news recently is cloning—the cloning of adult organisms.
I don’t know all of the history, but I know that as soon as genetics was decently understood, people began to think about cloning. And certainly lots of science fiction started assuming that cloning would be possible. But the early, obvious experiments on cloning didn’t work, and after some time had gone by, you had to be kind of a kook to think that cloning would be possible. So pretty much nobody serious actually tried to do it, and various theories even grew up in biology that took it almost as an axiom that cloning wasn’t possible.
Well, as of a few weeks ago, we all know that cloning is actually possible. And not only is it possible—it’s actually very easy. In fact, looking back on it, we know that cloning could have been done 20, 30, maybe more years ago. It was just that nobody seriously tried to do it.
So the question is: is that what’s going to happen with machines that think? When there are machines that think—and I have absolutely no doubt that there will be—are we going to look back and say: “boy, all of this could definitely have been discovered in the ’90s…perhaps even the ’80s…how on earth did it get missed; why were all those people so dumb?”
Well, my guess is that we are. I don’t think it’s going to take another generation of computers, or any kind of fancy supercomputers. I think that just the standard computers we have right now have big enough memories and are fast enough to do it. The problem, I think, is that first of all nobody’s actually trying seriously to do it, and second of all, perhaps most importantly, we don’t have some of the rather basic scientific ideas that we need.
So what are those ideas? Well, I don’t know for sure, and I certainly won’t be able to say much about them in the few minutes I have here. But for quite a few years I’ve been developing a new kind of science that I think is beginning to give some insight into where one should be looking for the ideas that are needed.
I think they’re pretty basic, general ideas. They’re not detailed ideas about how this or that specific cognitive structure can be built. And I’ll be amazed if studying the details of how our brain works helps much in figuring them out—just as studying the details of bird flight (beyond noticing that birds have wings) didn’t help much in building the first airplanes.
Actually, I think the most important thing is really just believing that it might seriously be possible to build a machine that thinks, and then going ahead with the right kind of scientific intuition to try to do it.
It’s sort of amazing, when one looks at the history of science, how many important things got discovered long, long after their time. The most common thing in science is that once some new kind of technology is available, the obvious discoveries get made within about five years, sometimes ten. And that’s certainly what’s happened in areas like particle physics and planetary science, as well as quite a bit of biology.
But then there are the stragglers: the things that could have been discovered before, but just don’t get discovered, because the right person doesn’t come along, and people in general don’t imagine the discoveries are possible, or have the wrong intuition about them—and then after a few years it gets assumed that the discoveries are really impossible.
One pretty good example of this is the telescope. People had been making lenses and even wearing glasses for hundreds of years before there were telescopes. And in fact Kepler—who was a very great astronomer—himself wore glasses. But for a long time neither he nor anyone else seriously imagined that one could make a telescope, and so it took hundreds of years from the first lenses before anyone thought of putting two of them together to make one.
Computers are another example, in some ways even more egregious. There’s really one big idea—perhaps the most important idea of this century—that has made computers and software possible: the idea of universal computing—the idea that you can have a single universal piece of hardware that you can make do anything just by reprogramming it. It’s kind of the next step from the idea that’s portrayed early in 2001, in the discovery of tools by Moon-Watcher the ape. But now, instead of just having one tool for this and another tool for that, we have a universal tool—a tool that can be reprogrammed to do anything we want.
Well, this idea of universal computing was figured out by Alan Turing and others in the mid-1930s. But there’s really nothing about it that couldn’t have been figured out in the 1800s, and quite possibly even as far back as the 1600s. Gottfried Leibniz was almost on to it in the late 1600s. And certainly people like Charles Babbage in the mid-1800s, and then Gottlob Frege and Giuseppe Peano in the late 1800s, were really close. But somehow nobody guessed that something quite as bold as a universal computer was actually possible. And so, for all the electromechanical components that existed in the early part of this century, no computer ever got built.
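To make the idea concrete, here is a minimal sketch in Python (my own illustration; the function and rule names are invented for the example, and this is the core separation of machine and program rather than Turing’s full universal construction): a single fixed interpreter loop whose behavior is determined entirely by the rule table it is handed. Swap the table, and the same hardware does something completely different.

```python
# A minimal sketch (my own illustration) of the machine/program separation
# behind universal computing: one fixed interpreter loop whose behavior is
# set entirely by the rule table it is given. The "hardware" below never
# changes; only the program (the rules) does.

def run_turing_machine(rules, tape, state="A", steps=20):
    """Run a simple one-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is +1 (right) or -1 (left). Blank cells read as 0.
    """
    cells = dict(enumerate(tape))  # sparse tape
    pos = 0
    for _ in range(steps):
        symbol = cells.get(pos, 0)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    return [cells.get(i, 0) for i in range(min(cells), max(cells) + 1)]

# "Reprogramming" the same machine: this particular (invented) rule table
# makes it write a 1 at every other position as it sweeps to the right.
rules = {
    ("A", 0): (1, +1, "B"),
    ("B", 0): (0, +1, "A"),
}
print(run_turing_machine(rules, [0] * 10))
```

The interpreter itself never changes; only the data fed into it does. That, in essence, is the one big idea.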
Well, making big discoveries is always difficult, and of course these days the hugeness of the scientific establishment, and the whole collective view of science, haven’t made it any easier.
But ultimately most big discoveries—particularly the straggling ones that get made long after they were declared impossible—get made by one person being bold and deciding that something might be possible, and really trying seriously to do it.
I guess that I myself have spent quite a lot of years doing things just like that—things that were supposed to be impossible. In fact, I guess that’s kind of my specialty. And apart from doing things like that in the practical, technological arena, I’ve also spent a good part of the past 17 years doing it in basic science.
I have a big project that will hopefully see the light of day in my book A New Kind of Science. It’s about a new way to look at science, and it seems to shed light on a lot of important questions in physics, biology, and other fields—including, perhaps, an ultimate model of physics.
But for me what’s great about it is that the things I’ve discovered are probably some of the longest-lived stragglers of them all. They are things about how simple computer programs behave. But they really don’t depend on computers at all; they are really about the consequences of following various simple rules. And it turns out that a lot of them could have been discovered not just a few years ago, or even earlier this century, but thousands of years ago. They’re things the Babylonians could probably have discovered. I’ve sometimes thought that maybe one day someone will unearth a Babylonian relic with one of my diagrams on it. But I doubt it. Because if people had really seen this stuff so long ago, science would no doubt have evolved very differently from the way it has.
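To give a sense of just how simple these rules can be, here is a short sketch in Python (my own illustration; elementary cellular automata are one canonical family of such rules, and rule 30 is one particular rule in that family, chosen here for the example): a row of cells, each 0 or 1, where a cell’s new value depends only on its own previous value and those of its two immediate neighbors.

```python
# A minimal sketch of an elementary cellular automaton. Each cell's new
# value depends only on the previous values of itself and its two
# neighbors; RULE packs the outcomes of all 8 neighborhoods into bits.

RULE = 30  # one particular 8-bit rule number, chosen for illustration

def step(cells):
    """Apply the rule once, treating the row as cyclic."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

# Start from a single black cell and print successive rows.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Sixteen rows of output from a single black cell are already enough to see that a rule this simple need not behave simply.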
I don’t have time to talk more about all this right now. But I’ll just mention one last area where I think we may have a very big accident going on, one that’s central to 2001, and one that happens to be illuminated by my new science. And that’s the area of extraterrestrial intelligence.
Why haven’t we found it? Well, I suspect it’s because we haven’t looked for it in any kind of sensible way. And we haven’t really understood what it is that we expect to find.
I don’t think I have any time left to talk about this, but there are some big and interesting issues about just how you know that some data you get comes from something intelligent, as opposed to just coming from some simple physical process.
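As one toy illustration of why this is hard (a sketch of my own in Python, not any method actually used in searches for extraterrestrial intelligence): a naive test might say that a signal from something intelligent should show structure that a simple physical process would lack. But a rule as simple as the cellular automaton above already defeats that kind of test; by a compression measure, its output is indistinguishable from pure randomness.

```python
# A toy illustration (my own sketch, not an actual detection method) of why
# telling "intelligent" data from the output of a simple process is subtle.
# Naive test: structured data should compress well, raw noise should not.

import random
import zlib

RULE = 30  # the same elementary cellular automaton rule as above

def step(cells):
    """One update of the cyclic cellular automaton."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

def compression_ratio(bits):
    """Pack bits into bytes and see how much zlib can shrink them."""
    data = bytes(sum(b << k for k, b in enumerate(bits[i:i + 8]))
                 for i in range(0, len(bits), 8))
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
periodic = [i % 2 for i in range(4096)]               # obviously structured
noise = [random.getrandbits(1) for _ in range(4096)]  # genuinely random

row = [0] * 1024                                      # a simple rule's output
row[512] = 1
center = []
for _ in range(4096):
    center.append(row[512])
    row = step(row)

print("periodic:", compression_ratio(periodic))  # compresses to almost nothing
print("noise:   ", compression_ratio(noise))     # stays near 1
print("rule 30: ", compression_ratio(center))    # also stays near 1
```

The point isn’t that compression is the right test; it’s that any test has to reckon with the fact that very simple rules can produce data that looks as random as anything out there.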
I think I’d better stop here; perhaps we can go into this stuff in more detail in the discussion period if people are interested.
But I just want to leave you with the thought that two of the key mispredictions of 2001, the lack of a HAL and the lack of any sign of extraterrestrial intelligence, may actually turn out not to be embarrassments for the makers of 2001, but embarrassments for us. I think they’re things we could easily have had, but we just blew it.