UBIQUITY: What kind of reactions have there been to your article in
Wired?
BILL JOY: In general, the response has been extremely positive. The three most
common themes have been: "You know, I'm glad someone finally wrote this" --
and "What can I do to help?" -- and "Reading this has left me upset!" But of
course there were also some who were skeptical about what I'm saying.
UBIQUITY: How would you characterize the skepticism?
JOY: It comes from people who think that "human cleverness" always wins out
-- though their convictions are usually expressed without any reference to
the facts of the present situation. And I know that some of them never
actually read my paper. Someone merely gave them a one-sentence description
of it -- just enough for them to feel they can rebut my arguments by trotting out
things they think are always true. But this isn't good enough, because
seemingly absolute truths often admit exceptions. And so while I would agree
that, in general, ingenuity can solve many or even most problems, I
think we unfortunately face one case where that's just not true. If we
empower everybody to have the kind of incredible power I discuss in
the paper, then we're unlikely to be able to construct a defense against
catastrophic accidents or against extreme individuals doing crazy things.
UBIQUITY: On that point, how did people feel about your use -- and apparent
endorsement -- of a long quotation from Theodore Kaczynski, the "Unabomber"?
JOY: A lot of people got thrown by it. But look, I'm not saying that
Kaczynski's not insane or that he's not a criminal. What Kaczynski did is absolutely unjustifiable, and I don't agree with the full picture he paints.
All I'm saying is that some of his remarks, and the fact that Ray
Kurzweil quoted them, disturbed me enough to lead me to reexamine some
things I hadn't thought about enough. Just because somebody has been guilty
of bad behavior doesn't mean his ideas should be entirely dismissed: a
position is not responsible for its proponents. In any event, my argument
certainly doesn't need Kaczynski. Many great 20th-century
thinkers, from Bertrand Russell to Lewis Mumford, have written about the
dangers of science when it is used for the wrong purposes.
UBIQUITY: Obviously, there are tradeoffs with regard to any kind of
technology. For example, a new drug will have side effects. Does that point
lessen the impact of your argument?
JOY: No, it doesn't. My concern is specifically that if we create extremely
powerful technology and then we give it to everybody -- just as we've
given everybody a PC and the ability to create viruses -- then somebody will try to do bad things, or do harm accidentally. And if the
ultimate consequence of those bad things is that the biosphere is destroyed
or that enormous numbers of people are killed, then that's unacceptable!
UBIQUITY: What about actions far short of that ultimate consequence?
JOY: There are many other bad scenarios, short of extinction, which too much
power in too many hands will make likely. I would also note that we've got a
funny situation now, where an unacceptable end result is a consequence of a
thousand different things -- all of which are themselves (considered
individually) justifiable. But since the end is both unjustifiable and totally unacceptable, we're simply going to have to make some clear
decisions to leave some options unexplored. We can't just proceed to develop
anything and everything that makes a lot of money, if the end result of all
that development is likely disaster. The marketplace is not a very ethical
mechanism or one mindful of ultimate consequences.
UBIQUITY: Who exactly should get to think about these issues?
JOY: Well, for one thing, we ought to involve philosophers and spiritual and
religious people, so that the discussion can find the right level. I
definitely don't think it could be left to scientists and technologists
alone because the problem and its solution will affect our future in a very
fundamental way. We must have discussions involving people from all
parts of society to decide what to do in a difficult situation like the one we face.
We shouldn't simply go ahead and turn ourselves into robots just because we
can make money by doing so.
UBIQUITY: Do you think the issues are ready to be voted on, democratically,
by all the people?
JOY: No, I don't think they are, not yet. I don't think most people even
take the problems seriously. A discussion will need to happen over the next
10 years. But we can't wait forever to act, because if we begin creating
these super-intelligent, super-powerful robots, it's not at all clear that
we can coexist with them. We'll be basically bringing a predator into a
world that would put us in second place, and I think evolution would view us
as inferior species. I just don't know what would happen to us. We need to
decide whether we want to let R&D; go wherever it wants. Personally, I think
it will have to be curtailed in some way.
UBIQUITY: Won't people think that course a little drastic?
JOY: We'll see. People may take this problem as a joke or a Terminator movie
or some kind of sci-fi thing -- where the hero discovers the problem,
struggles with it, and vanquishes it, and the audience sees a happy ending.
But in the natural world plots don't always have happy endings. Look
at the fossil record and you'll see that many, many species are extinct. It's
not hard to see that, if you unleash all the power of the Universe -- all
the powers of physics and chemistry and the information sciences -- you're
unleashing an incredible, unimaginable amount of power. The atom bomb
was only a dry run for what could ultimately happen, because the bomb
unleashed the power of just one particular kind of chain reaction. But these
viruses we're talking about now are self-replicating chain reactions that can transform everything on Earth.
UBIQUITY: But forgetting about science fiction, isn't there a way in which
humans really do change for the better because of foods and drugs and
tools and so on?
JOY: Well, the truth is that most of the problems that people propose to
solve with these advanced technologies also have simpler solutions in
existing and less-harmful technologies. Some of the work of Amory B. Lovins,
for example, in books like Natural Capitalism, points out that most
human-engineered systems are incredibly inefficient. We can do a lot more to make people's lives better and improve their health by becoming more resource-efficient.
UBIQUITY: With the help of technology? Or without it?
JOY: Well, on that point, someone came up to me after a talk I gave
and told me, "I have a way of using nanotechnology to clean water." My
answer was: "Well, I have a way of cleaning water with a solar panel, a
little pump, a light bulb and a piece of mesh fiber." The knee-jerk
high-tech tendency is, obviously, to do things that make profits, are protectable by patents, and are often very complex. But there are often much simpler ways of dealing with many of these issues.
UBIQUITY: Is part of the problem, though, that we're not really talking
about some single system, but about thousands and thousands of
systems and subsystems -- and that it's very hard for anyone -- whether
expert or non-expert -- to think of a way of undoing or reconfiguring all
the interconnections?
JOY: You're right; that's a key problem, made worse because people tend to
be fatalistic about the overall system being "beyond our control." But I
would point out -- as Aristotle did -- that what distinguishes us
from the animals is that we have reason. And with the Internet and
other collective communication tools, we can use our reason to make
the decision to change some of the directions in which we're heading. I
grant you that some of these forces may be hard to stop -- but if we
collectively decide we don't like where they're taking us, then we can come
up with alternatives.
UBIQUITY: How will the alternatives be developed?
JOY: Environmentalists have pointed out that we need to change the
economic signals given to companies. Current signals are often
mindlessly anti-environment.
UBIQUITY: An example?
JOY: One example is that we have to change the economic signals so that the
forests become worth as much standing as they are when cut, by measuring the true
value of the larger ecosystem, and not just tree by tree.
UBIQUITY: How would that be accomplished?
JOY: We know that we can't enforce everything through regulations, but I can
imagine a world where the economic signals are changed in a way that
encourages the kind of behavior that leads to survival. Extinction is a
pretty good signal, but it takes too long to arrive. So let's have
smart people figure out how to send those signals to us earlier. And I don't
think that's impossible. I have, for example, suggested that companies that
wish to commercialize dangerous technologies have to take out insurance
against large-scale risks. This is a private sector solution to a big
problem: without such a mechanism the taxpayers end up paying for the cleanup.
UBIQUITY: Can you imagine a radical solution to all of this? Ivan Illich,
for example, seemed able to imagine a world without airplanes and
automobiles and so forth. A world in which all of the developing countries
-- though maybe not the developed ones -- deliberately kept their systems
centered around the bicycle as the basic means of transportation. Can you
imagine a world like that?
JOY: Well, airplanes and automobiles are not a problem of the same
dimensions I'm talking about. The real problem I'm worried about is
self-replication, combined with the dematerialization of technology into
bits and information -- bits and information that people can reconfigure for
destructive purposes. And I can imagine a world where we prevent everybody
from getting that kind of power.
UBIQUITY: How? What could you offer as an example?
JOY: Consider the "Love Bug" virus. The kind of sloppiness it represents is something that we just can't afford in physical-world technologies. Microsoft didn't have any economic signals telling it that its software should be of higher quality, nor did the company feel it was likely to be held liable for the ecological consequences of deploying software that was well known to have problems with viruses -- or it would certainly have acted differently. We have to put the signals in so that we don't see
similarly sloppy kinds of collective behavior in the biological, physical,
and robotics sciences.
UBIQUITY: Are you saying that the essence of the problem is technological
sloppiness rather than a failing in human character?
JOY: No, I'm not. Technologists will always make mistakes; the problem is a
system that does not take ethical responsibility for consequences. This is
where I part ways with J. Robert Oppenheimer, whom I quoted in the article
and who essentially said: We're scientists -- we just do our work and we're
not responsible for the consequences! Well, I don't think that's a tenable
position. Clearly, when what you're doing is empowering so many people to do such great harm, I don't think you can avoid responsibility for
the consequences of what those people do.
UBIQUITY: How far does that principle extend?
JOY: Pretty far. I think now we've even come to recognize that people who
make, say, submachine guns, have a responsibility for the consequences of
their use, because we have no reasonable use for such things in a civil
society. We can argue about handguns, but the argument will really hinge on
the extent of the danger handguns pose. If enough people got shot with
handguns, eventually we would change the rules of their ownership.
UBIQUITY: How would one make such a determination?
JOY: It's a threshold question. When we feel that these things threaten our
personal safety, in the civilization that we're in, then we act. But we
act very late. With genetic engineering, nanotech, and robotics, we're
creating a situation where the eventual consequences will clearly be bad, so
we're going to have to act early because we can't just wait until we've
given everybody the ability to do bad things and only then try to put
a lid on their activity. By that time we may not have any ability to insert
effective controls. So we're going to have to imagine the consequences and
act in advance. That's one of the reasons why I worked so hard on the
Wired piece. It's much, much harder to get people to act in
advance -- to give something up before they've seen the
consequence. But if we wait until everybody has the ability to do
these things, it will be too late!
UBIQUITY: With regard to the issues you've raised, have you personally done
anything different in your own life, in terms of projects or anything of
that order?
JOY: Well, I'm personally working on reliable computing, and reliability is
a defensive technology that has to be perfect.
UBIQUITY: Perfect?
JOY: Yes. Let's look at the Strategic Defense Initiative (SDI), the so-called
"Star Wars." Now I don't think SDI can work, but let's suspend disbelief for
a moment and say: All these missiles are coming in, but we have this technology that can shoot them down, and what we need is the software that lets us not miss any incoming missiles! So what happens? If I'm the
defense, I need extremely high quality software. But if I'm the offense,
what do I do? I just need to unleash overwhelming force. I don't need
to be perfect. I just need to overwhelm the defense. So, reliable computing
is a defensive technology, and it needs to be looked at the same way you
look at the safety of your house. Burglars can go along the streets hoping
to find a house where the burglar alarm isn't working, because probably most
burglars don't really care which house they burglarize. The moral? Defense depends on reliable software in a way that offense does not.
UBIQUITY: And you're on the defense team?
JOY: That's right. What I've decided to work on is technology that can be
used in situations where human life is at stake -- where we need to defend
against something. You see, things like monitoring systems, 911 systems,
healthcare, air traffic -- all need this kind of reliability. Yet up till
now, software designers have been doing a very poor job of it, because
usually if some piece of hardware fails the whole system fails. Besides
that, we haven't (until recently) known how to write systems where the
system can work even if the software is broken. But now there are techniques
for doing that, and these provide large relative advantages to the defense.
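
[To make that kind of technique concrete: below is a minimal sketch, assuming a simple redundancy-with-fallback pattern, of how a system can keep working even when one software path is broken. The function names are hypothetical and do not come from any particular product Joy mentions.]

```python
# Minimal sketch: redundancy with graceful degradation.
# All names here are illustrative, not from any real system.
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

def first_successful(implementations: Iterable[Callable[[], T]], fallback: T) -> T:
    """Try each redundant implementation in turn; if every one fails,
    degrade to a safe fallback instead of letting the whole system go down."""
    for attempt in implementations:
        try:
            return attempt()
        except Exception:
            continue  # this path is broken; try the next one
    return fallback

# Illustrative use: a primary path that has failed and a backup that works.
def primary_reading() -> float:
    raise RuntimeError("simulated fault in the primary path")

def backup_reading() -> float:
    return 42.0

print(first_successful([primary_reading, backup_reading], fallback=0.0))  # prints 42.0
```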
UBIQUITY: Do you feel guilty about all the great work for which you're so well known?
JOY: I certainly feel uneasy about it. At the time I was doing it, I
never had any idea about the degree to which all this was going to turn out
to be a problem. Of course, I don't think most people did -- and I don't
think most people do right now. It used to be the case that people who
wanted to practice these things did so in laboratories, in groups with other
scientists, all of whom practiced a code of ethics. They would talk about
what they were doing and would share their joint concerns with their peers.
But now this stuff just becomes information that can be sent anywhere via
the Internet, allowing people to do things without ever having had a
conversation, without existing in that communal context. So somebody can be
a loner in a very distant place -- and smoke something or be psychotic and
have no social control at all. A lot of them are taught to hack around and
certainly there are points in one's life where it's fun to try things. But
for the adults the question becomes: Just how much power do we put in front
of someone who's goofing around? If it's the power to change everything on
the planet, then I don't think there's any argument. That's just way over the top!
UBIQUITY: What's the answer then?
JOY: We're just going to have to build a social contract around the
information technologies. We're just beginning the Information Age, yet
we're going to need this right away, because things are happening now so
much more quickly than they ever did before. More quickly all the time.