Feb 17, 2011

The coming human obsolescence

Yglesias writes:
After all, unless androids are built to be deliberately crippled so that we can better enslave them, it seems like their existence would basically make us obsolete. Equality, it seems to me, would pretty swiftly lead to the biological extinction of the human race. Our cultural and intellectual tradition, I’d like to think, would have some merit and the androids will carry it forward. But most likely it either will turn out that it’s not worth building androids since genetically engineered fleshy people are superior, or else humans will die out in favor of androids. Long-term sharing of the world with a race of intelligent robots doesn’t seem realistic to me.
Personally, I'm a bit more skeptical of true robots taking over.  First, I suspect (though this is not much more than a hunch*) that artificial intelligence is a lot more difficult than many people suppose.  But even granting that, as computers get better and better at emulating human behavior, possibly culminating in true AI (or something basically indistinguishable from it), humankind will get increasingly jittery about the whole business, and there will be some kind of anti-robot populist backlash.  The key fact in my view is that robot production will be (at least initially) entirely under human control, and roboticization presents a clear threat to human employment.  Of course, it's easy to imagine a scenario where the controls break down and robots proceed to exterminate the human race.  The point is that there are quite a few roadblocks in that scenario.

Something that seems far more likely is a kind of evolution where humanity gradually replaces itself with genetically and robotically enhanced post-humans.  The incentives for rich people to carry out this kind of enhancement will make it much sneakier and more difficult to control—in fact, I'd say it's basically inevitable.  I'm reminded of a TED talk by Gregory Stock (author of Redesigning Humans) about deliberate changes to human biology and where that might eventually lead:
Now, not everything that can be done should be done, and it won't be done.  But when something is feasible in thousands of laboratories across the world, which is going to be the case with these technologies, and there are large numbers of people that see them as beneficial, which is already the case, and when they're almost impossible to police, it's not a question of if this is going to happen, it's a question of when and where and how.
Emphasis mine.  Banning these sorts of technologies, as people like Bill Joy suggest, would only push them underground and to the most unscrupulous or corrupt countries, and restrict them to the wealthy.  This kind of thing must be confronted head-on if we're interested in anything approximating a just future.  If we're worried about inequality now, wait until Bill Gates' grandkids have the calculating power of Mathematica, the strength of a young Jack LaLanne, and the reflexes of a hummingbird.

*Artificial intelligence presents some serious philosophical and mathematical issues.  I can't claim more than a passing familiarity with either set (see for example Gödel's incompleteness theorems, the Turing test, and John Searle's Chinese room argument), but I am not convinced by any of the objections. I'm a bone-deep metaphysical naturalist, and if "intelligence" can't be simulated with the architecture underlying current technology, then I believe some new technology or system will emerge that will eventually make it possible. Basically, I do not believe the human mind has any quality that cannot be either copied or substituted by an artificial system. Still, I suspect that creating intelligence from scratch is much more difficult than copying and adapting the existing format of the brain, which is part of the reason I consider some kind of cyborg revolution more likely.

UPDATE: D'oh! Title fixed.

UPDATE II: Manifest Destiny agrees.


  1. some of us are already speaking out! it's just a matter of time before people wise up and start trying to protect their jobs, maybe robotophobia will be the end of xenophobia...

  2. I figured you'd have something to say about this one :-) I'm just hoping we can avoid the gray goo hypothesis.

  3. It's the hubris of man to think he can recreate a likeness of the human brain. Like Icarus' wings, this too shall fail. B

  4. No offense, but it seems to me that this kind of thing has nothing to do with hubris, or hubris is ancillary at most. Scientific hypotheses are either true or not. One might say it's hubris to think that the set of possible actions is in any way influenced by man's opinions, be they hubristic or nay. What about the brain makes it impossible to copy? It's fantastically complicated to be sure, but it's still just a particular arrangement of atoms. They've already succeeded with a goodly chunk of a rat's brain, and that was more than three years ago.

  5. I'm also reminded that while Icarus drowned, his father Daedalus succeeded in flying. I'm not sure who would be a better example. Maybe Milton's Lucifer?

  6. Don't know where to begin. Recreating a particular arrangement of atoms, even if it could be done, does not a human brain make. There is no scientific hypothesis that states if x, then the human brain. It is too complicated. That's my point. And in my humble opinion, it is hubris.

  7. Is it not also hubris to assume the human brain to be too complicated to replicate? Seems like you're saying that a human is more than a collection of atoms; it takes some hubris to say that we are in some fundamental way distinguished from everything else in the universe. SR

  8. Let's set the hubris aside; I don't think that's adding much to the discussion. I don't think you're accurately stating the scientific hypothesis. I can think of two obvious ones: 1) it is possible to completely replicate all functions of the human brain with artificial components; 2) it is possible to successfully recreate the architecture of the brain inside a computer.

    Now there is definitely some bleed-through with philosophy when we ask how we're going to test these—there is no universally accepted standard for true AI. But I think the Turing test will serve in a pinch. Under that test, the candidate AI has to be indistinguishable from a true human candidate to a blinded observer. True, it's not as cut-and-dried as an NMR spectrum, but it's fundamentally testable, and there are plenty of other tests one might pick if that one doesn't sound satisfying. We're not trying to undermine the foundations of science like the Discovery Institute.