After all, unless androids are built to be deliberately crippled so that we can better enslave them, it seems like their existence would basically make us obsolete. Equality, it seems to me, would pretty swiftly lead to the biological extinction of the human race. Our cultural and intellectual tradition, I'd like to think, would have some merit, and the androids would carry it forward. But most likely either it will turn out that it's not worth building androids, since genetically engineered fleshy people are superior, or else humans will die out in favor of androids. Long-term sharing of the world with a race of intelligent robots doesn't seem realistic to me.

Personally, I'm a bit more skeptical of true robots taking over. First, I suspect (though this is not much more than a hunch*) that artificial intelligence is a lot more difficult than many people suppose. But even granting that, as computers get better and better at emulating human behavior, possibly culminating in true AI (or something basically indistinguishable from it), humankind will get increasingly jittery about the whole business, and there will be some kind of anti-robot populist backlash. The key fact, in my view, is that robot production will be (at least initially) entirely under human control, and that roboticization presents a clear threat to human employment. Of course, it's easy to imagine a scenario where the controls break down and robots proceed to exterminate the human race. The point is that there are quite a few roadblocks in that scenario.
Something that seems far more likely is a kind of evolution where humanity gradually replaces itself with genetically and robotically enhanced post-humans. The incentives pushing rich people toward this kind of enhancement will make it much sneakier and harder to control; in fact, I'd say it's basically inevitable. I'm reminded of a TED talk by Gregory Stock (author of Redesigning Humans) about deliberate changes to human biology and where that might eventually lead:
Now, not everything that can be done should be done, and it won't be done. But when something is feasible in thousands of laboratories across the world, which is going to be the case with these technologies, and there are large numbers of people that see them as beneficial, which is already the case, and when they're almost impossible to police, it's not a question of if this is going to happen, it's a question of when and where and how.

Emphasis mine. Banning these sorts of technologies, as people like Bill Joy suggest, would only push them underground, into the most unscrupulous or corrupt countries, and restrict them to the wealthy. This kind of thing must be confronted head-on if we're interested in anything approximating a just future. If we're worried about inequality now, wait until Bill Gates' grandkids have the calculating power of Mathematica, the strength of a young Jack LaLanne, and the reflexes of a hummingbird.
*Artificial intelligence presents some serious philosophical and mathematical issues. I can't claim more than a passing familiarity with either set (see, for example, Gödel's incompleteness theorems, the Turing test, and John Searle's Chinese room argument), but I am not convinced by any of the objections. I'm a bone-deep metaphysical naturalist, and if "intelligence" can't be simulated with the architecture underlying current technology, then I believe some new technology or system will emerge that will eventually make it possible. Basically, I do not believe that the human mind has any quality that cannot be either copied or substituted by an artificial system. Still, I suspect that creating intelligence from scratch is much more difficult than copying and adapting the existing format of the brain, which is part of the reason why I consider some kind of cyborg revolution more likely.
UPDATE: D'oh! Title fixed.
UPDATE II: Manifest Destiny agrees.