
The coming human obsolescence

Yglesias writes:
After all, unless androids are built to be deliberately crippled so that we can better enslave them, it seems like their existence would basically make us obsolete. Equality, it seems to me, would pretty swiftly lead to the biological extinction of the human race. Our cultural and intellectual tradition, I’d like to think, would have some merit and the androids will carry it forward. But most likely it either will turn out that it’s not worth building androids since genetically engineered fleshy people are superior, or else humans will die out in favor of androids. Long-term sharing of the world with a race of intelligent robots doesn’t seem realistic to me.
Personally, I'm a bit more skeptical of true robots taking over.  First, I suspect (though this is not much more than a hunch*) that artificial intelligence is a lot more difficult than many people suppose.  But even granting that it's possible, as computers get better and better at emulating human behavior, possibly culminating in true AI (or something basically indistinguishable from it), humankind will get increasingly jittery about the whole business, and there will be some kind of anti-robot populist backlash.  The key fact in my view is that robot production will be (at least initially) entirely under human control, and roboticization presents a clear threat to human employment.  Of course, it's easy to imagine a scenario where the controls break down and robots proceed to exterminate the human race.  The point is that there are quite a few roadblocks standing in the way of that scenario.

Something that seems far more likely is a kind of evolution where humanity gradually replaces itself with genetically and robotically enhanced post-humans.  The incentives for rich people to pursue this kind of enhancement will make it much sneakier and harder to control than robot production; in fact, I'd say it's basically inevitable.  I'm reminded of a TED talk by Gregory Stock (author of Redesigning Humans) about deliberate changes to human biology and where that might eventually lead:
Now, not everything that can be done should be done, and it won't be done.  But when something is feasible in thousands of laboratories across the world, which is going to be the case with these technologies, and there are large numbers of people that see them as beneficial, which is already the case, and when they're almost impossible to police, it's not a question of if this is going to happen, it's a question of when and where and how.
Emphasis mine.  Banning these sorts of technologies, as people like Bill Joy suggest, would only push them underground and to the most unscrupulous or corrupt countries, and restrict them to the wealthy.  This kind of thing must be confronted head-on if we're interested in anything approximating a just future.  If we're worried about inequality now, wait until Bill Gates' grandkids have the calculating power of Mathematica, the strength of a young Jack LaLanne, and the reflexes of a hummingbird.

*Artificial intelligence presents some serious philosophical and mathematical issues.  I can't claim more than a passing familiarity with either set of issues (see, for example, Gödel's incompleteness theorems, the Turing test, and John Searle's Chinese room argument), but I am not convinced by any of the objections. I'm a bone-deep metaphysical naturalist, and if "intelligence" can't be simulated with the architecture underlying current technology, then I believe some new technology or system will emerge that will eventually make it possible. Basically, I do not believe that the human mind has any quality that cannot be either copied or substituted by an artificial system. Still, I suspect that creating intelligence from scratch is much more difficult than trying to copy and adapt the existing architecture of the brain, which is part of the reason why I consider some kind of cyborg revolution more likely.

UPDATE: D'oh! Title fixed.

UPDATE II: Manifest Destiny agrees.

Comments

  1. Some of us are already speaking out! It's just a matter of time before people wise up and start trying to protect their jobs; maybe robotophobia will be the end of xenophobia...

  2. I figured you'd have something to say about this one :-) I'm just hoping we can avoid the gray goo hypothesis.

  3. It's the hubris of man to think he can recreate a likeness of the human brain. Like Icarus' wings, this too shall fail. B

  4. No offense, but it seems to me that this kind of thing has nothing to do with hubris, or hubris is ancillary at most. Scientific hypotheses are either true or not. One might say it's hubris to think that the set of possible actions is in any way influenced by man's opinions, be they hubristic or nay. What about the brain makes it impossible to copy? It's fantastically complicated to be sure, but it's still just a particular arrangement of atoms. They've already succeeded with a goodly chunk of a rat's brain, and that was more than three years ago.

  5. I'm also reminded that while Icarus drowned, his father Daedalus succeeded in flying. I'm not sure who would be a better example. Maybe Milton's Lucifer?

  6. Don't know where to begin. Recreating a particular arrangement of atoms, even if it could be done, does not a human brain make. There is no scientific hypothesis that states if x, then the human brain. It is too complicated. That's my point. And in my humble opinion, it is hubris.
    B

  7. Is it not also hubris to assume the human brain to be too complicated to replicate? Seems like you're saying that a human is more than a collection of atoms; it takes some hubris to say that we are in some fundamental way distinguished from everything else in the universe. SR

  8. Let's set the hubris aside; I don't think that's adding much to the discussion. I don't think you're accurately stating the scientific hypothesis. I can think of two obvious ones: 1) it is possible to completely replicate all functions of the human brain with artificial components; 2) it is possible to successfully recreate the architecture of the brain inside a computer.

    Now there is definitely some bleed-through with philosophy when we ask how we're going to test these, since there is no universally accepted standard for true AI. But I think the Turing test will serve in a pinch. With that one, the candidate AI has to be indistinguishable from a true human candidate to a blinded observer (a rough sketch of such a blinded trial follows at the end of the comments). True, it's not as cut and dried as an NMR spectrum, but it's fundamentally testable, and there are a lot of other tests one might pick if that one doesn't sound satisfying. We're not trying to undermine the foundations of science like the Discovery Institute.

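Since the last comment leans on the Turing test as the working standard, here is a minimal sketch of what such a blinded trial could look like, written in Python purely for illustration (the post itself contains no code). The names here (run_blinded_trial, the responder and judge callables) are hypothetical stand-ins rather than a real chat interface; the point is only that the judge sees both candidates through the same anonymized channel and has to guess which one is the machine.

import random

def run_blinded_trial(judge, ai_responder, human_responder, questions):
    """Return True if the judge fails to pick out the AI (i.e., the AI 'passes' this trial)."""
    responders = [ai_responder, human_responder]
    random.shuffle(responders)                      # randomize which label hides the AI
    candidates = dict(zip(("A", "B"), responders))
    # Both candidates answer the same questions over the same anonymized channel.
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in candidates.items()}
    guess = judge(transcripts)                      # the judge names the label it believes is the AI
    ai_label = next(label for label, r in candidates.items() if r is ai_responder)
    return guess != ai_label

if __name__ == "__main__":
    # Trivial stand-in responders and a judge that guesses at random, just to show the shape of the protocol.
    questions = ["What did you have for breakfast?", "Is a hot dog a sandwich?"]
    ai = lambda q: "I'd rather not say."
    human = lambda q: "Toast, mostly."
    coin_flip_judge = lambda transcripts: random.choice(sorted(transcripts))
    print("AI passed this trial:", run_blinded_trial(coin_flip_judge, ai, human, questions))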
