
The coming human obsolescence

Yglesias writes:
After all, unless androids are built to be deliberately crippled so that we can better enslave them, it seems like their existence would basically make us obsolete. Equality, it seems to me, would pretty swiftly lead to the biological extinction of the human race. Our cultural and intellectual tradition, I’d like to think, would have some merit and the androids will carry it forward. But most likely it either will turn out that it’s not worth building androids since genetically engineered fleshy people are superior, or else humans will die out in favor of androids. Long-term sharing of the world with a race of intelligent robots doesn’t seem realistic to me.
Personally, I'm a bit more skeptical of true robots taking over.  First, I suspect (though this is not much more than a hunch*) that artificial intelligence is a lot more difficult than many people suppose.  But even granting that, as computers get better and better at emulating human behavior, possibly culminating in true AI (or something basically indistinguishable from it), humankind will get increasingly jittery about the whole business, and there will be some kind of anti-robot populist backlash.  The key fact, in my view, is that robot production will be (at least initially) entirely under human control, and roboticization presents a clear threat to human employment.  Of course, it's easy to imagine a scenario where the controls break down and robots proceed to exterminate the human race.  The point is that there are quite a few roadblocks on the way to that scenario.

Something that seems far more likely is a kind of evolution where humanity gradually replaces itself with genetically and robotically enhanced post-humans.  The incentives for rich people to pursue this kind of enhancement will be strong, and the process far sneakier and more difficult to control—in fact, I'd say it's basically inevitable.  I'm reminded of a TED talk by Gregory Stock (author of Redesigning Humans) about deliberate changes to human biology and where that might eventually lead:
Now, not everything that can be done should be done, and it won't be done.  But when something is feasible in thousands of laboratories across the world, which is going to be the case with these technologies, and there are large numbers of people that see them as beneficial, which is already the case, and when they're almost impossible to police, it's not a question of if this is going to happen, it's a question of when and where and how.
Emphasis mine.  Banning these sorts of technologies, as people like Bill Joy suggest, would only push them underground and to the most unscrupulous or corrupt countries, and restrict them to the wealthy.  This kind of thing must be confronted head-on if we're interested in anything approximating a just future.  If we're worried about inequality now, wait until Bill Gates' grandkids have the calculating power of Mathematica, the strength of a young Jack Lalanne, and the reflexes of a hummingbird.

*Artificial intelligence presents some serious philosophical and mathematical issues.  I can't claim more than a passing familiarity with either set (see for example Gödel's incompleteness theorem, the Turing Test, and John Searle's Chinese room argument), but I am not convinced by any of the objections. I'm a bone-deep metaphysical naturalist, and if "intelligence" can't be simulated with the architecture underlying current technology, then I believe some new technology or system will emerge that will eventually make it possible. Basically I do not believe that the human mind has any quality that cannot be either copied or substituted by an artificial system. Still, I suspect that creating intelligence from scratch is much more difficult than trying to copy and adapt the existing format of the brain, which is part of the reason why I consider some kind of cyborg revolution more likely.

UPDATE: D'oh! Title fixed.

UPDATE II: Manifest Destiny agrees.

Comments

  1. some of us are already speaking out! it's just a matter of time before people wise up and start trying to protect their jobs, maybe robotophobia will be the end of xenophobia...

  2. I figured you'd have something to say about this one :-) I'm just hoping we can avoid the gray goo hypothesis.

  3. It's the hubris of man to think he can recreate a likeness of the human brain. Like Icarus' wings, this too shall fail. B

  4. No offense, but it seems to me that this kind of thing has nothing to do with hubris, or hubris is ancillary at most. Scientific hypotheses are either true or not. One might say it's hubris to think that the set of possible actions is in any way influenced by man's opinions, be they hubristic or nay. What about the brain makes it impossible to copy? It's fantastically complicated to be sure, but it's still just a particular arrangement of atoms. They've already succeeded with a goodly chunk of a rat's brain, and that was more than three years ago.

  5. I'm also reminded that while Icarus drowned, his father Daedalus succeeded in flying. I'm not sure which would be the better example. Maybe Milton's Lucifer?

  6. Don't know where to begin. Recreating a particular arrangement of atoms, even if it could be done, does not a human brain make. There is no scientific hypothesis that states if x, then the human brain. It is too complicated. That's my point. And in my humble opinion, it is hubris.
    B

  7. Is it not also hubris to assume the human brain is too complicated to replicate? You seem to be saying that a human is more than a collection of atoms; it takes some hubris to claim that we are in some fundamental way distinguished from everything else in the universe. SR

  8. Let's set the hubris aside; I don't think it's adding much to the discussion. I also don't think you're accurately stating the scientific hypothesis. I can think of two obvious candidates: 1) it is possible to completely replicate all functions of the human brain with artificial components; 2) it is possible to successfully recreate the architecture of the brain inside a computer.

    Now there is definitely some bleed-through with philosophy when we ask how we're going to test these—there is no universally accepted standard for true AI. But I think the Turing test will serve in a pinch. Under that test, the candidate AI has to be indistinguishable from a genuine human candidate to a blinded observer. True, it's not as cut-and-dried as an NMR spectrum, but it's fundamentally testable, and there are plenty of other tests one might pick if that one doesn't sound satisfying. We're not trying to undermine the foundations of science like the Discovery Institute.


