FC: More on Singularity and group releases Friendly AI guidelines

From: Declan McCullagh (declanat_private)
Date: Thu Apr 19 2001 - 20:02:15 PDT

    [This is a response to my article today about the Singularity Institute and 
    Friendly AI, which is at http://www.politechbot.com/p-01934.html -- it's an 
    excerpt from an email exchange, snipped and forwarded with permission. --Declan]
    
    **********
    
    Date: Thu, 19 Apr 2001 13:19:43 -0400
    From: "Eliezer S. Yudkowsky" <sentienceat_private>
    To: Declan McCullagh <declanat_private>
    Subject: Re: My reaction...
    
    [...]
    
    Page title:  "Making HAL your Pal".  Good title.
    
    Page summary:  "What happens when artificial intelligence becomes far
    smarter than humans? What will keep it friendly? The Singularity Institute
    says it has the answers for what happens during the next stage of
    humanity's evolution. By Declan McCullagh."  Great summary.
    
    First sentence:  "Eliezer Yudkowsky has devoted his young life to an
    undeniably unusual pursuit..."
    
    I guess I felt that there was too much about me, and who produced
    "Friendly AI" and why and whether he was a nice person, rather than the
    actual topic of Friendly AI and the Singularity.
    
    I'm young, Declan.  I didn't go to college or even high school.  What I
    rely on, for my credentials, is not the fact that I once got an absurdly
    high SAT score at age eleven; what I rely on is people looking at my
    present-day work and seeing that it is good.  I have no objection to your
    including that quote from whichever AI researcher it was, but I also wish
    you'd gone into at least a little detail on something - said something
    about probabilistic supergoals, for example - so that anyone with a
    knowledge of cognitive science who reads the article can say:  "Hm, that
    sounds like an interesting idea for Friendly AI, I'd like to know a bit
    more about it."  I didn't see anything in the article that would enable
    people to do that.  There are things about my history, and about what
    we're trying to do, and what other people think of it, and what people
    think of the Singularity, and who invented the Singularity, but not
    anything about what the Singularity *is*, or what, *specifically*,
    "Friendly AI" suggests.  Even the words "Friendliness is not imposed, it's
    what the AI *wants* to do", would have been enough to intrigue people -
    show them a little of the future shock, the strangeness of the
    Singularity.
    
    That's why I was enthused by the title and summary, but worried when I
    read the first sentence.  I'd as soon turn grey and faceless and be just
    another author's name on the page than permit my admittedly interesting
    life story to get in the way of SIAI.
    
    Part of all this was simply my inexperience, of course.  In the future,
    for example, I'll be careful to say:  "Any damn fool can design a system
    that will work right if nothing goes wrong.  That's why Friendly AI is
    740K long."
    
    And no, the Singularity Institute doesn't have any code.  We *say* we
    don't have any code.  We plaster that fact all over the place.  We *beg*
    people for the funding we need to start writing code, and in the meantime,
    we do what we can to make the Singularity safer.  And yet people, dammit,
    don't just advise us to start writing code, which is one thing, but
    actually get all *angry* at us for not having code?  How the devil do they
    think code gets written?  By hiding out in the basement until you've
    written a complete working AI?  How would people know we were worth
    funding if we didn't do what we could in the meanwhile?
    
    [...]
    
    Of course I expect a lot of hostile reactions.  The most I can hope for is
    that I can accumulate a bunch of equal and opposite good reactions to
    counter them.  Any crackpot can say things like "Einstein was mocked!
    Drexler was mocked!", and the frequent use by crackpots of that line is
    exactly why I try never to use it.  Still, a hostile reaction by a
    distinguished scientist neither proves nor disproves a theory that would
    be expected to be controversial.  You have to judge by looking at the
    content.
    
    [...]
    
    I forecast a good chance that the government will interfere and mess up the
    Singularity.  Remember, Declan, I'm not necessarily doing this because I
    forecast a large probability of success, but because these are the actions
    that lead to the greatest available probability of success.
    
    --              --              --              --              --
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    
    
    
    
    -------------------------------------------------------------------------
    POLITECH -- Declan McCullagh's politics and technology mailing list
    You may redistribute this message freely if it remains intact.
    To subscribe, visit http://www.politechbot.com/info/subscribe.html
    This message is archived at http://www.politechbot.com/
    -------------------------------------------------------------------------
    



    This archive was generated by hypermail 2b30 : Thu Apr 19 2001 - 19:21:32 PDT