[ExI] Existential risk of AI

spike at rainier66.com
Tue Mar 14 14:26:09 UTC 2023



-----Original Message-----
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Stuart LaForge via extropy-chat
Subject: [ExI] Existential risk of AI


Quoting Gadersd via extropy-chat <extropy-chat at lists.extropy.org>:

> >... any psycho could potentially get detailed instructions on how to end the world. ... gadersd

China may have worked that out for us, without AI.  Now every Bond James Bond villain realizes that just plain old serial passage experiments can breed a super virus.  We need not argue over whether C19 was one, for the scientific literature which has since come to light shows that we knew long before 2019 it was theoretically possible.

>...I have over the years been a critic of Eliezer's doom and gloom...

Ja, what a lot of us thought at the time (about 1996, when Eli showed up) was that he was making the classic young person's error: predicting that change happens a lot faster than it does.  This stands to reason for teenagers: things are changing quickly in their lives.  But those of us now in our 60s know how long things take to change, and are surprised when they change as quickly as they do.  The appearance of ChatGPT made me realize the nature of punctuated equilibrium in AI.

Think of the big sudden changes.  When the Google search engine showed up in 1999, that changed a lotta lotta.  Now ChatGPT looks like it is doing it again, and if we can get this software to ride in a phone... and be personally trainable... we are good for yet another revolution.

>...Not because I think his extinction scenarios are outlandish, but because the technology has enough upside to be worth the risk...

That's what he said (Dr. Fauci, in about 2012).

>... That being said, I believe that we cannot give in to the animal spirits of unfounded optimism and must tread carefully with this technology...

Thanks for that, Stuart.  Ordinarily I am a huge fan of animal spirits.  On this one I fully agree: we must watch our step.

>...If you have two dogs... animal trainers use to teach naive animals new tricks.  By seeing that an already conditioned animal gets treats for exhibiting a certain behavior, the untrained animal will experimentally try to mimic the behavior that earned the other animal its reward...

Stuart, have you ever seen professional police dog trainers doing their jobs?  If you eeeever get half a chance, jump at that.  Most astonishing it is.  They take them out in pairs usually.  The new dog watches the veteran cop go thru his paces.  The trainer does not reward the veteran dog with treats.  Far too undignified is this.  He rewards the veteran dog with voice commands.  From that comes rank.  Police dogs have rank!  And they dang well know it.

If you see them training out on a public field where pet dogs are nearby, watch how the police dogs act toward the pet dogs (who are on a leash (the police dogs don't have those.))  They appear to regard the leashed animals the way we would regard a thug in handcuffs being led by a constable.

You have never seen a junior police dog work so hard as when he is struggling to learn what he is supposed to do, in order to get the coveted voice reward from the old man.  It isn't "good boy" but rather a single syllable, not in English, barked in a way that humans can bark.  The junior dog envies the veteran, wants to be like him, wants to do whatever the old man commands, wants to achieve RANK!

But I digress.  I love dogs.  Such marvelous beasts, good sports they are, excellent employees.

>...Which brings me to my point: You cannot design a machine that learns and not have it want the same treatment as other intelligences... Stuart LaForge

OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm?

spike
