[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023

Gadersd gadersd at gmail.com
Thu Mar 2 00:26:26 UTC 2023


>> Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them.

Our current AIs are black boxes. Their internal workings are a mystery, and these systems could harbor goals that we are oblivious to. If we could prove that a system only has the goal of giving benign advice, without any personal agenda, that would help, but we do not know how to do that even in theory. Even a system that only gives advice is extremely dangerous, as any psycho could potentially get detailed instructions on how to end the world. It could be as trivial as having the AI design a super virus. Our current filters are very fallible, and we do not know how to definitively prevent an AI from giving harmful advice. We are heading toward a field of landmines.

> On Feb 28, 2023, at 12:25 PM, Dave S via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> On Tuesday, February 28th, 2023 at 11:14 AM, Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
>> >> Why would you ask a super intelligent AI with solving goals rather than asking it how the goals could be achieved?
>> 
>> A super intelligence wouldn’t need to be “asked.” Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours.
> 
> As I said, the verb should have been "task". If I ask Super AI "How would you do X?", I don't expect it to do X. And I don't expect it to do anything without permission.
> 
> I have no idea what 1000x smarter means. An AI can be as smart as a person--or even smarter--without having the ability to set its own goals. Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them.
> 
>> >> Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?
>> 
>> Because it’s profitable to give AI the authority to perform tasks traditionally done by humans. A super intelligence can potentially do quite a lot of harm with relatively little authority. A super intelligent hacker only needs to find a basic software bug to gain access to the internet; imagine what might happen next.
> 
> Something can be profitable without being a good idea. AIs should be our tools, not independent beings competing with us.
> 
> -Dave


