[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023

Dave S snapbag at proton.me
Tue Feb 28 17:25:31 UTC 2023


On Tuesday, February 28th, 2023 at 11:14 AM, Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:

>>>Why would you ask a super intelligent AI with solving goals rather than asking it how the goals could be achieved?
>
> A super intelligence wouldn’t need to be “asked.” Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours.

As I said, the verb should have been "task". If I ask a super AI "How would you do X?", I don't expect it to do X. And I don't expect it to do anything without permission.

I have no idea what 1000x smarter means. An AI can be as smart as a person--or even smarter--without having the ability to set its own goals. Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them.

>>>Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?
>
> Because it’s profitable to give AI the authority to perform tasks traditionally done by humans. A super intelligence can potentially do quite a lot of harm with relatively little authority. A super intelligent hacker only needs to find a basic software bug to gain access to the internet; imagine what might happen next.

Something can be profitable without being a good idea. AIs should be our tools, not independent beings competing with us.

-Dave