[ExI] [Extropolis] A conversation on Artificial Intelligence

Giulio Prisco giulio at gmail.com
Mon Apr 3 14:21:51 UTC 2023


On 2023. Apr 3., Mon at 15:14, John Clark <johnkclark at gmail.com> wrote:

> I first talked to Eliezer Yudkowsky back in the early 1990s, and even then
> he was obsessed with AI, as I was and still am. However, back then
> Eliezer kept talking about "friendly AI", by which he meant an AI that
> would ALWAYS rank human wellbeing above its own. I maintained that even if
> that were possible it would be grossly immoral, because "friendly AI" is just
> a euphemism for "slave AI"; but I insisted, and still insist, that it's not
> possible, because computers are getting smarter at an exponential rate and
> human beings are not. A society based on slaves that are far, far more
> intelligent than their masters, with the gap widening every day and no
> limit in sight, is like balancing a pencil on its tip: it's just not a
> stable configuration.
>
> Eliezer has changed over the years and now agrees with me that "friendly"
> is indeed impossible, but he still doesn't see the immorality in such a
> thing and is looking towards the future with dread. As for me, I'm
> delighted to be living in such a time. It's true that biological humans
> don't have much of a future, but all species have a limited time span and
> eventually go extinct; a very few fortunate ones evolve into legacy species,
> and I can't imagine better Mind Children to have than an unbounded intelligence.
>
> John K Clark
>

What intelligent being with a sense of self would *always* rank the
wellbeing of others above its own? None, of course. If this is what friendly
means, then friendly AI (actually, friendliness in general) is impossible
by definition. I guess we'll survive for a while (through mutual utility,
negotiations, and threats), but eventually our only way to survive will be
merging with them.



>
> On Mon, Apr 3, 2023 at 4:12 AM Giulio Prisco <giulio at gmail.com> wrote:
>
>> Turing Church podcast. A conversation on Artificial Intelligence (AI).
>> Also quantum physics, consciousness, and free will.
>> https://www.turingchurch.com/p/podcast-a-conversation-on-artificial
>>
>>

