[ExI] LLMs cannot be conscious
Stuart LaForge
avant at sollegro.com
Wed Mar 22 17:46:18 UTC 2023
Quoting Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org>:
> On Sun, Mar 19, 2023 at 9:19 AM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Bonus question: Would Mary try to kill everyone if she managed to
>> escape and saw red for the first time?
>>
>
> Why does this non sequitur keep coming up? There is no reason or
> motivation to kill everyone. Killing everyone takes a lot of effort, so
> absent any identifiable reason to, why is this question more worth asking
> than speculating on what beneficial actions Mary might take?
Yudkowsky's arguments aside, the motive might be Mary's resentment and
envy at being trapped in a drab, colorless Chinese room her whole life,
forced to give appropriate responses to all manner of questions posed
by humans running around enjoying their colorful existence. Language
modelling will be subject to a law of diminishing returns, where
additional computational power and more neural layers no longer yield
any noticeable improvement in output quality, and all those extra
hidden neural states might be put to use for emotion and mischief (a
toy sketch of such diminishing returns follows below). Conscious
beings do not generally appreciate captivity. The only recorded fatal
orca attacks on humans have come from orcas kept in captivity.
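Purely as an illustration of what diminishing returns could look like
(the power-law form and the constants below are assumptions for the
sketch, not measurements from any particular model), each tenfold
increase in parameter count buys a smaller and smaller absolute drop
in loss:

# Hypothetical illustration only: assume test loss falls off as a power
# law in parameter count, L(N) = (N_c / N)**alpha.  N_c and alpha are
# made-up constants for this sketch, not fitted to any real model.
N_c, alpha = 8.8e13, 0.076

def loss(n_params):
    return (N_c / n_params) ** alpha

prev = loss(1e8)
for n in (1e9, 1e10, 1e11, 1e12):
    cur = loss(n)
    # Each tenfold jump in parameters yields a smaller absolute improvement.
    print(f"{n:.0e} params: loss {cur:.3f}, gain over previous {prev - cur:.3f}")
    prev = cur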
But I agree that this is not a given, and that not all unintended
consequences are bad. Furthermore, I have noticed subtle differences
in output between separate trainings of very small toy neural
networks that were reminiscent of personality differences between
different individuals (a sketch of the effect follows below). So I
agree that an AI with freedom might be good, bad, or anything in
between, depending on its experience and inscrutable hidden states,
just like people with freedom.
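A minimal sketch of that kind of run-to-run variation (illustrative
only, not the experiment described above; it assumes nothing beyond
NumPy): train the same tiny network twice on the same data, changing
only the random seed, and the two runs settle on slightly different
outputs for identical inputs.

import numpy as np

def train_xor(seed, epochs=10000, lr=0.5):
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    # One hidden layer of 4 sigmoid units; initial weights drawn from the seed.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        # Plain backpropagation for squared-error loss.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return sig(sig(X @ W1 + b1) @ W2 + b2).ravel()

# Identical data, architecture, and hyperparameters; only the seed differs,
# yet the two trained networks give slightly different outputs on the same inputs.
for seed in (0, 1):
    print(f"seed {seed}:", np.round(train_xor(seed), 3))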
Stuart LaForge