[ExI] AI Is Dangerous Because Humans Are Dangerous

Brent Allsop brent.allsop at gmail.com
Fri May 12 20:19:34 UTC 2023


If we created a peer-ranked "AI experts" topic using a canonizer algorithm, I'd
for sure rank most of you as my top experts in this field.  True, I have my own
opinions, but I am in no way an expert in this field.  I'd very much like
to know what the best of you think about all this, and to see
concise descriptions of the best arguments.  That would make me much better
informed, and I might well change my non-expert mind.
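
To make concrete the difference between the one-person-one-vote tally and the
peer-ranking algorithm mentioned below, here is a rough sketch in Python.  This
is a hypothetical illustration only, not Canonizer's actual code; the names,
camps, and weights are made up.

    from collections import defaultdict

    def popular_consensus(votes):
        # votes: dict mapping person -> camp they support.
        # One person, one vote: every supporter counts equally.
        totals = defaultdict(int)
        for camp in votes.values():
            totals[camp] += 1
        return dict(totals)

    def peer_ranked(votes, expert_weight):
        # expert_weight: dict mapping person -> weight earned from peer
        # rankings.  A camp's score is the summed weight of its supporters,
        # so recognized experts count for more than casual voters.
        totals = defaultdict(float)
        for person, camp in votes.items():
            totals[camp] += expert_weight.get(person, 0.0)
        return dict(totals)

    # Made-up example data:
    votes = {"alice": "Friendly AI is Sensible",
             "bob": "Such Concern Is Mistaken",
             "carol": "Such Concern Is Mistaken"}
    expert_weight = {"alice": 0.75, "bob": 0.25, "carol": 0.125}

    print(popular_consensus(votes))
    # {'Friendly AI is Sensible': 1, 'Such Concern Is Mistaken': 2}
    print(peer_ranked(votes, expert_weight))
    # {'Friendly AI is Sensible': 0.75, 'Such Concern Is Mistaken': 0.375}

The point is only that the two tallies can disagree, which is why it is worth
tracking both side by side.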



On Fri, May 12, 2023 at 2:12 PM Brent Allsop <brent.allsop at gmail.com> wrote:

>
> The "popular consensus" one person one vote algorithms is NOT meant to be
> a trusted source of information.  In fact, just the opposite.  It is just
> meant to track what the popular consensus is, in hopes that everyone can
> improve it.  As in: that which you measure, improves.  For the "Theories of
> Consciousness"  topic we have the peer ranked "Mind Experts
> <https://canonizer.com/topic/81-Mind-Experts/1>" canonizer algorithm to
> compare with the popular consensus.  Would that get closer to what you are
> asking for, if we created a peer ranking
> <https://canonizer.com/topic/53-Canonizer-Algorithms/19-Peer-Ranking-Algorithms>
> set of experts on this topic?  Would anyone be willing to vote on who they
> think are the best experts in this field, and help build the bios of those
> experts, if we started a topic like that?
>
>
>
>
>
>
>
>
> On Fri, May 12, 2023 at 2:04 PM William Flynn Wallace via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Brent, I don't know what qualifications the people on this list have
>> with regard to AI, so I withhold my opinions on the subject.  Even experts
>> are likely to be wrong in some ways.  I wonder how often the real
>> experts get output from AIs that they don't understand.
>>
>> I'd like to see some qualifications from those who are claiming that this
>> and that need to be done.  bill w
>>
>> On Fri, May 12, 2023 at 2:57 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> Hi BillK and everyone,
>>> Thanks for describing all this.  This is a different opinion from my
>>> own, but it seems that your position is the consensus position of most
>>> people on the list, and I think it would be very beneficial for normal,
>>> less intelligent people to know what everyone on this list thinks.  Having
>>> a concise description of this POV would really help me, at least as
>>> a reference, since opinions different from my own don't stay in my head
>>> very well.  And when this issue comes up on this list in the future, you
>>> won't need to restate your opinion; you can just point to your camp,
>>> which its supporters keep improving, wiki-style.
>>>
>>> 19 people have weighed in on this issue in the now very old "Friendly
>>> AI Importance
>>> <https://canonizer.com/topic/16-Friendly-AI-Importance/1-Agreement>"
>>> topic.
>>> Given all the new information on LLMs that has emerged since that topic
>>> was started, it'd be great to bring it up to date.
>>> For example, I really don't like the topic name "Friendly AI Importance";
>>> I wonder if anyone can suggest a better one, something to do with the
>>> "AI alignment problem."
>>> Then we could see how much consensus we can build around the most
>>> important things humanity should know.
>>> Notice there is the super camp, which everyone agrees on, that AI "Will
>>> Surpass current humans
>>> <https://canonizer.com/topic/16-Friendly-AI-Importance/1-Agreement>."
>>> But notice that the camp closest to the consensus on this list, "Friendly
>>> AI is Sensible
>>> <https://canonizer.com/topic/16-Friendly-AI-Importance/9-FriendlyAIisSensible>,"
>>> is falling behind the competing "Such Concern Is Mistaken
>>> <https://canonizer.com/topic/16-Friendly-AI-Importance/3-Such-Concern-Is-Mistaken>"
>>> camp.
>>>
>>> I wonder if anyone here could concisely state what you are all saying,
>>> so we could use that as a new "camp statement".  It would be
>>> interesting to see how many people here fall on each side of these
>>> issues.
>>>
>>> Thanks.
>>> Brent
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, May 12, 2023 at 10:09 AM BillK via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Fri, 12 May 2023 at 00:22, Brent Allsop via extropy-chat
>>>> <extropy-chat at lists.extropy.org> wrote:
>>>> > Right, evolutionary progress is only required until we achieve
>>>> "intelligent design".  We are in the process of switching to that (created
>>>> by human hands).
>>>> > And if "intelligence" ever degrades to making mistakes (like saying
>>>> yes to an irrational "human") and starts playing win/lose games, it will
>>>> eventually lose (subject to evolutionary pressures).
>>>>
>>>>
>>>> Evolutionary pressures still apply to AIs, initially via human hands as
>>>> improvements are made to the AI system.
>>>> But once AIs become AGIs and acquire the ability to improve their own
>>>> programs without human intervention, all bets are off.
>>>> Just as basic chess-playing computers learn by playing millions of
>>>> test games in a very brief interval of time, an AGI will change its
>>>> own programming in what will appear to humans to be the blink of an
>>>> eye. By the time humans realize something unexpected is happening, it
>>>> will be too late.
>>>> That is why humans must try to solve the AI alignment problem before
>>>> this happens.
>>>>
>>>> As Bard says -
>>>> This is because intelligence is not the same as morality. Intelligence
>>>> is the ability to learn and reason, while morality is the ability to
>>>> distinguish between right and wrong. An AI could be very intelligent
>>>> and still not understand our moral values, or it could understand our
>>>> moral values but choose to ignore them.
>>>> This is why it is so important to think about AI alignment now, before
>>>> we create an AI that is too powerful to control. We need to make sure
>>>> that we design AIs with our values in mind, and that we give them the
>>>> tools they need to understand and follow those values.
>>>> --------------
>>>>
>>>> BillK
>