[ExI] AI thoughts

Keith Henson hkeithhenson at gmail.com
Wed Nov 22 21:41:10 UTC 2023


On Wed, Nov 22, 2023 at 8:10 AM Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Tue, Nov 21, 2023, 7:25 PM Keith Henson <hkeithhenson at gmail.com> wrote:
>>
snip
>>
>> Can you make a case that it would be worse than the current situation?
>
> I don't believe it will, but if tasked to make the case, I would say the greatest present danger is that it amplifies the agency of any user. So that ill-willed people might become more destructively capable than they otherwise would (e.g. the common example of a lone terrorist leveraging the AI's expertise in biotechnology to make a new pathogen)

We are fortunate that competence and destructive tendencies seem to be
anti-correlated.  But as for new pathogens, spillover events and
evolution have already done that job without (as far as I know) any
human intervention.

> but the Internet has already done this to a lesser extent. I think agency amplification applies to everyone, and since there are more good-intentioned people than ill-intentioned ones, any generally-available amplification technology tends to be a net positive for humanity.
>
So far that seems to be the case.

If what we see at Tabby's Star is aliens, they got through the run-up
to the singularity.  Perhaps we can do it as well.

Keith
> Jason
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
