[ExI] Can humans work with AI?

William Flynn Wallace foozler83 at gmail.com
Fri Jul 21 18:39:21 UTC 2017

On Fri, Jul 21, 2017 at 1:19 PM, BillK <pharos at gmail.com> wrote:

> Probably not - humans interfere too much!  :)
> <http://behavioralscientist.org/dont-touch-computer/>
> Quotes:
> Don’t Touch the Computer
> By Jason Collins     July 13, 2017
> As Cowen notes, the natural evolution of the human-machine
> relationship is from a machine that doesn’t add much, to a machine
> that benefits from human help, to a machine that occasionally needs a
> tiny bit of guidance, to a machine that we should leave alone.
> There are some great failures by grandmasters in freestyle chess
> tournaments. Their confidence leads them to interfere too often with
> the superior computer, whereas the best freestyle chess players will
> only overrule their machine a handful of times a game. If you can find
> a humble but skilled human, there could be room for success.
> Absent limiting human intervention to the right level, the pattern we
> will see is not humans and machines working together for enhanced
> decision making, but machines slowly replacing humans decision by
> decision. Algorithms will often be substitutes, not complements, with
> humans left to the (at the moment, many) places where the algorithms
> can’t go yet.
> A friend of mine often reminds me of an old joke about automation on
> airliners. The ideal team is a pilot and a dog. The pilot’s job is to
> feed the dog. The dog’s job is to bite the pilot if the pilot tries to
> touch anything. While we may still be some way away from this scenario
> in the world of aviation, in some domains we’re already there.
> ---------
> BillK
You are probably tired of hearing this from me, but if the shoe fits...

We can change the software of computers, and since I assume it will never
reach perfection, that work is ongoing.

We can also change the software of people, through learning.  In some cases
that means learning new things, and in others unlearning what they already
know.  It is the latter that is of interest here: somehow we get programmed
with all sorts of illogical, irrational attitudes, mental sets, and rules
of thumb - some known as cognitive errors - and everyone needs
reprogramming to correct these, if possible.

Take a cocky, egotistic attitude:  it can lead to grievous errors, but it
can also lead to the kind of stubbornness that overcomes obstacles no one
thought possible, through sheer effort - an unwillingness to fail.  This
illustrates how difficult and nuanced such reprogramming will be.

I have no idea what kind of mess education will create when it gets rid of
the idea that knowledge has to be instilled on a blank slate and instead
adopts the idea of reprogramming the malicious software we are carrying
around in our heads, but it will be entertaining.

bill w

> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
