[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be

Mike Dougherty msd001 at gmail.com
Fri Feb 5 20:08:39 UTC 2016

On Thu, Feb 4, 2016 at 12:53 PM, William Flynn Wallace
<foozler83 at gmail.com> wrote:
> Yeah, I guess that was ambiguous.  Of course I am biased, being a social
> psychologist, but many fields overlap psych:  marketing, management, AI,
> economics and so forth.  If someone is going to program robots to converse
> with people, we need to know more about people than we do, esp. if the robot
> is supposed to detect deception, hidden meanings, emotional reactions, and
> other subtleties.  Enormous progress has been made in reading faces, for example, that would
> need to be programmed into robots or AI or whatever broad term applies.
> Ditto for research showing how vocalizations can be read for stress and
> other emotions.  Lie detection is getting sophisticated.

I agree.  I think it's an important lesson for non-AI developers to
learn too.  As a non-AI software developer, I am disgusted with the
state of human-computer interaction (HCI) that the majority of devs
produce.  There is a long history of forcing people to adapt to
whatever new software/UI the developer perceives as "better", without
regard for users' muscle memory.

Why can't we use the latest version of Excel with a UI skin that looks
like good-ol' retro Excel '95 (or 2005, or 2010, or whatever it was
when it was learned)?  Sure, there might be some new features that
weren't available then, but the basics of spreadsheets haven't changed
since Lotus 1-2-3 (which was modeled on pre-industrial bookkeeping
practices, right?).  To suggest that everything we know about using a
menu should be trashed so we can click on a ribbon (in the name of
'easier') is to miss a fundamental UX principle.  A "Cool new stuff"
menu might be a place to look... or the application (and computers in
general) has enough resources to know what you're doing and offer
context-aware modes for working through most of the heavy lifting.
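The "UI skin" idea amounts to a translation layer: the familiar
interface is just a stable mapping from the commands users already
know onto whatever the current application actually exposes.  A toy
sketch of that idea (every name here is hypothetical, not a real
Office API):

```python
# Toy sketch of a "legacy skin" as a translation layer (all names hypothetical):
# the user keeps typing the classic menu paths they remember, and the skin
# resolves each one to the command id the current application implements.

CLASSIC_TO_CURRENT = {
    "Data > Sort": "ribbon.data.sort",
    "Insert > Chart": "ribbon.insert.chart",
    "Tools > Options": "backstage.options",
}

def resolve(classic_path):
    """Map a classic menu path to the current command id, if one exists."""
    try:
        return CLASSIC_TO_CURRENT[classic_path]
    except KeyError:
        raise KeyError(f"no modern equivalent registered for {classic_path!r}")

# Muscle memory preserved: old path in, current command out.
print(resolve("Data > Sort"))  # -> ribbon.data.sort
```

The point of the sketch is only that nothing about the new engine
forces a new surface: the old surface can be kept as a cheap lookup
while new features live alongside it.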

*shrug* I digress.  The point was that computers need to get better at
working with people rather than people getting better at working with
computers.  The people who help computers do that need to understand
the people that would eventually be helped by the AI they create.
"People will adapt" should be an unacceptable attitude.
