# [ExI] personal

Adrian Tymes atymes at gmail.com
Tue Mar 15 21:14:22 UTC 2016

```On Mar 15, 2016 1:14 PM, "William Flynn Wallace" <foozler83 at gmail.com>
wrote:
> I got as far as nested DO loops but never wrote a complicated program.

So if I told you to write a loop where you start with an array of numbers
and multiply them all together, printing out the result at each step (as
in, print the first number, then the first two multiplied together, then
the first three, and so on until you reach the end of the array), you could
at least picture roughly how to do it, and you wouldn't be totally lost,
right?

> As far as statistics, I can do factor analysis (and don't trust it too
much), analysis of variance, and particularly regression analysis, as well
as all the lower things.  All in a people experiment context.

So let's say we had two conditions, A and B.  We've measured a series of
events, and noted how many times A occurred without B, how many times B
occurred without A, how many times both happened, and how many times
neither happened.  Would you be able to plug in the numbers for Bayes'
theorem, P(A|B) = P(B|A)*P(A)/P(B) ?

If both of those are true, then you have the basic knowledge to understand
how modern AIs work.  If either one is not true, that's a topic you'll need
to study up on.

> So I want to understand AI from a psychologist's point of view.  Are we
trying to teach them how to think like us?  DO they?

First define how we think.

They are coming up with algorithms that lead to useful conclusions.  It is
true that we are capable of following those algorithms too, if far more
slowly (at least when we do so consciously, as opposed to subconscious
things like visual pattern recognition and edge detection) - after all, we
dreamed up those algorithms.

But do those algorithms reflect how we think day to day, most of the
time, in practice?  That is more the realm of neurobiology than AI.

> People are so fuzzy and complicated and most things are centered around
probability, whereas AI seems so cut and dried and one answer and no
probability.

Ahahaha, no.  AI is largely about probability.

Now, there is often one answer so much more likely than any other that
there may as well be only one...but surely you have encountered many such
situations too.  If you are driving a car and have just pulled into a
parking spot at your destination, is not the usual best - and practically
only - answer to apply the brakes and stop, even though the answer could
sometimes be to reverse and leave (if you then notice it's a handicapped
spot, or if some bat-wielding parking spot protector starts yelling and
running at you)?

> But likely it is as complicated as the computer in Pratchett's Discworld
where you get "OUT OF CHEESE ERROR" that no one understands.

That comes from a different type of programming: the traditional non-AI
where all logic was discovered and implemented by humans...who sometimes
labelled things badly, such as calling a certain type of transitory memory
"cheese" because they were thinking of cheese when they wrote that memory
manager.
```
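The running-product loop described in the message can be sketched in a few lines of Python (the thread does not name a language, and `running_products` is just an illustrative name):

```python
def running_products(numbers):
    """Return the product of the first 1, 2, ..., len(numbers) entries."""
    results = []
    product = 1
    for n in numbers:
        product *= n            # fold in the next element of the array
        results.append(product)
    return results

# Print the running product at each step, as the exercise asks.
for p in running_products([2, 3, 4]):
    print(p)                    # prints 2, then 6, then 24
```

Anyone who can picture roughly this structure, an accumulator updated inside a loop, has the loop-writing intuition the question is probing for.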
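Likewise, the Bayes' theorem exercise can be sketched directly from the four event counts; the counts and the function name below are made up for illustration:

```python
def p_a_given_b(a_only, b_only, both, neither):
    """Estimate P(A|B) via Bayes' theorem from the four event counts."""
    total = a_only + b_only + both + neither
    p_a = (a_only + both) / total          # P(A)
    p_b = (b_only + both) / total          # P(B)
    p_b_given_a = both / (a_only + both)   # P(B|A)
    return p_b_given_a * p_a / p_b         # P(A|B) = P(B|A)*P(A)/P(B)

# Example: 10 events with A alone, 20 with B alone, 30 with both,
# 40 with neither.  P(A|B) works out to 30 / (20 + 30) = 0.6.
print(p_a_given_b(10, 20, 30, 40))
```

A useful sanity check: with counts plugged in, the formula algebraically reduces to `both / (b_only + both)`, exactly what you would get by counting "among the events where B occurred, how often did A also occur" directly.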