[ExI] personal

William Flynn Wallace foozler83 at gmail.com
Tue Mar 15 22:52:21 UTC 2016


The statistics one would be a lot easier than remembering the programming,
but yes, I could do both, or at the very least understand it.  I am OK with
Bayes' theorem and conditional probability.  I am not sure what several
of you mean by saying that you are a Bayesian.

Human probabilities change instant by instant, like seeing a parking place
taken and looking around for another.  I see the AI as coming up with one
answer and then taking that action.  I have trouble seeing an AI dithering
among several actions that have nearly equal probabilities while those
probabilities are changing very rapidly.  Are there neurotic AIs?

First define how we think.
All I know is what is in the cognitive psychology texts and what I can
glean from the neuropsychology findings as to what does what, what it is
connected to, what happens if there is damage there, what happens when you
stimulate it, and so on.  (That is, I read Descartes' Error and the like
and can follow them.)  But what I think is that most of what we call
thinking is totally unconscious, and some of it, maybe most, is never
available to the conscious mind unless the unconscious shoves it up to the
conscious; the latter is what I call insight or intuition, or just simply
a memory.  If someone says "I did not know what I thought about that until
I said it," or "I did not know that I knew that" (like coming up with
some answer in Trivial Pursuit), I believe them.  We know very, very little
about these aspects of the unconscious.  It might be that if we made these
processes conscious we could not do them.  Our unconscious knows how - we
don't.

Some people can do fantastic things: some Russian woman can take the 17th
root of a 24-digit number in her head in a minute or so.  She cannot tell
us how she does it.  Maybe you have to be able to do it to understand that.

Which of you is actually working in the field of AI in some way, and who
is a hobbyist (not part of your job) at it?

bill w

On Tue, Mar 15, 2016 at 4:14 PM, Adrian Tymes <atymes at gmail.com> wrote:

> On Mar 15, 2016 1:14 PM, "William Flynn Wallace" <foozler83 at gmail.com>
> wrote:
> > I got as far as nested DO loops but never wrote a complicated program.
>
> So if I told you to write a loop where you start with an array of numbers
> and multiply them all together, printing out the result at each step (as
> in, print the first number, then the first two multiplied together, then
> the first three, and so on until you reach the end of the array), you could
> at least picture roughly how to do it, and you wouldn't be totally lost,
> right?
>
> > As far as statistics, I can do factor analysis (and don't trust it too
> much), analysis of variance, and particularly regression analysis, as well
> as all the lower things.  All in a people experiment context.
>
> So let's say we had two conditions, A and B.  We've measured a series of
> events, and noted how many times A occurred without B, how many times B
> occurred without A, how many times both happened, and how many times
> neither happened.  Would you be able to plug in the numbers for Bayes'
> theorem, P(A|B) = P(B|A)*P(A)/P(B) ?
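[Plugging hypothetical counts into that formula is just a few divisions; the numbers below are invented purely for illustration.]

```python
# Hypothetical event counts (illustrative numbers only).
a_only, b_only, both, neither = 30, 10, 20, 40
total = a_only + b_only + both + neither

p_a = (a_only + both) / total          # P(A) = 50/100
p_b = (b_only + both) / total          # P(B) = 30/100
p_b_given_a = both / (a_only + both)   # P(B|A) = 20/50

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b

# Sanity check: this matches counting directly, both / (b_only + both).
print(p_a_given_b)
```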
>
> If both of those are true, then you have the basic knowledge to understand
> how modern AIs work.  If either one is not true, that's a topic you'll need
> to study up on.
>
> > So I want to understand AI from a psychologist's point of view.  Are we
> trying to teach them how to think like us?  DO they?
>
> First define how we think.
>
> They are coming up with algorithms that lead to useful conclusions.  It is
> true that we are capable of following those algorithms too, if far more
> slowly (at least when we do so consciously, as opposed to subconscious
> things like visual pattern recognition and edge detection) - after all, we
> dreamed up those algorithms.
>
> But do those algorithms reflect how we think, day to day/most of the
> time/in practice?  That is more the realm of neurobiology than AI.
>
> > People are so fuzzy and complicated and most things are centered around
> probability, whereas AI seems so cut and dried and one answer and no
> probability.
>
> Ahahaha, no.  AI is largely about probability.
>
> Now, there may often be one answer that is so far more likely than any
> other that there may as well be only one...but surely you have encountered
> many such situations too.  If you are driving a car and you have entered a
> parking spot at your destination, is not the usual best - and practically
> only - answer to apply the brakes and stop, even if the answer could
> sometimes be to reverse and leave (if you then notice it's a handicapped
> spot, or if some bat-wielding parking spot protector then starts yelling
> and running at you)?
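[A toy version of that parking decision, as a probabilistic model might score it; the action names and probabilities are invented for illustration.]

```python
# Invented action probabilities for the parking scenario.
action_probs = {
    "apply_brakes": 0.97,
    "reverse_and_leave": 0.02,  # e.g. you notice it's a handicapped spot
    "keep_idling": 0.01,
}

# One action dominates, so the choice looks deterministic,
# but the model is probabilistic throughout.
best = max(action_probs, key=action_probs.get)
print(best)  # prints: apply_brakes
```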
>
> > But likely it is as complicated as the computer in Pratchett's Discworld
> where you get an "OUT OF CHEESE ERROR" that no one understands.
>
> That comes from a different type of programming: the traditional non-AI
> where all logic was discovered and implemented by humans...who sometimes
> labelled things badly, such as calling a certain type of transitory memory
> "cheese" because they were thinking of cheese when they wrote that memory
> manager.
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>