[ExI] Toy brain can count to five

Stuart LaForge avant at sollegro.com
Wed Jun 12 01:02:17 UTC 2019


In order to get a feel for the capabilities of machine intelligence, I  
have been playing around with a neural network platform called Simbrain.

Simbrain is an awesome piece of free software that lets you play with
encapsulated graphical neurons on your computer screen. You can create
neurons with the push of a button and wire them together with your
mouse. It's like a Lego set or Tinkertoys for AI and machine learning:
relatively intuitive to use, and requiring very little in the way of
coding or math to get up and running designing toy brains.

It is written in Java, so it is platform independent, and it can be
downloaded here:

https://www.simbrain.net/

Ok, so inspired by all the research being done on the numeracy and
math skills of honeybees, I designed a 55-neuron brain as a network
with 5 input neurons, 45 neurons in 3 hidden layers, and an output
layer with 5 neurons labelled "1" through "5". I then set about seeing
whether I could use the back propagation algorithm to train my tiny
brain to count to five.
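
For anyone who would rather see the idea as code than click around in
Simbrain, here is roughly what that architecture looks like sketched
out in plain Python with NumPy. I'm assuming 15 sigmoid neurons in
each of the 3 hidden layers (3 x 15 = 45) and small random starting
weights; the exact split and activation function in my Simbrain file
may differ, so treat it as a sketch rather than an exact copy:

import numpy as np

rng = np.random.default_rng(0)
sizes = [5, 15, 15, 15, 5]   # 5 inputs, 3 hidden layers of 15, 5 outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_weights():
    # Small random weights and zero biases, one set per layer of connections.
    W = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    b = [np.zeros(n) for n in sizes[1:]]
    return W, b

def forward(W, b, x):
    # Returns the activations of every layer, input layer included.
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wi + bi))
    return acts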

Basically I feed it a training set consisting of examples of all the
possible ways that the 5 input neurons can be lit up. For example, I
input all 5 ways that 1 input neuron out of the 5 can be lit up,
i.e. the 1st neuron can be lit, or the 2nd, or the 3rd, etc., and
associate each of those patterns with the output neuron labelled with
the numeral "1". I do the same thing for all the ways that 2 out of 5
input neurons can be lit up and associate those with the output
neuron labelled "2", and so on, all the way up to all 5 input neurons
lit up. (There are 10 ways to make 2 out of 5 neurons light up, 10
ways to make 3 out of 5 neurons light up, 5 ways to make 4 neurons
light up, and just 1 way to make all 5 input neurons light up.)
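
The whole training set is small enough to enumerate outright.
Continuing the Python sketch above, building it would look something
like this, with itertools supplying every way of choosing which k of
the 5 input neurons get lit:

import itertools

patterns, targets = [], []
for k in range(1, 6):
    for combo in itertools.combinations(range(5), k):
        x = np.zeros(5)
        x[list(combo)] = 1.0    # light up those k input neurons
        y = np.zeros(5)
        y[k - 1] = 1.0          # one-hot target: the output neuron labelled str(k)
        patterns.append(x)
        targets.append(y)

X = np.array(patterns)          # 5 + 10 + 10 + 5 + 1 = 31 patterns, shape (31, 5)
Y = np.array(targets)           # shape (31, 5)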

So then I use back propagation to train the network for about 20,000
iterations, randomizing the weights and activations a few times to
escape from local minima and get the error rate down below 1%.
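
Simbrain handles the back propagation for you, but spelled out by
hand the training loop is roughly the following. The learning rate,
restart schedule, and stopping threshold here are my own guesses for
the sketch, not the actual settings from my Simbrain runs:

def train(X, Y, iters=20000, lr=0.5):
    W, b = init_weights()
    for it in range(iters):
        acts = forward(W, b, X)
        err = acts[-1] - Y
        if np.mean(err ** 2) < 1e-4:      # stop once error is well below 1%
            break
        if it and it % 5000 == 0 and np.mean(err ** 2) > 0.05:
            W, b = init_weights()         # re-randomize to escape a local minimum
            continue
        delta = err * acts[-1] * (1 - acts[-1])   # output layer error signal
        for i in reversed(range(len(W))):         # back-propagate layer by layer
            grad_W = acts[i].T @ delta / len(X)
            grad_b = delta.mean(axis=0)
            if i > 0:
                delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
            W[i] -= lr * grad_W
            b[i] -= lr * grad_b
    return W, b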

And then I tested it, and it worked like a charm. It could seemingly
associate the presence of between one and five objects (in this case,
activated input neurons) with the specific output neuron that
symbolized that number. Sort of like training a child to point to the
numeral representing the number of blocks you had set down before him
or her.
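
In terms of the sketch, the test is just a forward pass: feed in a
pattern, see which output neuron has the strongest activation, and
compare it to the true count:

W, b = train(X, Y)
predicted = forward(W, b, X)[-1].argmax(axis=1) + 1   # index 0..4 -> label "1".."5"
actual = X.sum(axis=1).astype(int)                    # the true count of lit neurons
print("accuracy:", (predicted == actual).mean())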

So that was mildly interesting but not particularly impressive,
because I had literally given it every possible way to make those
input neurons light up, along with the enumerated output neurons
those patterns corresponded to. Programming a computer to do that
would be trivial. My tiny brain could simply be memorizing the data
without actually thinking about it or understanding the concept, like
a simple lookup table.

So then I went through my training data and pruned away various
patterns so that I could test the brain against patterns it had never
seen before. I trained my brain with this pared-down data and then
tested it against patterns of lit-up input neurons it hadn't been
trained on. It nonetheless did quite well at counting the activated
neurons in patterns it had never seen, even with the training set cut
down to almost half its original size. Taking out 4 of the patterns
where 2 of the 5 input neurons were activated reduced the activation
of the neuron that represented "2", but it still got the right
answer. The fewer training examples it had of a given number, the
less able the toy brain was to figure out what that number was.
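
A rough way to reproduce the pruning experiment in the same sketch is
to hold out, say, 4 of the ten "2 out of 5" patterns, retrain on what
remains, and then quiz the network on the held-out patterns (which
patterns I actually pruned varied from run to run, so the selection
here is just illustrative):

held_out = [i for i, x in enumerate(X) if x.sum() == 2][:4]   # 4 of the ten "2 of 5" patterns
keep = [i for i in range(len(X)) if i not in held_out]
W, b = train(X[keep], Y[keep])
for i in held_out:
    out = forward(W, b, X[i:i+1])[-1][0]
    print(X[i], "->", out.argmax() + 1, "  max activation:", round(float(out.max()), 3))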

For example, if I removed from the training set the one way that all
5 input neurons can be lit up at once, then upon being presented with
that pattern of all 5 lit up, it did not even try to infer that the
novel pattern might correspond to the only previously unused output
neuron (the one labelled "5"). Instead it lit up none of the output
neurons, thus declining to answer.

So my conclusion is that neural networks are pretty damn good at  
deductive reasoning and interpolation but pretty damn lousy at  
induction and extrapolation.

If anybody downloads the software and wants a copy of my toy brain to  
play with, then email me offlist and I will send it as a zip file.

Stuart LaForge




