The Inspiration for Artificial Neural Networks

By Larry Liang, CTO at InetSoft

In this article, continuing our introduction to machine learning, I am going to write a little bit about real neurons and the real brain which provide the inspiration for the artificial neural networks that we are striving to learn about in this series of articles. In most of the discussion, we won't talk much about real neurons, but I wanted to give you a quick overview at the beginning.

There are several different reasons to study how networks of neurons can compute things. The first is to understand how the brain actually works. You might think we could do that just by experimenting on the brain, but it's very big and complicated, and it dies when you poke around in it too much.

And so we need to use computer simulations to help us understand what we are discovering in empirical studies. The second reason is to understand a style of parallel computation that is inspired by the fact that the brain computes with a big parallel network of neurons. If we can understand that style of parallel computation, we might be able to build better parallel computers.

It's very different from the way computation is done on a conventional serial processor. It should be very good at things that brains are good at, like vision, and bad at things that brains are bad at, like multiplying two numbers together. The third reason, which is the relevant one for this discussion, is to solve practical problems by using novel learning algorithms that were inspired by the brain.

These algorithms can be very useful even if they are not actually how the brain works. So in most of this discussion, we won't talk much about how the brain actually works. It's just used as a source of inspiration to tell us that big parallel networks of neurons can compute very complicated things.

So, onwards to how the brain actually works. A typical cortical neuron has a gross physical structure consisting of a cell body, an axon along which it sends messages to other neurons, and a dendritic tree where it receives messages from other neurons. Where an axon from one neuron contacts the dendritic tree of another neuron, there is a structure called a synapse.

A spike of activity traveling along the axon causes charge to be injected into the post-synaptic neuron at the synapse. A neuron generates a spike when it has received enough charge in its dendritic tree to depolarize a part of the cell body called the axon hillock. When the axon hillock gets depolarized, the neuron sends a spike out along its axon, and the spike is just a wave of depolarization that travels along the axon.

Synapses themselves have an interesting structure. They contain little vesicles of transmitter chemical, and when a spike arrives along the axon, it causes these vesicles to migrate to the surface and be released into the synaptic cleft. There are several different kinds of transmitter chemicals: ones that implement positive weights and ones that implement negative weights.

The transmitter molecules diffuse across the synaptic cleft and bind to receptor molecules in the membrane of the post-synaptic neuron. By binding to these big molecules in the membrane, they change their shape, and that creates holes in the membrane. These holes allow specific ions to flow into or out of the post-synaptic neuron, which changes its state of depolarization.

Synapses adapt, and that's what most of learning is, changing the effectiveness of the synapse. They can adapt by varying the number of vesicles that get released when the spike arrives or by varying the number of receptor molecules that are sensitive to the released transmitter molecules.

Synapses are very slow compared with computer memory, but they have a lot of advantages over the random access memory on a computer. They are very small and very low-power, and they can adapt. That's the most important property. They use locally available signals to change their strengths, and that's how we learn to perform complicated computations. The issue, of course, is how do they decide how to change their strength? What are the rules for how they should adapt?
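The article leaves that question open, but as a purely illustrative sketch, one classic idea for a rule that uses only locally available signals is a Hebbian-style update, where a connection gets stronger when the neurons on both sides of it are active at the same time. Everything below, the function name, the learning rate, the activity values, is an assumption made up for illustration; it is not a claim about how real synapses adapt.

```python
# Illustrative sketch only: a Hebbian-style local update rule.
# All names and numbers here are assumptions, not how real synapses work.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.01):
    """Strengthen a connection when the pre- and post-synaptic neurons are
    active together, using only locally available signals: the two
    activities and the current weight."""
    return weight + learning_rate * pre_activity * post_activity

# A synapse whose pre- and post-synaptic neurons fire at the same time
w = 0.5
w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(w)  # 0.51 -- the synapse has become slightly more effective
```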

Each neuron receives input from other neurons. A few of the neurons receive input from sensory receptors; that's a large number of neurons in absolute terms, but only a small fraction of the total. And the neurons communicate with each other in the cortex by sending these spikes of activity.

The effect of each input line on the neuron is controlled by a synaptic weight, which can be positive or negative. The synaptic weights adapt, and by adapting these weights the whole network learns to perform different kinds of computation, for example recognizing objects, understanding language, making plans, or controlling the movements of your body.
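To make that picture concrete, here is a minimal sketch of the artificial analogue: a neuron modeled as a weighted sum of its input lines that fires when the total crosses a threshold. The inputs, weights, and threshold are made-up numbers for illustration, and this simplified model is the inspiration for artificial networks, not a description of a real neuron.

```python
# Minimal sketch of the artificial analogue of a neuron: a weighted sum of
# input lines that "spikes" when the total crosses a threshold.
# The inputs, weights, and threshold are made up for illustration.

def binary_threshold_neuron(inputs, weights, threshold=1.0):
    """Return 1 (a spike) if the weighted sum of the inputs reaches the
    threshold, otherwise 0. Positive weights act like excitatory synapses,
    negative weights like inhibitory ones."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

inputs = [1.0, 0.0, 1.0]      # activity arriving on three input lines
weights = [0.7, -0.4, 0.6]    # excitatory and inhibitory connection strengths
print(binary_threshold_neuron(inputs, weights))  # 1, because 0.7 + 0.0 + 0.6 = 1.3 >= 1.0
```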

You have about 10 to the 11th power neurons, each of which has about 10 to the 4th weights. So you probably have 10 to the 15th, or maybe 10 to the 14th, synaptic weights, and a huge number of these weights, quite a large fraction of them, can affect the ongoing computation within a very small fraction of a second, within a few milliseconds. That is much better bandwidth to stored knowledge than even a modern workstation has.
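As a back-of-the-envelope sketch of those numbers, the snippet below multiplies out the counts quoted above. The neuron and weight counts are the rough orders of magnitude from the paragraph; the fraction of weights involved and the time window are assumptions chosen only to show the shape of the calculation, not measured values.

```python
# Back-of-the-envelope sketch of the numbers above. The neuron and weight
# counts are rough orders of magnitude; the active fraction and time window
# are illustrative assumptions, not measurements.

neurons = 1e11               # roughly 10^11 neurons
weights_per_neuron = 1e4     # roughly 10^4 weights per neuron
total_weights = neurons * weights_per_neuron
print(f"total synaptic weights ~ {total_weights:.0e}")        # ~1e+15

active_fraction = 0.1        # assumption: fraction of weights touched at once
window_seconds = 0.01        # assumption: on the order of a few milliseconds
weights_per_second = total_weights * active_fraction / window_seconds
print(f"weights influencing computation per second ~ {weights_per_second:.0e}")
```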

One final point about the brain is that the cortex is modular, or at least it learns to be modular. Different bits of the cortex end up doing different things. Genetically, the inputs from the senses are routed to different bits of the cortex, and that determines a lot about what those bits end up doing. If you damage the brain of an adult, local damage causes specific effects.

Damage to one place might cause you to lose your ability to understand language, and damage to another place might cause you to lose your ability to recognize objects. We know a lot about how functions are localized in the brain because when you use a part of the brain for a particular task, it requires energy.

So it demands more blood flow, and you can see that blood flow in a brain scanner. That allows you to see which bits of the brain you are using for particular tasks. But the remarkable thing about the cortex is that it looks pretty much the same all over, and that strongly suggests it has a fairly flexible, universal learning algorithm in it. That is also suggested by the fact that if you damage the brain early on, functions will relocate to other parts of the brain. So it's not genetically predetermined, at least not directly, which part of the brain will perform which function.

There are convincing experiments on baby ferrets showing that if you cut off the input to the auditory cortex that comes from the ears and instead re-route the visual input to the auditory cortex, then the auditory cortex, which was designed to deal with sounds, will actually learn to deal with visual input and develop neurons that are very like the neurons in the visual system.

This suggests that the cortex is made of general-purpose stuff that has the ability to turn into special-purpose hardware for particular tasks in response to experience. That gives you a nice combination: rapid parallel computation once you have learned, plus the flexibility to learn new functions.

It's rather like building some standard parallel hardware and then, after it's built, feeding in information that tells it what particular parallel computation to do. Conventional computers get their flexibility by having a stored sequential program, but this requires very fast central processors to access the lines in the sequential program and perform long sequential computations.

Previous: Setting for Applying Machine Learning | Next: Simple Models of Neurons