Neurons Are Slow! - Machine Learning Is Not Like Your Brain #1

Captions
Artificial intelligence is not all that intelligent. While today's AI can do some extraordinary things, the functionality underlying its accomplishments has very little to do with the way a human brain works to achieve the same tasks. A biological brain understands that physical objects exist in a 3D environment subject to various physical attributes. It interprets everything in the context of everything else it has previously learned. In stark contrast, AI – and specifically machine learning (ML) – analyzes massive data sets looking for patterns and correlations without understanding any of the data it is processing. A Machine Learning system requiring thousands of tagged samples is fundamentally different from the mind of a child, which can learn from just a few experiences of untagged data. Even the recent "neuromorphic" chips rely on capabilities that are absent in biology. This is just the tip of the iceberg. The differences between ML and the biological brain run much deeper, so for today's AI to overcome its inherent limitations and evolve into its next phase (Artificial General Intelligence), we should examine the differences between the brain, which already implements general intelligence, and its artificial counterparts. With that in mind, this series will explain in progressively greater detail the capabilities and limitations of biological neurons and how these relate to Machine Learning. Doing so will shed light on what ultimately will be needed to replicate the contextual understanding of the human brain, enabling AI to attain true intelligence and understanding.

To understand why machine learning is not much like your brain, you need to know a little about how brains work and about their primary component: the neuron. Neuron cells are complex, electro-chemical devices, and this brief description just scratches the surface. I will be as succinct as possible, but bear with me, and then we'll get back to Machine Learning.
The neuron has a cell body and a long axon, which in your brain has an average length of about 10 mm – huge for a biological cell. The axon connects via synapses to other neurons. Each neuron accumulates an internal charge (the "membrane potential") which can change. If the internal charge reaches a threshold, the neuron will "fire." The neuron's internal charge can be modified through the addition of neurotransmitter ions, which can increase or reduce the internal charge depending on the sign of the neurotransmitter ion. These neurotransmitters are contributed from incoming synapses when the connected neurons fire, in an amount we'll call the "weight" of each synapse. As the weight changes, the synapse contributes more or fewer ions to the target neuron's internal charge. When the neuron fires, it sends a neural spike down its axon and contributes neurotransmitters through each synapse to each connected neuron. After firing, the voltage returns to its resting state and the neurotransmitters return to their initial locations to be reused. This is significantly different from the process used by the theoretical perceptron which underlies most Machine Learning systems. I'll explore these differences in a subsequent video. Perceptrons are great for some calculations, while neurons excel at others.

Neurons are slow, slow, slow, with a maximum firing rate of about 250 Hz. Not kHz, not MHz or GHz, but Hz! That works out to a maximum of one spike every 4 ms. In comparison, transistors are many millions of times faster. The neural spike itself is about 1 ms long, but during the 3 ms reset period that follows, the neuron cannot fire again and incoming neural spikes are ignored. This will prove significant when we consider one principal difference between the neuron and the perceptron: the time of arrival of signals at the perceptron does not matter, while the timing of spikes in the biological neuron is vitally important.
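The accumulate-and-fire behavior described above can be sketched in a few lines of code. This is only an illustrative model of the transcript's description, not a biologically calibrated simulation: the threshold, the 1 ms time step, and the 3-step refractory period are assumed values chosen to mirror the numbers in the text.

```python
def simulate_neuron(incoming_weights, threshold=1.0, refractory_steps=3):
    """Accumulate synaptic input once per 1 ms step; fire when the internal
    charge reaches the threshold, then reset and ignore input during the
    refractory period. Returns the time steps at which the neuron fired."""
    charge = 0.0        # the "membrane potential", starting at rest
    refractory = 0
    spikes = []
    for t, w in enumerate(incoming_weights):
        if refractory > 0:
            refractory -= 1   # incoming spikes are ignored while resetting
            continue
        charge += w           # synaptic contribution; may be positive or negative
        if charge >= threshold:
            spikes.append(t)  # the neuron "fires"
            charge = 0.0      # charge returns to its resting state
            refractory = refractory_steps

    return spikes

# With a steady 0.5-weight input per step, the neuron fires, then must
# wait out the refractory window before it can accumulate charge again.
print(simulate_neuron([0.5] * 8))
```

Note how the refractory window, not the input, sets the pace: no matter how strong the incoming signal, the model cannot fire more often than once every 4 steps (1 ms spike plus 3 ms reset), mirroring the ~250 Hz ceiling described above.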
The reason neurons are so slow is that they are not electronic but electro-chemical. They rely on transporting or reorienting ionic molecules, which is much slower than electronic signaling. Neural spikes travel along axons from one neuron to the next at a leisurely 1-2 m/s, slightly faster than walking speed, while electronic signals travel at nearly the speed of light. Getting back to Machine Learning: if we imagine that it takes 10 neural pulses to establish a firing frequency (more on this in the next video), then it takes 40 ms just to represent a value. A network with 10 layers takes nearly half a second for any signal to propagate from the first layer to the last. Taking into account additional layers for basic visual processing like boundary detection, the brain must be limited to fewer than 10 stages of processing. The neuron's slowness also makes the approach of learning through many presentations of thousands of training samples completely implausible. If a biological brain can process 1 image per second, the 60,000 images of the MNIST dataset of handwritten digits would take 1,000 minutes, or about 16.7 hours, of dedicated concentration. How long would it take you to learn these symbols? Ten minutes? Obviously, something really different is going on here. The rest of the videos in this series will explain the details. Be sure to subscribe to be notified when more videos in this series are available, and for now, watch this interesting video!
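The back-of-the-envelope arithmetic above can be checked directly. Every figure here comes from the transcript itself (4 ms per spike, 10 pulses to establish a firing frequency, a 10-layer network, and 60,000 MNIST images at 1 image per second):

```python
# One spike every 4 ms (max firing rate ~250 Hz)
ms_per_spike = 4
# Spikes assumed needed to establish a firing frequency
pulses_per_value = 10

# Time to represent a single value at one layer
per_layer_ms = ms_per_spike * pulses_per_value
print(per_layer_ms, "ms per layer")          # 40 ms

# Propagation through a 10-layer network
layers = 10
propagation_ms = per_layer_ms * layers
print(propagation_ms, "ms end to end")       # 400 ms: "nearly half a second"

# One pass over MNIST at 1 image per second
mnist_images = 60_000
total_minutes = mnist_images / 60
total_hours = total_minutes / 60
print(total_minutes, "minutes =", round(total_hours, 1), "hours")
```

The 400 ms figure is per-signal latency, not throughput, but it illustrates why a brain built from 4 ms components cannot plausibly run the deep, many-epoch training loops that Machine Learning relies on.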
Info
Channel: Future AI Society
Views: 13,460
Id: pPP5JpPP4sU
Length: 6min 46sec (406 seconds)
Published: Tue May 03 2022