The Inescapable Conclusion: Machine Learning Is Not Like Your Brain

January 20, 2023

The final article in this nine-part series summarizes the many reasons why Machine Learning is not like your brain – along with a few similarities. Hopefully, these articles have helped to explain the capabilities and limitations of biological neurons, how these relate to ML, and ultimately what will be needed to replicate the contextual knowledge of the human brain, enabling AI to attain true intelligence and understanding.

Examining Machine Learning alongside the biological brain leads to the inescapable conclusion that ML is not very much like a brain at all. In fact, the only similarity is that a neural network consists of things called neurons connected by things called synapses. Otherwise, the signals are different, the timescales are different, and the algorithms of ML are impossible in biological neurons for a number of reasons.

Neurons are so slow relative to a computer that the way they work must be fundamentally different. True, there are many neurons and they operate largely in parallel. But some processes, such as vision and hearing, are necessarily serial, and the neuron’s slowness puts hard limits on the number of processing stages possible. If you can see or hear something and react to it in a fraction of a second, the number of processing “layers” in the brain is bounded by how quickly each layer can complete its work. The slow speed of processing also means that the huge training sets common in Machine Learning are implausible in a biological setting.
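
To make the timing argument concrete, here is a rough back-of-the-envelope calculation in Python. The specific figures (about 10 ms per neuron stage and a 250 ms reaction time) are illustrative assumptions, not numbers taken from this series:

```python
# Rough upper bound on the number of serial processing stages that fit into a
# human reaction time. The figures below are illustrative assumptions.

neuron_stage_latency = 0.010   # assume ~10 ms for one neuron "layer" to integrate and fire
reaction_time = 0.250          # assume ~250 ms to see or hear something and react to it

max_serial_stages = reaction_time / neuron_stage_latency
print(f"At most ~{max_serial_stages:.0f} serial stages")   # ~25, far fewer than many deep ANNs use
```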

The orderly layer structure of artificial neural networks is essential to their operation. If signals from downstream layers are allowed to loop back to earlier layers, the fundamental backpropagation algorithm breaks down because its gradient-descent surface is no longer constant. Such looping connections give a network a degree of internal memory, so it can produce different results from identical inputs depending on its internal state. This is attractive for more human-like thought processes, but problematic for an ANN because backpropagation has no way of “knowing” what the internal state is.
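
A minimal sketch of the effect, assuming nothing more than a single unit with an arbitrary feedback weight (not any particular published architecture): once a looping connection gives the unit internal state, the identical input no longer yields the identical output.

```python
import numpy as np

# A single unit with a feedback ("looping") connection. Because part of its input
# is its own previous output, identical external inputs can produce different
# outputs depending on the internal state; backpropagation cannot see that state.

w_in, w_back = 0.8, 0.5   # input weight and feedback weight (arbitrary values)
state = 0.0               # internal state carried from one step to the next

def step(x):
    global state
    out = float(np.tanh(w_in * x + w_back * state))  # output depends on input AND state
    state = out                                      # the output loops back as future input
    return out

print(step(1.0))   # ~0.66 on the first presentation of the input
print(step(1.0))   # ~0.81 for the identical input, because the internal state changed
```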

The algorithm of the perceptron is different from, and incompatible with, what we know about biological neurons. The basic summation performed by perceptrons doesn’t work for neurons except in rare instances. Neurons have characteristics such as a “refractory period” that cause them to miss incoming spikes, which leads to erroneous summations. This series previously showed that the summation of 0.6 + 0.1 typically yields 0.6 (not the 0.7 that simple addition would suggest). In fact, the only situation where the summation is reliable is in networks that are too slow to be useful. In general, the idea that the value in a perceptron represents the spiking frequency of a biological neuron simply doesn’t work.
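
Here is a minimal simulation sketch of that failure. It assumes values are encoded as spike counts in a ten-slot window and that coincident spikes are lost to the refractory period; the model is deliberately crude and is not the exact one used earlier in the series:

```python
import numpy as np

# Crude sketch of spike-based "addition". Values are spike counts in a 10-slot
# window: 0.6 -> 6 spikes, 0.1 -> 1 spike. The output neuron can fire at most once
# per slot, so when two input spikes land in the same slot the second falls in the
# refractory period and is lost.

rng = np.random.default_rng(0)
SLOTS = 10

def spike_train(value):
    train = np.zeros(SLOTS, dtype=bool)
    train[rng.choice(SLOTS, int(round(value * SLOTS)), replace=False)] = True
    return train

def spiking_sum(a, b):
    # one output spike per slot that received any input spike
    return np.logical_or(spike_train(a), spike_train(b)).sum() / SLOTS

results = np.array([spiking_sum(0.6, 0.1) for _ in range(10_000)])
print((results == 0.6).mean())   # the "sum" comes out as 0.6 most of the time
print(results.mean())            # and the average stays well below the expected 0.7
```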

Machine Learning relies on reasonably precise neuron values and synapse weights, neither of which is plausible in a biological setting. This series demonstrated that the more precisely a neuron value must be represented, the slower each network layer has to run. A reasonable estimate of the number of different values a neuron can represent is about 10, not the precise floating-point numbers typical of ANNs.
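
A back-of-the-envelope calculation shows the tradeoff, assuming values are encoded as spike counts within a time window and a maximum firing rate of roughly 250 Hz (an assumption chosen for illustration):

```python
# If a neuron's value is encoded as a spike count inside a time window, then
# distinguishing N different values needs a window long enough for N spikes at the
# neuron's maximum firing rate. The 250 Hz ceiling is an illustrative assumption.

max_firing_rate = 250   # spikes per second (assumed)

for levels in (10, 100, 256):
    window_ms = levels / max_firing_rate * 1000
    print(f"{levels:>3} distinguishable values -> ~{window_ms:.0f} ms per layer")

# 10 values needs ~40 ms per layer; 256 values needs over a second per layer,
# far too slow for perception that completes in a fraction of a second.
```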

Setting precise synapse weights is even worse. While some theoretical approaches to setting synapse weights might be useful, the observed biological data show a high degree of randomness in synapse weights. The randomness is so great, in fact, that it is logical to conclude that synapses are essentially digital, representing a weight of 0 or 1, with any intermediate values indicating only the confidence that the value is correct and/or how easily that particular data item might be retained or forgotten.

The biggest problem with the idea that Machine Learning is like your brain is that backpropagation needs to set specific synapses to specific weights, and as far as we know there is no biological mechanism by which this is possible. Synapse weights change in response to near-concurrent spiking of the neurons they connect, and the infrastructure needed to set any one specific synapse would itself require several neurons, which would obviate the value of storing information in synapse weights at all.
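
The mismatch can be sketched in a few lines. The Hebbian-style rule below is a generic stand-in for coincidence-based synaptic change, not the series’ specific model:

```python
# Contrast between how backpropagation sets a weight and how a biological synapse
# can change. Backprop applies a precise, targeted adjustment to one specific
# synapse; a coincidence (Hebbian-style) rule can only nudge the weight when the
# two connected neurons fire at nearly the same time.

def backprop_update(w, gradient, lr=0.1):
    # requires an externally computed error gradient for this exact synapse
    return w - lr * gradient

def hebbian_update(w, pre_spiked, post_spiked, lr=0.1):
    # only local information: did the pre- and post-synaptic neurons spike together?
    if pre_spiked and post_spiked:
        return min(w + lr, 1.0)      # near-concurrent spiking strengthens the synapse
    return w                         # otherwise the weight stays where it is

print(backprop_update(0.4, gradient=-0.8))   # ~0.48: adjusted precisely by the error signal
print(hebbian_update(0.4, True, True))       # 0.5: strengthened by coincident spikes
print(hebbian_update(0.4, True, False))      # 0.4: no coincidence, no change
```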

Granted, the more recent field of neuromorphic computing has made some progress in the right direction, but for the most part it still relies on backpropagation and on setting specific synapse weights, neither of which is biologically plausible.

Taken together, these points show that while machine learning has made some remarkable advances, it has very little to do with the way your brain works. This is why we are pursuing the development of a self-adaptive graph structure, a system that has been shown to be possible in neurons and that could be representative of how artificial general intelligence might be implemented.

Charles Simon is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will the Computers Revolt?: Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, an AGI research software platform. For more information, visit here.
