Dendrites: Why Biological Neurons Are Deep Neural Networks

Published 2023-01-29
Keep exploring at brilliant.org/ArtemKirsanov/
Get started for free, and hurry—the first 200 people get 20% off an annual premium subscription.

My name is Artem, and I'm a computational neuroscience student and researcher. In this video we will see why individual neurons essentially function like deep convolutional neural networks, equipped with insane information-processing capabilities, as well as some of the physiological mechanisms that account for such computational complexity.
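
As a point of reference for the perceptron chapter in the outline below, here is a minimal sketch (mine, not from the video) of the textbook perceptron: a weighted sum of inputs passed through a hard threshold. The weights, bias, and the AND example are purely illustrative.

```python
import numpy as np

def perceptron(x, w, b):
    """Classic perceptron: weighted input sum passed through a hard threshold."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative weights that make the unit compute logical AND of two binary inputs.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))

# No choice of w and b lets a single unit like this compute XOR, which is why
# the report that single human neurons can act as XOR gates (chapter 12:10,
# reference 5) is so striking.
```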

Patreon: www.patreon.com/artemkirsanov
Twitter: twitter.com/ArtemKRSV

OUTLINE:
00:00 - Introduction
01:42 - Perceptrons
03:43 - Electrical excitability and action potential
07:12 - Cable theory: passive dendrites
09:03 - Active dendritic properties
12:10 - Human neurons as XOR gates
19:11 - Single neurons as deep neural networks
22:32 - Brilliant
23:57 - Recap and outro

REFERENCES (in no particular order):
1. Bicknell, B. A. & Häusser, M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron (2021) doi:10.1016/j.neuron.2021.09.044.
2. Larkum, M. Are dendrites conceptually useful? Neuroscience (2022) doi:10.1016/j.neuroscience.2022.03.008.
3. Polsky, A., Mel, B. W. & Schiller, J. Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience 7, 621–627 (2004).
4. Tran-Van-Minh, A. et al. Contribution of sublinear and supralinear dendritic integration to neuronal computations. Frontiers in Cellular Neuroscience 9, 67 (2015).
5. Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83–87 (2020).
6. London, M. & Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 28, 503–532 (2005).
7. Branco, T., Clark, B. A. & Häusser, M. Dendritic Discrimination of Temporal Input Sequences in Cortical Neurons. Science 329, 1671–1675 (2010).
8. Stuart, G. J. & Spruston, N. Dendritic integration: 60 years of progress. Nat Neurosci 18, 1713–1721 (2015).
9. Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115–120 (2013).
10. Beniaguev, D., Segev, I. & London, M. Single cortical neurons as deep artificial neural networks. Neuron 109 (2021).
11. Michalikova, M., Remme, M. W. H., Schmitz, D., Schreiber, S. & Kempter, R. Spikelets in pyramidal neurons: generating mechanisms, distinguishing properties, and functional implications. Reviews in the Neurosciences 31, 101–119 (2019).
12. Larkum, M. E., Wu, J., Duverdin, S. A. & Gidon, A. The Guide to Dendritic Spikes of the Mammalian Cortex In Vitro and In Vivo. Neuroscience 489, 15–33 (2022).

CREDITS:
Icons by biorender.com/
Brain 3D models were created in Blender using publicly available BrainGlobe atlases (brainglobe.info/atlas-api)

This video was sponsored by Brilliant

All Comments (21)
  • Neuroscience PhD student here. Having followed this topic, although from a distance, since I am more into molecular neurobiology, I must say you did a very good job of explaining this very interesting topic. I have been in lectures from some of the researchers you included and I have to say I understood way more things from you than I did before. Thank you!
  • I have a PhD from Berkeley in neuro & engineering, currently work in ML, and you break this stuff down really well. I love the digested summary of the paper equating a neuron to a small CNN. Thanks, and keep it up man
  • @mikip3242
    This is crazy-interesting stuff. The whole thing reminds me of the development of the concept of atoms. Atoms were supposedly meant to be the indivisible building blocks of matter from which you could build anything. The Greek concept was adapted to what we now call...well.... atoms. The problem is that atoms are not elementary building blocks; as it turns out, they are complex systems made of multiple more fundamental particles that can be arranged in many, many ways to build many atoms in very different states and configurations. So the real Greek concept of "atom" should be applied to fundamental particles of the standard model, and what we call atoms should be renamed. The same goes for neurons. In AI a "neuron" is just the smallest conceivable operational unit that you can use to build more complicated logic systems. Historically we thought that biological neurons were just that, the smallest indivisible element of logic (a simple lightbulb, a binary switch), but then, like when we split the "atom", we discovered that in fact a biological neuron is really equivalent to an ensemble of simpler logic gates. So the actual "atoms of logic", the neurons in neural networks, should not be confused with neurons in biology.
  • @Gluatamat
    For approximating the XOR function, choosing a Π-shaped activation function instead of a step function lets you solve it with just one single neuron: the input space then gets divided into three parts (- + -). The choice of activation function in an ANN plays a significant role. In the SIREN family of models, periodic sin/cos functions are generally used, which lets you encode the input vectors across a hierarchy of scales at once. (A toy sketch of the single-unit XOR idea appears after the comments.)
  • I’m a fine arts doctorate with only 6 or so credit hours of formal education in science post-high school. You were able to make this topic at least comprehensible for me. Thanks for the engaging format!
  • @mitski3612
    I'm using my neurons to learn about neurons
  • @lucascsrs2581
    Awesome content. It's always amazing to see someone who can explain advanced topics in terms that even those who are not familiar with the field can understand. Fascinating stuff.
  • @gjuhn
    This was fantastic - easy to follow and amazing production quality. I was wondering how you do your animations, but I see you have a video for that too! I want to go through all of them!
  • Your videos are incredibly well made and are always a special joy to watch. Thank you for your high quality content, please keep it up :)
  • @naimneman4216
    that was a really great video! I was engaged every second of it, please keep uploading these topics!
  • Fascinating. Especially the idea that neurons are sensitive to the activation order of signals along their dendrites. I remember seeing a talk long ago where the speaker was discussing which aspects of the brain's (or the neurons') configuration are actually coded for in our genes, and argued that (a) almost none of the neuron-to-neuron connections or connectivity strengths are controlled (as you might have thought they were, if the brain were a giant network of threshold gates), but (b) great care is taken to control, across a large variation of different neuron cell types, precisely where along the dendrite each nerve cell type connects and in what manner (which makes no sense in a threshold gate model). That seems rather more on-point with the model you're describing!
  • @MrMikkyn
    All of the animations on this are amazing. I love them. They make neurons look so colourful and exciting, and it's really engaging. I'm still a beginner at neuroscience, but I really like both topics of neural networks in machine learning and computational neuroscience, so this video is perfect for me. I will definitely rewatch it.
  • @HeduAI
    This content is exceptional! As an AI engineer with limited biology training, I previously felt so overwhelmed by the complex terminology and extraneous information in the realm of neuroscience. Then I struck gold by discovering your channel. Thank you!
  • @richtigmann1
    This is so interesting! I love how it combines and compares neuroscience and computer science. The production quality is so high
  • I just cannot explain the fascination, love, and inspiration that this video delivers.
  • @AffectiveApe
    Once again, stellar communication and animation work!
  • @Calvinizo
    I just found this channel, watched a few videos fully, and I'm absolutely hooked and subscribed right away, keep it up man! - The way you explain it is very easy to understand, even for outsiders!
  • @13lacle
    Great video! It's cool that single neurons can act as their own mini deep neural networks. I think it is worth clarifying that even though biological neurons are not equivalent to perceptron neurons in function, all this does is move the perceptron to a sub-part of the neuron. Meaning that the fundamental principle is the same (i.e. a network of sub-networks is still a single network on the whole). (A rough sketch of this two-layer view also appears after the comments.)
  • @keyyyla
    Your animations are crazy. Keep up the great work!
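
To make @Gluatamat's comment above concrete, here is a minimal sketch (my illustration, not the commenter's code) of a single unit that computes XOR once the step function is swapped for a Π-shaped (bump) activation. The unit weights and the window bounds 0.5 and 1.5 are assumptions chosen for the example.

```python
def pi_activation(s, low=0.5, high=1.5):
    """Π-shaped (bump) activation: fires only when the summed input lies inside a window."""
    return 1 if low < s < high else 0

def single_unit_xor(x1, x2):
    # Plain weighted sum with unit weights; the bump activation then carves the
    # summed-input axis into three regions (- + -), which is exactly XOR.
    s = 1.0 * x1 + 1.0 * x2
    return pi_activation(s)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, single_unit_xor(*x))  # -> 0, 1, 1, 0
```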
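
In the spirit of @13lacle's remark about the perceptron simply moving into a sub-part of the neuron, here is a rough sketch of a toy two-layer abstraction: each dendritic branch acts as its own nonlinear subunit, and the soma combines the branch outputs. The sigmoid branch nonlinearity, the weights, and the threshold are illustrative assumptions, not parameters taken from any of the referenced papers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_neuron(branch_inputs, branch_weights, soma_weights, soma_threshold=1.0):
    # Layer 1: each branch applies its own nonlinearity to its local synaptic input.
    branch_outputs = np.array([sigmoid(np.dot(w, x))
                               for w, x in zip(branch_weights, branch_inputs)])
    # Layer 2: the soma linearly combines the branch outputs and thresholds the result.
    return int(np.dot(soma_weights, branch_outputs) > soma_threshold)

# Illustrative call: two branches with three synapses each.
branch_inputs  = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])]
branch_weights = [np.array([0.8, 0.5, 0.7]), np.array([0.6, 0.9, 0.4])]
print(two_layer_neuron(branch_inputs, branch_weights, soma_weights=np.array([1.0, 1.0])))
```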