21. Chaos and Reductionism

Published 2011-02-01
(May 19, 2010) Professor Robert Sapolsky gives what he calls "one of the most difficult lectures of the course," about chaos and reductionism. He references a book that he assigned to his students. This lecture focuses on reductive science and on breaking things down to their component parts in order to understand them best.

Stanford University:
www.stanford.edu/

Stanford Department of Biology:
biology.stanford.edu/

Stanford University Channel on YouTube:
youtube.com/stanford

All Comments (21)
  • I am 75. Due to circumstances, I was not able to go on to higher education (college). I listen to this wonderful professor and deeply regret this fact of my life. I could listen endlessly, to the way he recreates history. Thanks to all who have made this possible.
  • @jeanbastien9424
    Since people keep asking, the book he mentions at the beginning is Chaos by James Gleick.
  • If anyone is interested, the book Professor Sapolsky mentions is "Chaos: Making a New Science" by James Gleick. It introduces the principles and early development of chaos theory in a general way, without complicated mathematics. As an interesting aside: Professor Sapolsky has delivered the line about it being the first book he'd finished and started again since "Baby Beluga," and in other years since "Where The Wild Things Are." I guess the reason for swapping the title is to stay relatable to current students. 😊
  • @Ryan-on5on
    Dr. Sapolsky's YouTube lectures are truly the most emotionally inspiring, intellectually stimulating, and flat-out entertaining free academic content out there on ANY subject or discipline. This gifted man has a knack for explaining complicated ideas and issues in the most basic and relatable of ways. Such a rhetorical skill is a talent many intelligent minds in the academy sorely lack, much to the detriment of their students' class performance. Thank you, Stanford administration, for making freely available to the public a window into the minds of your most brilliant faculty.
  • @funkygrow8738
    I can’t believe people still pay for education. This is just phenomenal information for free. I am literally a high school dropout in my mid-30s, and I watch Ivy League lectures all day. This is fricking amazing. This is how humanity is meant to be. Stanford is killing it. I’ve seen this guy in documentaries too, and his ability to convey information is truly a blessing.
  • Seeing how fascinated Sapolsky is by the things he is explaining is really beautiful.
  • @jasonqlwilliams
    My autoplay always brings me to these lectures, in order. I fall asleep to Alan Watts every night and wake up multiple times during interesting parts of the lectures; my dreams have been very interesting.
  • @udderhippo
    Fantastic lecture. I really appreciate that the prof repeats an idea in slightly different words, just in case the first time didn't sink in. Great teacher. Thanks to Stanford for making all these excellent videos readily available.
  • @Agorante
    What makes a good teacher? I'll tell you now. I've taught a lot of classes in a lot of subjects. The single most important factor is whether the teacher actually understands the subject. Most high school and college teachers have only a surface understanding of their subject, which means they have a stereotyped presentation of the material: usually they just explain a question the same way it was explained to them. Sapolsky is a really good teacher because, for almost everything he presents, he has figured it out for himself. That doesn't mean he is right about everything, but he has internalized everything he talks about.
  • This kinda stuff should be taught in public school, IMO. Can you imagine how different some people would be if they were exposed to these different perspectives at a younger age? The world could be so different if people considered things from a more systematic view, a non-linear view, a non-periodic view, etc. It’s super fucking important to gain different perspectives, IMO, and I think that’s in painfully short supply throughout the vast majority of society. We’d all be better off if we realized our intuitions are so often wrong, that we cannot understand something without a thorough and nuanced examination, and that we need a wide array of perspectives to come to a more objective understanding. Whoever was a part of making this available on YouTube, you’re doing a beautiful thing, IMO. Thank you, truly.
  • @jaceybenton
    This feels timeless. I would love to be his student. His way with our language is immense. I would love to think and speak like this.
  • @neilanderson891
    Dr. Sapolsky refers to a book assigned to his students: "Chaos" by James Gleick. Chaos is the name given to a phenomenon that often occurs when a human tries to model (or simulate) a system run by Mother Nature, such as a weather model (to predict the weather) or the three-body problem in physics (to predict the positions and velocities of three celestial bodies some days, months, or years after the starting point, t = 0). For clarity here: Mother Nature runs "systems" (such as the weather system or the solar system), and humans run "models" that hopefully mimic or simulate such a system.
    Models are usually moved forward in discrete increments (seconds, minutes, days, months, etc.). Mother Nature seems to move her systems forward continuously, not incrementally, though she too might be limited to increments if there is such a thing as the Planck time unit. A model's initial conditions are used to calculate the new conditions one cycle (or step, or period) into the future, and those new conditions become the initial conditions for the next cycle. If errors due to rounding or truncation accumulate, that is probably a chaotic model. (Alternatively, a model could be built from simultaneous equations, if there were enough equations to account for all the variables, but as far as I know nothing like that has yet been successful.)
    The problem is that the results are highly dependent on initial conditions. If the initial conditions are changed slightly, the end results change drastically, much as if there were an element of "chaos" in the calculations. Vary a chaotic model's starting conditions by a very small amount and its predictions will be vastly different only a few cycles or periods into the (surprisingly near) future. Chaotic models always (as far as I know) make period-by-period predictions, in which each cycle uses the previous cycle's predictions as its starting point, so any small change in the initial conditions is compounded in future cycles; such models are self-referential in that sense. Is Mother Nature's system also highly dependent on initial conditions? How would we ever know?
    When modeling the three-body problem, you start with each body's exact mass, position, and velocity (which of course includes direction of travel), calculate the instantaneous forces acting on each body, and predict each body's position and velocity an hour later, or a minute later, or a second later. This method can easily require trillions of calculations to move the model significantly into the future, and it requires rounding interim results, so each step into the future is merely an estimate that becomes the new starting conditions for the next step.
    Thus "chaos" might be an artifact of our discrete-cycle, rounded-results methodology, a literally "un-natural" way of making predictions, unless Mother Nature's system is also subject to chaos.
  • 10:00 - Reductionism.
    19:19 - The failures of reductionism in cognition and neuropsychology. Why? Higher processing (e.g. learning) plays out in semi-randomly built neurological and cortical networks through synaptic plasticity (Hebbian theory).
    33:07 - Negation of reductionist theory in all bifurcating systems. Neurological pathways branch out as bifurcating systems, so higher complex cognition operates through bifurcating (chaotically built) systems in the cortex; no higher cognitive process looks neurologically the same in two people, even when they process the same task (twin studies).
    35:53 - "You can't code for bifurcating systems via genes or 'grandmother neurons'; there simply aren't enough neurons or genes to do that." Reductionism fails miserably at this attempt. Hubel and Wiesel predicted that, and presumably changed the direction of their research rather than confront their own reductionist theory as applied to the higher cortical layers.
    42:33 - Westernized reductionist theory is not applicable in biology as a "theory of everything." Other means of understanding have to be brought to the forefront in order to understand the emergence and complexity of bifurcating systems.
    43:46 - Chaotic systems: non-linear and non-additive systems.
    58:00 - The discovery of chaos. When enough force is applied to a structured, reductive, linear system, there is eventually a transition point where it suddenly goes from a highly factored, predictable system with many variables to an unpredictable chaotic system with no periodicity and no repeating patterns (see the sketch after this list).
    59:02 - Yorke's prediction of the chaotic transition: any time you see periodicity of an odd number, you are guaranteed to have entered chaotic terrain. As soon as a system shows the first evidence of having three components (periods) before the pattern repeats, it is about to disappear into chaos, a pattern that never repeats.
    1:00:17 - The fallacy of ignoring chaotic systems in science, especially in biology.
    1:02:30 - Mathematical understanding of tendencies in chaotic vs. predictable systems. Linear systems have attractor points: when you perturb such a system, it equilibrates and returns to a stable, exact state (number). Chaotic systems are pulled by a "strange attractor," creating the butterfly-wing pattern in a coordinate system; the numbers oscillate around the strange attractor but never exactly hit the "perfect state" before shifting direction.
    1:16:09 - Explanation of a fractal.
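    A rough Python sketch of the transition described at 58:00 and the periodic attractors described at 1:02:30 (an illustration added here, not from the lecture), again using the logistic map; the r values below are standard textbook choices, not figures from the lecture:

        # Apply increasing "force" r to the logistic map and report which states
        # the system keeps visiting once transients have died out.
        def long_run_states(r, x0=0.5, warmup=1000, keep=8):
            x = x0
            for _ in range(warmup):          # discard the transient approach to the attractor
                x = r * x * (1.0 - x)
            seen = []
            for _ in range(keep):            # record the states the system settles into
                x = r * x * (1.0 - x)
                seen.append(round(x, 4))
            return seen

        for r in (2.8, 3.2, 3.5, 3.9):
            states = long_run_states(r)
            print(f"r = {r}: {len(set(states))} distinct state(s): {states}")

        # r = 2.8 settles onto a single attractor point, r = 3.2 onto a repeating
        # 2-cycle, r = 3.5 onto a 4-cycle, and at r = 3.9 there is no short
        # repeating pattern: the chaotic regime.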
  • @jameshicks7125
    Thanks to Sapolsky and Stanford for these lectures. I have learned so much from them. The most notable thing I can remark on is that, by learning about neurobiology and neuroscience through Sapolsky, living with major depression has gone from feeling like something delusional, akin to an external agency meddling in my life, to the understanding that 'Oh, my brain is doing this again.' It has really helped create some psychological and psycho-epistemic space to manage it better and develop better tools.
  • @TheFusedplug
    Man, I wish I were going to university in the USA just to learn from this professor. On another note, I'm 47 but have more hunger for learning than I did in my late teens and 20s. I have an enormous sense of the value of knowledge now.
  • @psychiaTree
    This is the most amazing thing on YouTube.
  • @HToothrot
    The article he references at the end is: "Reductionism and variability in data: a meta-analysis," Perspectives in Biology and Medicine, 1996, Vol. 39, Issue 2, pp. 193-203. If that's right, then the crazy obsessive undergrad who did most of the work is called Steven Balt. What a legend.
  • @choeyoonsun1
    Humanities and science come together in his lectures, truly enlightening and entertaining.
  • @ArmenTeKortvzw
    Both the awesome professor and this Stanford initiative give us hope that one day knowledge will become common rather than the object of monopolistic licensing.