CBMM10 Panel: Research on Intelligence in the Age of AI

123,574
0
2023-11-20に共有
On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand fundamental principles of learning -- in the sense of theoretical understanding like in physics -- and apply this understanding to real natural and artificial systems? Similar questions concern neuroscience and human intelligence from the society, industry and science point of view.

Panel Chair: T. Poggio
Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever

cbmm.mit.edu/CBMM10

コメント (21)
  • @pablotano352
    The hardest benchmark in current AI is making Ilya laugh
  • @DirtiestDeeds
    Please keep doing these panels - the public needs to hear directly and regularly from the leaders in this field. The ambient noise, hype and huckstering grows more intense by the day.
  • Chapter 1: Introduction and Panelist Introduction (0:00-1:03) - Tomaso Poggio introduces the panel, noting changes due to events in Israel. - Amnon Shashua unable to attend, replaced by Pietro Perona. - Panel comprises three real and three virtual members. Chapter 2: Panel Discussion Objectives (1:03-2:20) - Poggio outlines the main discussion topics: 1. Comparison of large language models, deep learning models, and human intelligence. 2. Interrelation of neuroscience and AI. - Focus on fundamental principles and the 'curse of dimensionality' in neural networks. Chapter 3: Geoff Hinton's Perspective (2:20-7:02) - Hinton discusses neuroscience's impact on AI, particularly the concept of neural networks. - Mentions contributions like dropout and ReLUs from neuroscience. - Notes potential future developments like fast weights. - Suggests that AI developments might not always align with neuroscience insights. - Discusses AI's efficiency and potential surpassing human intelligence. Chapter 4: Pietro Perona's Insights (7:02-13:49) - Perona touches on embodied intelligence and the need for machines to understand causation. - Highlights the challenge in creating AI that can design and interpret experiments. - Discusses the role of theory in AI and the dynamic nature of technology. Chapter 5: David Siegel's Reflections (13:49-21:08) - Siegel emphasizes understanding intelligence as a fundamental human inquiry. - Advocates for a theory of intelligence and its importance beyond commercial applications. - Sees neuroscience and AI as complementary in developing a theory of intelligence. Chapter 6: Demis Hassabis' Contributions (21:08-29:07) - Hassabis discusses neuroscience's subtle influence on AI. - Emphasizes the need for empirical study and analysis techniques in AI. - Suggests independent academic research in AI for better understanding and benchmarking. Chapter 7: Ilya Sutskever's Viewpoints (29:07-34:19) - Sutskever speaks on the role of theory in AI and its relation to neuroscience. - Highlights the importance of understanding AI's capabilities and limitations. - Stresses the need for collaborative research and evaluation in AI. Chapter 8: Panel Discussion on Theory and Empirical Studies (34:19-43:35) - Panel engages in a discussion on the importance of theory, benchmarking, and empirical studies in AI. - Emphasizes the need for a deeper understanding of AI systems and their capabilities. Chapter 9: Audience Q&A and Panel Responses (43:35-End) - Audience members pose questions on various topics, including AI's creativity, neuroscience's contribution to AI, and future developments in AI architecture. - Panelists share their insights, experiences, and speculations on these topics. Chapter 10: Exploring AI-Enabled Scientific Revolution (1:10:05-1:16:17) - Discussion on AI's potential to drive a scientific revolution, particularly in fields like biology and chemistry. - Demis Hassabis emphasizes AlphaFold as an example of AI's contribution to science. - The role of AI in solving complex combinatorial problems and generating hypotheses. - David Siegel reflects on AI's potential in understanding the brain and its complexities. Chapter 11: Panel's Take on AI's Creativity and Originality (1:16:17-1:23:46) - Panelists debate the creative capabilities of current AI systems, specifically large language models. - Question raised about AI's ability to state new, non-trivial mathematical conjectures. - Discussion on different levels of creativity and AI's potential to reach higher levels of invention and out-of-the-box thinking. - Geoffrey Hinton expresses skepticism about AI doing backpropagation through time, and discusses AI's information storage capabilities compared to the human brain. Chapter 12: Breakthroughs in Neuroscience Impacting AI (1:23:46-1:27:17) (continued) - The panel discusses the significance of understanding learning mechanisms in the brain for advancing AI. - Speculation on whether the brain implements a form of backpropagation and its implications for AI. - The importance of identifying and understanding diverse neuron types in the brain and their potential influence on AI development. - The discussion highlights the complex relationship between neuroscience discoveries and AI advancements. Chapter 13: Closing Remarks and Reflections (1:27:17-End) - The panel concludes with reflections on the discussed topics, emphasizing the interplay between AI and neuroscience. - Tomaso Poggio and other panelists summarize key insights, reiterating the potential of AI in advancing scientific understanding and the importance of continuous exploration in both AI and neuroscience fields. - Final thoughts underscore the significance of collaborative efforts and open research in pushing the boundaries of AI and understanding human intelligence.
  • @KaplaBen
    24:10 Great analogy by Demis with the oil / internet data allowing us to sidestep difficult questions (learn/form abstract concepts, grounding) in AI. Brilliant
  • @urtra
    feeling that Demis driven by goal design, Ilya by his own seriousness and Sir.Hinton by his deep intuition.
  • Awesome insights, love the focus on technical details that aren’t usually covered in more mainstream interviews.
  • @BR-hi6yt
    Wow - what a treat for us nerds. Thank you so much.
  • @societyofmind
    For me, the main thing that LLMs show is that there is more than one way to generate natural language. A relatively simple model (like GPT) can generate very natural looking text BUT it requires an INSANE amount of training data. More training examples than any child or teenage could ever possibly hear. My 5 year old can comprehend and generate endless strings of natural language having heard less than 50 million words (most of which are redundant and far less diverse than the examples LLMs are trained on). Yet the algorithm in her brain easily exhibits intelligence. Now compare that to an LLM, even a simple one like GPT-1. The number of training tokens it needs to get even slightly close to comprehending and generating natural language is at least an order of magnitude more. All this tells me is that there are at least 2 ways to generate intelligence. A simple brute force transformer with an insane number of free parameters and orders of magnitude more training data is all that’s needed to learn the underlying statistics of human-generated language. It “displays” intelligence in a fundamentally different way than the brain, but does it actually teach us anything about how OUR brains work? That’s debatable. Evolution (over billions of years) discovered an exceptionally efficient algorithm for intelligence that requires extremely little energy to run and orders of magnitude less training. It’s fundamentally different, but that doesn’t necessarily mean that the brain is better. A less efficient / “dumber” algorithm might be able to achieve AGI as well, but it will need ungodly amounts of training data and free parameters to overcome its dumbness.
  • @kawingchan
    Sam Roweis (Hinton mentioned him for RELU), now thats a name i haven’t heard of for a while. Wish he had lived to see how this whole field developed. I really enjoyed his lectures, energy, and enthusiasm in ML.
  • @GeorgeRon
    An awesome discussion. These kind of panels where expert condensus and debates are exchanged would be great at staying grounded on AI.
  • @josy26
    We need more of this, extremely high signal/noise ratio
  • @sidnath7336
    All these scientists are incredible but I think Demis has the best approach to this - RL and Neuroscience is the way. If we want to understand how these models work and how to improve them, we need to first understand the similarities and differences between the human brain and these systems and then see which techniques can help create a "map" between the 2 i.e. through engineering principles. When Demis talks about "if these systems are good at deception" and then trying to express what "deception" means, I believe this is a fundamental step towards complete reasoning capabilities. Note: I tried this with GPT-4 - I prompted it to always "lie" to my questions but through a series of very "simple" connected questions, it started to confuse its own lies with truths (which touches upon issues with episodic memory). Additionally, due to OpenAI ideologies, the systems are supposed to only provide factual, non-harmful information so this can be tricky to deal with.
  • @vorushin
    On the fast weights: LORA* in LLMs seem to be the move in this direction. It also addresses the computation issues by nicely separating backbone and additional weights.
  • @modle6740
    Developmental neuroscience research, on both typical and atypical development of the "system," is interesting to consider. Things can go highly awry (in terms of both cognitive and personality development, for example), depending on when, in the developing system, certain state/spaces arise...and what is "underneath" them as a connected whole, in terms of the developing system that did not have a sensorimotor stage.
  • - Consider the role of theory in understanding and advancing AI (0:22) - Explore the relationship between AI models, human intelligence, and neuroscience (2:08) - Investigate the potential of AI to aid scientific discovery and problem-solving (14:02) - Discuss the creative capabilities of current and future AI systems, including large language models (1:21:53) - Debate the biological plausibility of backpropagation in the brain and its implications for AI (1:25:10)
  • @Karma-fp7ho
    This panel interview was difficult to find - I saw a clip on Wes Roth.
  • @grammy2838
    I have an intuition that to get to peak human cognitive ability we really need to work on building a vastly richer context before inference. Like the context can’t just be a few paragraphs of text, we need the model to have continuity between contexts on a very large scale and build a sense of self. I don’t think we need to mimic any specific parts of the human experience, we just need to inject the capacity for an unrelated past experience to provide context for a future experience so that the model can develop a truly unique frame of reference. The best execution of this would definitely be embodiment where the model can interact directly with the real world. I think it’s inevitable and it’s going to be the final step towards AGI, question is can it be achieved without embodiment?