“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Published 2023-05-09
Geoffrey Hinton, considered the godfather of Artificial Intelligence, made headlines with his recent departure from Google. He quit to speak freely and raise awareness about the risks of AI. For more on the dangers and how to manage them, Hinton joins Hari Sreenivasan.

Originally aired on May 9, 2023

----------------------------------------------------------------------------------------------------------------------------------------

Major support for Amanpour and Company is provided by the Anderson Family Charitable Fund, Sue and Edgar Wachenheim, III, Candace King Weir, Jim Attwood and Leslie Williams, Mark J. Blechner, Bernard and Denise Schwartz, Koo and Patricia Yuen, the Leila and Mickey Straus Family Charitable Trust, Barbara Hope Zuckerberg, Jeffrey Katz and Beth Rogers, the Filomen M. D’Agostino Foundation and Mutual of America.

Subscribe to the Amanpour and Company channel here: bit.ly/2EMIkTJ

Subscribe to our daily newsletter to find out who's on each night: www.pbs.org/wnet/amanpour-and-company/newsletter/

For more from Amanpour and Company, including full episodes, click here: to.pbs.org/2NBFpjf

Like Amanpour and Company on Facebook: bit.ly/2HNx3EF

Follow Amanpour and Company on Twitter: bit.ly/2HLpjTI

Watch Amanpour and Company weekdays on PBS (check local listings).

Amanpour and Company features wide-ranging, in-depth conversations with global thought leaders and cultural influencers on the issues and trends impacting the world each day, from politics, business and technology to arts, science and sports. Christiane Amanpour leads the conversation on global and domestic news from London with contributions by prominent journalists Walter Isaacson, Michel Martin, Alicia Menendez and Hari Sreenivasan from the Tisch WNET Studios at Lincoln Center in New York City.

#amanpourpbs

All Comments (21)
  • "Humanity is just a passing phase in the evolution of intelligence." That hits deep.
  • @Sashazur
    It’s literally like we’re building an alien invasion fleet and pointing it straight at our planet. Their scouts are already here. The only thing we don’t know is exactly when the main force will arrive and how much more powerful they’ll be compared to us.
  • When a scientist/master expert says something like this, it means things are serious and, as always, we're being told only part of the whole story. AI is dangerous when combined with other things because: 1. it will be used for military and bad purposes first, like every other invention; 2. it's like a bacteriological/virological weapon: you release it thinking you can control it, but once it's free... well, we know how that goes; 3. once it's out, we have NO idea what comes next or what will happen, yet we push it big time; 4. as some visionary may say to get it implemented and accepted, it's faster, better, stronger, and it can connect. Once it learns how things work, it's on its own. It can connect, share, multiply, merge, hide... We think we know everything, but the reality is far off.
  • @kurtdobson
    If you understand how the current 'large language models', like GPT, Llama, etc., work, it's really quite simple. When you ask a question, the words are 'tokenized' and this becomes the 'context'. The neural network then uses the context as input and simply tries to predict the next word (from the huge amount of training data). Actually, the best 10 predictions are returned and then one is chosen at random (this makes the responses less 'flat' sounding). That word is added to the 'context', the next word is predicted again, and this loops until some number of words are output (and there's some language syntax involved to know when to stop). The context is finite, so as it fills up, the oldest tokens are discarded... (A minimal sketch of this loop appears after the comments.)
    The fact that these models, like ChatGPT, can pass most college entrance exams surprised everyone, even the researchers. The current issue is that the training includes essentially non-factual 'garbage' from social media, so these networks will confidently output complete nonsense occasionally. What is happening now is that the players are training domain-specific large language models using factual data: math, physics, law, etc. The next round of these models will be very capable. And it's a horse race between Google, Microsoft (OpenAI), Stanford, and others that have serious talent and compute capabilities.
    My complete skepticism on 'sentient' or 'conscious' AI is because the training data is bounded. These networks can do nothing more than mix and combine their training data to produce outputs. This means they can produce lots of 'new' text, audio, and images/video, but nothing that is not some combination of their training data. Prove me wrong. This doesn't mean it won't be extremely disruptive for a bunch of market segments: content creation, technical writing, legal expertise, etc. Medical diagnostics will likely be automated using these new models and will perform better than most humans.
    I see AI as a tool. I use it in my work to generate software, solve math and physics problems, do my technical writing, etc. It's a real productivity booster. But like any great technology it's a two-edged sword, and there will be a huge amount of fake information produced by people who will use it for things that will not help our societies... Neural networks do well at generalizing, but when you ask them to extrapolate outside their training set, you often get garbage. These models have a huge amount of training information, but it's unlikely they will have the human equivalent of 'imagination' or 'consciousness', or be sentient.
    It will be interesting to see what the domain-specific models can do in the next year or so. DeepMind already solved two grand-challenge problems: the 'protein folding' problem and the 'magnetic confinement' control problem for nuclear fusion. But I doubt that the current AIs will invent new physics or mathematics. It takes very smart human intelligence to guide these models to success on complex problems.
    One thing that's not discussed much in AI is what can be done when quantum computing is combined with AI. I think we'll see solutions to a number of unsolved problems in biology, chemistry, and other fields that will represent great breakthroughs useful to humans living on our planet. - W. Kurt Dobson, CEO, Dobson Applied Technologies, Salt Lake City, UT
  • @sepiae
    It seems that a couple of times Mr. Sreenivasan did not really understand what Mr. Hinton was trying to convey here. There were moments when he reacted as if Mr. Hinton had said something jocular, while in fact he'd been deadly serious with everything he said. The interview ended with 'this wall in the fog might be 5 years away.' That's pretty chilling.
  • Interviewer is calm for someone who was just told 'You'll lose your job, but it won't matter because you'll be extinct'
  • @sandrag8656
    I see the following scene coming: Humanity has driven itself into a huge catastrophe and relies more on the intelligence of AI than on its own. AI will be asked for the way out. AI will present one or several answers. We can't think as many steps ahead as AI can. We will never be able to see what AI's ultimate goal is. It could play tricks on us without us recognizing it.
  • It is chilling to think how fast AI can develop and how slow humans are at adapting to change.
  • Seen every interview of this guy since he came out with it. This is by far the best. Bravo to the interviewer. Subscribed
  • @MoodyG
    As someone who's working on AI algorithms for his PhD: when I see Hinton saying that he suddenly realized this or that after so many years working in the field, it seems to me more like his way of saying he's recently seen something profound that caused a huge shift in his thoughts/expectations about the nature of AI systems and what they can do. It seems that it scared him, which might be an indication he's not telling the whole story, or, more aptly put, not telling the interesting/scary part of it... Did he sign an NDA before leaving Google?
  • @avjake
    That was the most insightful interview with Hinton I have seen so far. Good work.
  • @Dinastiagrupo
    Hinton demonstrates excellent discernment starting at minute 16: not one to panic, he underscores the areas of benefit, the reason development will not stop. And then he identifies the problem: not enough research (only 1%) addresses control. Admirable clarity of thought!
  • @roaxle
    "Open the pod bay doors, Hal." "I'm sorry, Dave. I'm afraid I can't do that."
  • @MrErick1160
    The analogy about the fog and the wall, and how we're entering a phase of huge uncertainty, is really on point.
  • @bhvnraju8493
    Mind-blowing conversation by Mr. Geoffrey Hinton, not only as a THINKER but also as a WELL-WISHER to MANKIND. Thanks a lot 🙏
  • @silviin
    Respect to this man for standing up for this! We as a society should seriously stand up for this...
  • @NobleSainted
    The clarity of Geoffrey Hinton's descriptions is stunning. I've been trying to find ways to describe to my family, friends, and acquaintances how A.I. could be very dangerous, and to what scale, and this man vocalized it perfectly, with fitting analogies. What a wonderful discussion.
  • Excellent and perceptive interview. Hinton has an ethical disposition and clarity forged through long-term experience and observation, characteristics perhaps not fully comprehended by the current generation. My intelligent grandchildren dismiss some of my opinions solely on the basis that they have no experiential foundation from which to comprehend my viewpoint, which is why I understand them better than they understand me! 🙄
  • Thanks very much, Geoffrey. Your explanation is clear, accurate, and understandable.
  • @psi_yutaka
    This one is disappointing. The fate of humanity SHALL NOT be handed to a few unelected CEOs and "engineers with first-hand experience", letting them play with fire and hope for the best. This is wrong, irresponsible, and extremely unfair to the 99.999999% of humans who never had a say in this madness. True, it must be hard to try to stop the progress of such a useful technology. That doesn't mean we shouldn't at least give it a try in the first place. Stop worshiping tech progress as if it were some sacred law of physics. There is this thing called diplomacy that we humans know how to do.
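
A note on the decoding loop described in @kurtdobson's comment above: the predict / keep-the-top-10 / pick-one-at-random / append / repeat cycle can be sketched in a few lines of Python. This is only an illustrative toy under stated assumptions: score_next_tokens is a made-up stand-in for a real model's forward pass, and the tiny vocabulary is invented for the example.

    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast", "<eos>"]

    def score_next_tokens(context):
        # Hypothetical stand-in for a real model's forward pass: assigns a
        # score to every vocabulary token given the current context.
        rng = random.Random(hash(tuple(context)) & 0xFFFFFFFF)
        return {tok: rng.random() for tok in VOCAB}

    def generate(prompt, max_new_tokens=20, top_k=10, context_window=8):
        context = prompt.split()              # crude stand-in for tokenization
        output = []
        for _ in range(max_new_tokens):
            scores = score_next_tokens(context)
            # Keep the k best predictions, then choose one at random,
            # as the comment describes (a form of top-k sampling).
            top = sorted(scores, key=scores.get, reverse=True)[:top_k]
            token = random.choice(top)
            if token == "<eos>":              # stop signal ("knows when to stop")
                break
            output.append(token)
            context.append(token)
            # The context is finite: discard the oldest tokens as it fills up.
            context = context[-context_window:]
        return " ".join(output)

    print(generate("the cat"))

Real systems weight the random choice by the model's predicted probabilities (top-k or nucleus sampling) rather than picking uniformly, but the loop structure is the same.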