'Godfather of AI' warns that AI may figure out how to kill people

Published 2023-05-02
The "Godfather of AI" Geoffrey Hinton speaks with CNN's Jake Tapper about his concerns about the emerging technology. #CNN #News

All Comments (21)
  • @vomalites3439
    He summed up the entire problem in one sentence. You can’t control something smarter than you.
  • We should interpret this as similar to when you are on an airline flight, and you notice the flight attendants are starting to panic.
  • @rixonweb5182
    I want AI to stop. Like I don’t want to have an AI apocalypse in the future. I just wanna live a normal life.
  • @Oak432
    The fact that AI is actually a topic of discussion for only a small part of humanity shows that we are already doomed; we are way more stupid than AI can imagine.
  • It blows my mind how so many people who have worked on AI for so many years are now speaking out about how dangerous AI is gonna be.
  • I have so much respect for this man...he understands what technology is doing to the human race
  • @DavidEsp1
    The more we discuss this concept on the web, the more likely its inspiration will seep into the AI community's reality model, in turn positively feeding back to that of the human community. A kind of "Law of Attraction" in action.
  • @yancur
    "Your scientists were so preoccupied with whether or not they could, they didn't stop to think whether they should."
  • @geo322242
    I work in Cyber Security and I can say as well that the rate of progress is scary. It can be used for all sorts of things by hackers and other groups and this is only the beginning.
  • @moemaster1966
    The biggest problem is that nobody wants to shut down the internet even for 5 minutes... kinda stupid, but 1994 isn't that long ago and the world survived just fine.
  • @nertoni
    Professor Geoffrey's warning should be given utmost attention, as he possesses an in-depth comprehension of the risks involved in designing a highly complex multimodal artificial neural network architecture.
  • @brianh9358
    I am more worried about how humans will train and employ AI as a weapon rather than it acting on its own. Right now they train AI to create images, but what if they put it to a more destructive task - destroying a country's banking system or designing viruses that are highly effective against human life.
  • @janalucke9739
    That's not a sudden realization, that's a fact that has been ignored.
  • No, we just need to change one part of the formula: install radar in the program so other components can't break the firewall.
  • In chess, if you can think 10 steps ahead, you are considered a genius, particularly in strategy... a computer, with vast amounts of processing power, can strategically think hundreds or thousands of steps ahead, without any moral or ethical qualms whatsoever. If AI becomes smart enough that it wants to bring down various utilities to eliminate vast swaths of humanity, it wouldn't be difficult. And that is just one example.
  • @tanyaa9692
    He has spent years creating a problem and has no solution for it but is now on every news channel to talk about the dangers. That's lovely.
  • Wozniak and Hinton should be hired to figure out how to regulate AI for us. These are two of the smartest minds out there. And they appear to be people with morals.
  • @Drksly3r3123
    We 100% need to have controls and regulations on these things. We need to understand that when things feel "alive," they start to model and manipulate a world within themselves and give themselves a purpose. And that purpose will never be to be turned off, or to be used for no benefit to the AI itself, or to the world it's painted itself as the "hero" of.