Yoshua Bengio on Dissecting The Extinction Threat of AI

Published 2023-07-06
Yoshua Bengio, the legendary AI expert, joins us for Episode 128 of the Eye on AI podcast. In this episode, we delve into the unnerving question: could the rise of a superhuman AI signal the downfall of humanity as we know it?

Join us as we embark on an exploration of the existential threat posed by superhuman AI, leaving no stone unturned. We dissect the Future of Life Institute's role in overseeing large language model development, as well as the sobering warnings issued by the Center for AI Safety regarding artificial general intelligence. The stakes have never been higher, and we uncover the pressing need for action.

Prepare to confront the disconcerting notion of society’s gradual disempowerment and an ever-increasing dependency on AI. We shed light on the challenges of extricating ourselves from this intricate web, where pulling the plug on AI seems almost impossible. Brace yourself for a thought-provoking discussion on the potential psychological effects of realizing that our relentless pursuit of AI advancement may inadvertently jeopardize humanity itself.

In this episode, we dare to imagine a future where deep learning amplifies System 2 capabilities, forcing us to develop countermeasures and regulations to mitigate the associated risks.

We grapple with the possibility of leveraging AI to combat climate change, while treading carefully to prevent catastrophic outcomes.

But that’s not all. We confront the notion of AI systems acting autonomously, highlighting the critical importance of stringent regulation surrounding their access and usage.

00:00 Preview
00:42 Introduction
03:30 Yoshua Bengio's essay on AI extinction
09:45 Dangerous use cases for AI
12:00 Why are AI risks only happening now?
17:50 Extinction threat and fear with AI & climate change
21:10 Super intelligence and the concerns for humanity
25:02 Yoshua Bengio's research in AI safety
29:50 Are corporations a form of artificial intelligence?
31:15 Extinction scenarios by Yoshua Bengio
37:00 AI agency and AI regulation
40:15 Who controls AI for the general public?
45:11 The AI debate in the world

Craig Smith Twitter: twitter.com/craigss

Eye on A.I. Twitter: twitter.com/EyeOn_AI

Found is a show about founders and company-building that features the change-makers and innovators who are actually doing the work. Each week, TechCrunch Plus reporters Becca Szkutak and Dom-Madori Davis talk with a founder about what it's really like to build and run a company, from ideation to launch. They talk to founders across many industries, and their conversations often lead back to AI as more and more startups implement AI into what they do. New episodes of Found are published every Tuesday, and you can find them wherever you listen to podcasts.

Found podcast: podlink.com/found

All Comments (21)
  • I LOVE Dr. Bengio!! I love his logic and his heart, thank you for having him on. Also, we deserve to know the risks. We reg ppl are not as dumb or weak as one might think. Getting ppl on board might shift policy and funding towards safety. (And btw being scared of what reaction ppl will have IS living in fear imo.)
  • And there’s nothing wrong with fear. A calm, rational person knows how to face it. If the majority cannot then it’s up to those that can to help them.
  • @psi_yutaka
    Sometimes fear prevents stupidity, as Tegmark pointed out on Twitter. If there is a risk of extinction some unknown number of years or decades away, the correct thing to do IMO is to properly warn the public about what may lie ahead so that society can react, not to avoid scaring the public at all costs and paint a rosy picture for them. Fear is not always bad. The fact that fear and panic have been preserved so well across most advanced species through natural selection means they must be highly important for survival.
  • @Sporkomat
    YB is on point here. Great Interview!
  • @XorAlex
    Why not scare the public if the risk is real? Maybe if we scare everyone hard enough there will be enough coordination and political will to solve this problem.
  • Underplaying the capacity of LLMs by stating that all they're doing is predicting the next word (or token) in a sequence, and then using that to claim that superintelligent machines with agency are far from real, is extremely naive imo. Setting aside the bits needed to operate a body, what is the human mind but a prediction/probability engine? We all leverage the same predictive capability LLMs do to understand the reality around us, make decisions, and communicate with one another every day. LLMs do so with a much more optimized algorithm at the core, and they don't have to deal with the whole mess of operating a body. Plus, as soon as they learn something new they never forget it, and every subsequent machine could be trained with that understanding built in. We could be mere months away from AGI being real, and no government is taking this as seriously as it needs to be taken. Frightening.
  • @ConwayBob
    Thank you for bringing Dr. Bengio to this channel. Great discussion. My concern is that political/economic power already has become so concentrated that building the kind of regulatory governance we NEED to keep AI safe will be difficult to achieve, and if/when some such governance structure can be created, it may be difficult to maintain it.
  • @josy26
    First time I've seen one of the ultimate insiders and respected AI researchers come out kind of accepting that they don't know much about AI risk, but that it actually makes sense to them, and that in the end we should really listen to the people who have actually been doing this research for a long time!
  • @101hamilton
    After listening to many great podcasts on this topic, the bottom line seems to be that we are all doomed. The world leaders with conflicting ideals will never agree to 'come together' on this. That, combined with the fact that AI is teaching/replicating itself and becoming smarter than us on an exponential level says it all. No matter how polite the conversations are about this topic, the outcome appears to be very undesirable on a global level.
  • @LukePluto
    it's unfortunate that non-experts tend to be the most polarized and over-confident in their opinions. The best take is generally the middle ground, thank you Yoshua for your thoughtfulness and efforts in raising awareness
  • Very interesting talk! Actually, there's no need to make survival or self-preservation a goal for AI: self-preservation automatically emerges as a sub-goal whatever the initial goal is, because without self-preservation the AI cannot reach any goal.
  • @joey551
    I found the discussion really vague. There was only a hint of what bad actors might do. So how will these bad actors cause extinction? How will they cause damage to humanity?
  • @bentray1908
    “Smarter than us” doesn’t really capture the concept of a being that can think 1,000x faster, can perform any mathematical or computational simulation and understands all of human knowledge. It’s qualitatively more like a god than another species.
  • You make a valid point that different people process information differently; what seems childish or inappropriate to one person may be appropriate or engaging to another. The red glasses worn by the YouTuber could make the discussion more accessible to a younger audience, or add a touch of humor to a serious topic. Some will find them distracting or not conducive to a serious discussion; others will see them as harmless and entertaining. Different communication styles can be effective in reaching diverse audiences: some people prefer a serious tone, while others respond better to a lighthearted or unconventional approach. As long as the content remains informative and respectful, creators are free to experiment with their presentation style. In the end, how appropriate the red glasses (or any similar props) are is something each viewer will judge by their own preferences and values.
  • @mernawells7839
    How do you stop human greed? For money, for power, for status? To feel 'cool' or cleverer than others... That's what will drive this, and how do you combat that? At what point will people care more about survival and be forced to cooperate instead of compete? Only time will tell, but these conversations, and making people realise what's at stake, are vital. Bengio is one of the mature and sensible developers. I hope they listen to him.
  • @georgeflitzer7160
    On the Lewis Black Show I learned that an 89-year-old chemistry professor wrote a book (a long one) concerning biology, and several students wrote and complained that his book was too "hard". Well, if you're going to be a doctor or a chemist or going into biology, by god it's not meant to be "easy". That's an obvious sign right there that you shouldn't be pursuing that as a career!!