Connor Leahy - e/acc, AGI and the future.

Published 2024-04-21
Connor is the CEO of Conjecture and one of the most famous names in the AI alignment movement. This is the "behind the scenes footage" and bonus Patreon interviews from the day of the Beff Jezos debate, including an interview with Daniel Clothiaux. It's a great insight into Connor's philosophy.

Support MLST:
Please support us on Patreon. We are entirely funded by Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, very early access + exclusive content and lots more.
patreon.com/mlst
Donate: www.paypal.com/donate/?hosted_button_id=K2TYRVPBGX…
If you would like to sponsor us, so we can tell your story - reach out to mlstreettalk at gmail

Topics:
Externalized cognition and the role of society and culture in human intelligence
The potential for AI systems to develop agency and autonomy
The future of AGI as a complex mixture of various components
The concept of agency and its relationship to power
The importance of coherence in AI systems
The balance between coherence and variance in exploring potential upsides
The role of dynamic, competent, and incorruptible institutions in handling risks and developing technology
Concerns about AI widening the gap between the haves and have-nots
The concept of equal access to opportunity and maintaining dynamism in the system
Leahy's perspective on life as a process that "rides entropy"
The importance of distinguishing between the epistemological, decision-theoretic, and aesthetic aspects of morality (including a reference to Hume's Guillotine)
The concept of continuous agency and the idea that the first AGI will be a messy admixture of various components
The potential for AI systems to become more physically embedded in the future
The challenges of aligning AI systems and the societal impacts of AI technologies like ChatGPT and Bing
The importance of humility in the face of complexity when considering the future of AI and its societal implications

TOC:
00:00:00 Intro
00:00:56 Connor's Philosophy
00:03:53 Office Skit
00:05:08 Connor on e/acc and Beff
00:07:28 Intro to Daniel's Philosophy
00:08:35 Connor on Entropy, Life, and Morality
00:19:10 Connor on London
00:20:21 Connor Office Interview
00:20:46 Friston Patreon Preview
00:21:48 Why Are We So Dumb?
00:23:52 The Voice of the People, the Voice of God / Populism
00:26:35 Mimetics
00:30:03 Governance
00:33:19 Agency
00:40:25 Daniel Interview - Externalised Cognition, Bing GPT, AGI
00:56:29 Beff + Connor Bonus Patreons Interview

Disclaimer: this video is not an endorsement by us of either e/acc or AGI agential existential risk - the hosts of MLST consider both of these views to be quite extreme. We seek diverse views on the channel.

All Comments (21)
  • @Kwalk1989
I literally was like, what is MLST up to? I searched and the video was 27 seconds old.
  • @macawism
Tell someone in Gaza or Ukraine, or a Rohingya, about a benchmark for values.
  • @itsallgoodaversa
    Love the look on your shots. Which G Master lenses are you using? 50mm? 35?
  • @dhudson0001
    Absolutely the best content being made available to everyone on this topic right now. fucking outstanding!
  • @pcdoodle1
I liked the quote: "Our culture is where most of our collective cognition happens". Lots of things I'm not a fan of here, but they speak for themselves IMO.
  • @gdhors
It would be interesting to hear Connor and Daniel Schmachtenberger have a discussion exploring where ethics, morality, and alignment intersect. Connor would be great at providing a technological framework for Daniel's ideas on a more ethical, moral society.
  • @JD-jl4yy
51:28 This is the core of the disagreement. Smart people can think about AI and form a wide variety of inside views. They all disagree with each other. What I see the e/acc people doing is arrogantly sticking to their particular inside view, while the existential risk people are more epistemically humble, aggregating predictions from everyone who has rigorously thought about this and building a calibrated outside view. This outside view puts some credence on existential risk, so they (imo correctly) identify that as a very big deal and try to figure out how we could mitigate it. *Obviously not all of them; some, like Eliezer Yudkowsky, also stick to a similarly overconfident inside view. **If you agree x-risk is a big deal but disagree about how people are currently going about it, you're still fundamentally on the same side here.
  • @stevengill1736
We are edge dwellers thermodynamically, you're right. Whatever philosophy you have, there's an innate tendency of life to become more complex, to use entropy to create anti-entropic organization. They say there's less hunger, less sickness, and fewer wars than ever historically. I hope we continue to improve and work out our petrochemical addiction! Thank you guys kindly for sharing your visions....
  • @stuartmarsh5574
I loved it when he said life was a swirl of ink in water. I had this visualization of life in my head that is almost like an expanding fractal.
  • @chazzman4553
You guys make people's heads crack, for sure :). Mind-riveting talk.
  • @masoncusack
    Whoa, where do I know Daniel's voice from?
  • @KibberShuriq
    What's up with Connor's forehead being cut off in like half the frames? Avant-garde video editing much?
  • @u2b83
1:06:04 Conversely, Kaczynski was an "anti-accelerationist" lol
  • @JD-jl4yy
Thank you for showing different perspectives! A shame that the comments here are mostly cheap ad homs that don't engage with any of the ideas.
  • @mikezooper
The Professor is missing the point. LLMs are our best way to communicate with tech. Reasoning and planning can be added in another way.