Connor Leahy on The Risks of Centralizing AI Power

Published 2023-11-29
This episode is sponsored by NetSuite by Oracle, the number-one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

Download NetSuite’s popular KPI Checklist, designed to give you consistently excellent performance — absolutely free at netsuite.com/EYEONAI

On episode 158 of Eye on AI, host Craig Smith dives deep into the world of AI safety, governance, and open-source dilemmas with Connor Leahy, CEO of Conjecture, an AI company specializing in AI safety.

Connor, known for his pioneering work in open-source large language models, shares his views on the monopolization of AI technology and the risks of keeping such powerful technology in the hands of a few.

The episode starts with a discussion on the dangers of centralizing AI power, reflecting on OpenAI's situation and the broader implications for AI governance. Connor draws parallels with historical examples, emphasizing the need for widespread governance and responsible AI development. He highlights the importance of creating AI architectures that are understandable and controllable, discussing the challenges in ensuring AI safety in a rapidly evolving field.

We also explore the complexities of AI ethics, touching upon the necessity of policy and regulation in shaping AI's future. We discuss the potential of AI systems, the importance of public understanding and involvement in AI governance, and the role of governments in regulating AI development.

The episode concludes with a thought-provoking reflection on the future of AI and its impact on society, economy, and politics. Connor urges the need for careful consideration and action in the face of AI's unprecedented capabilities, advocating for a more cautious approach to AI development.

Remember to leave a 5-star rating on Spotify and a review on Apple Podcasts if you enjoyed this podcast.


Stay Updated:

Craig Smith Twitter: twitter.com/craigss

Eye on A.I. Twitter: twitter.com/EyeOn_AI

(00:00) Preview
(00:25) NetSuite by Oracle
(02:42) Introducing Connor Leahy
(06:35) The Mayak Facility: A Historical Parallel
(13:39) Open Source AI: Safety and Risks
(19:31) Flaws of Self-Regulation in AI
(24:30) Connor’s Policy Proposals for AI
(31:02) Implementing a Kill Switch in AI Systems
(33:39) The Role of Public Opinion and Policy in AI
(41:00) AI Agents and the Risk of Disinformation
(49:26) Survivorship Bias and AI Risks
(52:43) A Hopeful Outlook on AI and Society
(57:08) Closing Remarks and a Word From Our Sponsors

All Comments (21)
  • @RegularRegs
    Keep screaming about how dangerous these people are, Connor. Everyone needs to hear it.
  • @whataquirkyguy
    Excellent closing from Connor. Very important to get the message out there that AGI is a choice
  • @_obdo_
    Connor — You’re killing it. You just keep getting better and better. Keep going. Craig — Thanks for bringing a variety of voices. Well done.
  • @mpowacht
    "Technology should be a tool, it should not be a goal in and of itself". Great way to put it Connor!
  • @appipoo
    Damn Connor is on point in this one. Perfect, no notes.
  • @bosskoala7
    Good conversation. I think having those counter views is very important. He brings a lot of very good points. Most of the people who are overly optimistic also have a hidden agenda.
  • I dislike how individuals (Yann LeCun, for example) denigrate the intelligence of current systems in order to further their view. Just because LLMs are relatively poor at maths and reasoning now doesn't mean we don't have a problem in the future. It also denigrates the intelligence of many people who are great communicators or very knowledgeable (in both of which current LLMs are already superhuman). I fear we are also on the cusp of solving the reasoning/maths part of the problem.
  • @brianhershey563
    We're in a speciation event, each individual unique to their own feedback, then we're going to infuse the universe with self reflection... buckle up! 🙏
  • Politicians usually, in all things, trail behind the people, who usually have a better grasp of what needs to be done. The politicians watch and wait, and if a direction appears amongst their citizens, belatedly they start to take action as though they're leading the charge when actually they're following. Unlike the mass movements around the world against the current disaster in the Middle East, this issue is not going to get people out on the streets — unless, of course, they realise that the harm could be huge and affect them financially, socially, and medically, and damage their families, their children, and the societies they live in. But at this point, that message is not getting across. I suspect that if people had the experience of living in a country under attack, with AI being used to chase down and target individuals for murder with drones, or AI used to provide the locations of so-called "power structures" — i.e. hospitals, schools, government offices, universities, large apartment complexes where large numbers of citizens live — to bomb, maybe the world might begin to understand the risks. Oh, wait a minute: isn't this what's happening in Gaza? Isn't this what's happening in Ukraine? If you look, you already see the consequences for a population of a malignant force targeting them with AI to destroy their country and kill the citizenry. It's already here. So now we need to call attention to the fact, and we have literal footage of the harm that even this level of AI can do. People! Wake up! So Connor, you speak from experience and it's clear you know what you are talking about. Yes, it is about politics, so the ground is laid to discuss this, as I have said, and if you come in on the back of the "live harm" we are seeing on our screens every day now, you may be able to connect the dots for people in ways that they themselves may not be able to.
The carnage is immense and visceral, whereas for most people the esoterics of deep fakes, for example, currently have only an amusement factor; it doesn't resonate yet. But using AI to kill, wreck, and destroy towns, cities, and whole countries is a hugely threatening prospect, especially if it can be viewed as potentially "coming to a town near you". I don't think there is any time to lose.
  • @Paul_Marek
    Great interview — thanks, Craig. I find it hard to listen to doomers unless they have pragmatic solutions and approaches to their warnings, and I'm very much on board with Connor's solutions. I'm happy with everything I've got with AI now anyway — it's more than I could've ever asked for. Why do I need AGI? It's quickly going to become either god or the devil. ...I don't think we're ready for either yet.
  • @MrMick560
    I think Connor is the most coherent of the so-called doomers. I have watched him on various occasions and know he is very knowledgeable. We must take notice of what he is saying!
  • @GNARGNARHEAD
    Okay, just to rebut his proposal for limiting compute — so bear with me here. MIT's liquid neural net can drive a car on 19 neurons. Obviously this isn't a problem because it's not general, but what it does demonstrate is that functionality can be compressed significantly. You also do not need to store any information on a given task that it's expected to complete, only maintain a structure that has the capacity to pass a test. Now, if you don't require fast responses, you can slow a very large model down to run on almost any hardware. Obviously there are practical limitations, but fluctuating its scale while running multiple training types for capability could very well allow for specialization on generality. Now, there is absolutely a minimum size that a net can run on, but I assume that as the nets get more general, the ratio of scale to capacity will change. Without getting out the napkin, I suspect that fluctuation approach could be taken to the extreme, allowing for training on orders of magnitude less hardware through optimized training regimes, the balancing factor being that you would need very flexible network structures. I dunno — if someone made it this far, thoughts?
  • @jobyyboj
    Thank you for having Connor on. Everyone needs to be aware of the dangers coming soon enough, although the deep-fake reality coming next year should make things more apparent to everyone. The presented solutions have big holes in them, but gaining a consensus is the only chance to close them. We are lucky to be alive at all (hat tip to Stanislav) — enjoy each day.
  • @geaca3222
    Great interview questions, and the host also lets his guest speak his mind fairly uninterrupted. I think what Connor Leahy says really is very sensible. He is not against AI, but against unrestrained development of an extremely powerful technology that we don't have a grip on, especially with AGI and ASI. I think it's also positive what he says about citizens of democratic countries — that they can influence the process by organising themselves. They're not helpless against unchecked AI development.
  • @therevamp2063
    You are correct, Sir. Fortunately, LazAI supports a variety of aspects, including the LazAI DAO, the LazAI protocol, FULL NFTS, and their bots. It tackles the conventions of centralized AI control, pushing for a democratized and decentralized model that not only makes AI ownership affordable and lucrative for everybody but also ensures that AI development is guided by the interests and values of the community it serves. 💯💯
  • @brianhershey563
    Our need for truth is directly correlated to the word frequency of "actually" over the past 5 years.
  • @bennguyen1313
    The whole conversation was thinking past the sale.. it should have started with a clear description of exactly what the AGI threat is. How is it like a nuclear threat?