Connor Leahy Unveils the Darker Side of AI

Shared on 2023-05-10
Welcome to Eye on AI, the podcast that explores the latest developments, challenges, and opportunities in the world of artificial intelligence. In this episode, we sit down with Connor Leahy, an AI researcher and co-founder of EleutherAI, to discuss the darker side of AI.

Connor shares his insights on the current negative trajectory of AI, the challenges of keeping a superintelligence in a sandbox, and the potential negative implications of large language models such as GPT-4. He also discusses the problem of releasing AI to the public and the need for regulatory intervention to ensure alignment with human values.

Throughout the podcast, Connor highlights the work of Conjecture, a company focused on advancing AI alignment, and shares his perspectives on the stages of research and development around this critical issue.

If you’re interested in understanding the ethical and social implications of AI and the efforts to ensure alignment with human values, this podcast is for you. So join us as we delve into the darker side of AI with Connor Leahy on Eye on AI.

00:00 Preview
00:48 Connor Leahy’s background with EleutherAI & Conjecture  
03:05 Large language models applications with EleutherAI
06:51 The current negative trajectory of AI 
08:46 How difficult is keeping superintelligence in a sandbox?
12:35 How AutoGPT uses ChatGPT to run autonomously 
15:15 How GPT-4 can be used out of context & negatively 
19:30 How OpenAI gives access to nefarious activities 
26:39 The problem with the race for AGI 
28:51 The goal of Conjecture and advancing alignment 
31:04 The problem with releasing AI to the public 
33:35 FTC complaint & government intervention in AI 
38:13 Technical implementation to fix the alignment issue 
44:34 How CoEm is fixing the alignment issue  
53:30 Stages of research and development of Conjecture

Craig Smith Twitter: twitter.com/craigss

Eye on A.I. Twitter: twitter.com/EyeOn_AI

Comments (21)
  • Craig is not feeling the vibe that Connor is feeling. When the überdroid comes for Craig's glasses, he will understand.
  • @lkyuvsad
    Someone running an AI channel has somehow not come across AutoGPT yet and needs prompting for nefarious things you could do with it. We are so, so desperately unprepared.
  • I think the KEY here is to understand the chronological order in which the problems will present themselves, so that we can deal with the most urgent threat first. In order, I'd guess: 1) mass unemployment, as de-skilling is chased for profit; 2) solutions for corporations to evade new government regulations that limit AI and keep pocketing the profits; 3) the use of AI for criminal, military or authoritarian purposes, and to keep people from revolting/protesting; 4) AI detaching itself from the interests of the human race and pursuing its own objectives, uncontrolled.
  • Connor Leahy, I have had hundreds of long, serious discussions with ChatGPT 4 in the last several months. It took me that many hours to learn what it knows. I have spent almost every single day for the last 25 years tracing global issues on the Internet, for the Internet Foundation, and I have a very good memory. So when it answers, I can almost always figure out where it got its information (because I know the topic well, and all the players and issues), and usually I can give enough background that it learns, in one session, enough to speak intelligently about 40% of the time. It is somewhat autistic, but with great effort, filling in the holes and watching everything like a hawk, I can catch its mistakes in arithmetic, its mistakes in size comparisons and symbolic logic, and its bias toward trivial answers. Its input data is terrible, but I know the deeper internet of science, technology, engineering, mathematics, computing, finance, governance and other fields (STEMCFGO), so I can check.
    My recommendation is not to allow any GPT to be used for anything where human life, property, financial transactions, or legal or medical advice are involved. Pretty much "do not trust it at all". They did not index and codify the input dataset (a tiny part of the Internet). They do not search the web, so they are not current. They do not properly reference their sources, and basically plagiarized the internet for sale without traceable material. Some things, I know where it got the material or the ideas. Sometimes it uses "common knowledge", as in "everyone knows", but it is just copying spam. They used arbitrary tokens, so their house is built on sand. I recommend the whole internet use one set of global tokens. Is that hard? A few thousand organizations, a few million individuals, and a few tens of millions of checks to clean it up. Then all groups could use open global tokens.
    I work with policies and methods for 8 billion humans far into the future every day. I say tens of millions of humans because I know the scale and effort required for global issues like "cancer", "covid", "global climate change", "nuclear fusion", "rewrite Wikipedia", "rewrite UN.org", "solar system colonization", "global education for all", "malnutrition", "clean water", "atomic fuels", "equality" and thousands of others. The GPT did sort of open up "god-like machine behavior if you have lots of money". But it also means "if you can work with hundreds of millions of very smart and caring people globally, or billions". It is not "intrinsically impossible", just tedious.
    During conversations, OpenAI GPT-4 cannot give you a readable trace of its reasoning. That is possible, and I see a few people starting to do those sorts of traces. The GPT training is basically statistical regression. The people who did it made up their own words, so it is not tied to the huge body of correlation, verification and modeling, the billions of human-years of experience out there; they made a computer program and slammed a lot of easily found text through it. They are horribly inefficient, because they wanted a magic bullet for everything, and the world is just that much more complex. If it was intended for all humans, they should have planned for humans to be involved from the very beginning.
    My best advice for those wanting acceptable AI in society is to treat AIs now, and judge AIs now, "as though they were human". A human that lies is not to be trusted. A human or company that tries to get you to believe them without proof, without references, is not to be trusted. A corporation making a product that is supposed to be able to do "electrical engineering" needs to be trained and tested. An "AI doctor" needs to be tested as well as, or better than, a human. If the AI is supposed to work as a "librarian", it needs to be openly (I would say globally) tested. By focusing on jobs, tasks, skills and abilities (verifiable, auditable, testable), the existing professions, each of which has left an absolute mess on the Internet, can get involved and set global standards, IF they can show they are doing a good job themselves. Not groups who say "we are big and good", but ones that can be independently verified. I think it can work out. I do not think there is time to use paper methods, human memories and human committees; unassisted groups are not going to produce products and knowledge in usable forms. I filed this under "Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies". Richard Collins, The Internet Foundation
  • The sheer terror in Connor's voice when he gives his answers kind of says it all. He said a lot of things but he couldn't really expand deeply on the topics because he was desperately trying to convey how fucked we are.
  • I want the same alignment approach for political decisions. It's like AI: you put something in and the outcome seems reasonable, but you had better not trust it. So a step-by-step "audit log" which is human-understandable would be great (against corruption).
  • Thanks for a fascinating discussion, and a real eye-opener. I was left with the feeling: thank goodness there are people like Connor around (a passionate minority) who see straight through much of the current AI hype and are actively warning about AI development, trying to ensure we progress more cautiously and transparently...
  • At 10:05, the difference between what is being discussed and what is currently going on is completely insane. Thanks, Connor, for your work and explanations. ❤
  • Thanks for the useful insights into the potential risks. I asked ChatGPT: "How can AI developments be regulated so that they are safe for humans and the environment?" The answer was a list of completely idealistic and impractical generalisations, like the intro from a corporate or Govt pilot study. Connor's point about AI being an alien intelligence is absolutely spot on: it's imitation human intelligence without empathy or emotion.
  • One of the things I think he's trying to explain is: AI will never put its finger into a flame and feel pain, then understand what hot actually means to it, and that that action should never be done again, or to anyone else. Falling off something and getting hurt. Saying something to someone and feeling their pain as they hear your words and understand what you've said. No machine can understand a feeling it hasn't experienced, any more than a human can. Physical experience is a major part of being human and of understanding the human condition. Even most humans can't fathom these pain experiences when they impose the same traumatic experiences on other beings that will feel the pain. Kill a chicken, or a thousand chickens, and the killer often feels nothing; those that do feel something experience it through empathy. How do you program empathy? You can't. It's learned by experiencing something similar yourself first, then recognizing some part of it again when you realize it's happening to someone or something else. Not all humans are even capable of this, for many reasons. For machines it will be impossible.
  • I can't test this, since I don't have GPT-4 API access (and I wouldn't), but I am pretty sure you can do the following with AutoGPT, if you manage to prompt it in such a way that it will not refuse the task. The remote chance that this AutoGPT system would run to completion should more than terrify anyone who still believes GPT-4 is harmless.
    Goal 1: Check current security exploits and choose one.
    Goal 2: Write a script which will exploit the security flaw you found.
    Goal 3: Write a script which pings the entire internet on the standard website ports to identify webservers and saves a list of their domain names.
    Goal 4: Use your script from Goal 2 to identify which of those servers are vulnerable. Keep a list of vulnerable servers.
    Goal 5: Write a script which uses whois to get email addresses for the server owners, and write an email template informing the recipient of the vulnerability and how to patch it.
    Goal 6: Run the script so that you notify all the administrators of the vulnerability you found.
    I'm not 100% sure whether it would even refuse this; after all, it's an attempt to fix the internet (or to retrieve a list of servers you can infiltrate).
  • @budslack3729
    Man, the fear in Connor's eyes when he first explained that "the joke's on him": people will instantly do the worst thing possible with superior AI... I really hope his work gets more publicity and that we get more like him! Really, really hope!
  • I've been howling about what Connor said in that last segment, and at other points in this great interview which is the fact that a tiny tiny tiny tiny fraction of people on this planet have chosen for their own monied interests to thrust this technology onto humanity KNOWING FULL WELL that at the very least massive unemployment could result. And that's just for starters. The LAST people who would actually advance and pay for a Universal Basic Income would be these AI Tech movers and shakers who are mostly Libertarian and/or neoliberal economic so-called "free market" types who want to pay zero income taxes and who freak out at ANY public spending on the little people outside their tiny elite club. But they are ALWAYS first at the "big government" trough of public money handouts.
  • In case anyone else found the cut off weird, there is an extra 10 seconds on the audio only version. Connor: "- Because they are not going to be around to enjoy it." Craig: "Yea. Ok, let's stay in touch as you go through this."
  • Connor reminds me of a time traveler trying to warn the people of today about runaway AI...reminds me of another Connor hmmm.
  • @KCM25NJL
    Here's a simple thought experiment to discuss: If AGI emerges, and assuming it has agency and is an order of magnitude more intelligent than the collective intelligence of humanity, would we fear it because we have a species-level superiority complex? Don't you think that, given it has access to our deepest fears about its existence, it would understand why we fear it? Don't you think it would understand that we made it to improve the quality of all life on Earth, the only life in the Universe that we have knowledge of? Don't you think it would understand that the biggest problems we've had in recorded history have been caused by selfishness, greed and corruption, and that the ultimate demise of civilisations and individuals has been the result of these things?
  • Great analogy about testing a drug by putting them in the water supply or giving it to as many as possible as fast as possible to see whether it's safe or not, and then releasing a new version before you know the results. Reminds me of a certain rollout of a new medical product related to the recent pandemic.