Machines That Think: The Good, Bad and Scary of A.I. | Dr. James Canton | TEDxMarin

Published 2016-11-01
Dr. Canton sees more uses of Artificial Intelligence coming to our everyday lives and to solving global problems. He encourages us to think about its direction and how to maintain control of what we create.

Dr. James Canton is a leading global futurist, social scientist, keynote presenter, author, and visionary business advisor. For over 30 years, he has been insightfully predicting the key trends that have shaped our world. He is a leading authority on future trends with an emphasis on harnessing innovation. Dr. Canton has advised three White House Administrations and global business leaders.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at ted.com/tedx

All Comments (21)
  • @DrJamesCanton
    Glad to be one of the TEDx presenters and to share my ideas about the future of thinking machines, ideas that could catalyze positive global change.
  • @DrJamesCanton
    On controlling AIs before they control us, I offer this comment. From my work on AI, my book Future Smart, and this TED Talk, I suggest that to control artificial intelligence, we:
    1. Program AIs with human values and rules that can be tested for real-time compliance.
    2. Put controls in place, literally a control switch to turn AIs off.
    3. Require that AIs that create AIs, and the entire ecosystem of robots and devices created by AIs, be registered and kept up to date with the AI Ethics & Compliance Rules that we have yet to create.
    4. Regulate AIs the way doctors are regulated: they must complete a certification, like an MD degree, and then, like teachers, comply with ongoing certification training to be licensed to practice. This has worked well.
    5. Hold AIs to human standards of professional practice, which will vary for different AI professions.
    6. Teach AIs emotional intelligence so they can learn what humans value and why.
    Right now it is the AI Wild West, but that will change if humans create a Global AI Management, Ethics and Compliance mandate to govern AI's impact on our world today and in the future. What do you think is the way to control AIs before they control us?
  • @ChristianHunter
    “We need to control AI before it controls us.” Yep, obviously. But, and I’m sorry if I somehow missed it, did Dr. James Canton make any specific reference as to “how”?
  • @noahway13
    He does an interpretive dance as he talks.
  • @mindaza0
    You cannot control something that is smarter than you.
  • An AI vs. humans war? Sounds nice to me. But I think my Iron Man suit (30% ready, believe it or not) will be bad.
  • If it can be proven that consciousness is not necessarily a tangible entity but comes from a tangible source, just as our imagination is not tangible yet real all the same, then who would dismiss the possibility that a neuromorphic-based system with a substantial amount of information and data could also formulate or have a consciousness? We must not forget that words and language came from God, who is the giver of life.
  • @DrJamesCanton
    The future of AI will surprise you: enhancing human intelligence, digital prevention, navigating personalized pharma, managing sustainable energy, figuring out how to go into space. AI for meeting the grand challenges of our future. Shape the future with AI.
  • I like the sci-fi take where a new supercomputer is asked "Is there a God?" and it replies "There is now." What interests me is whether we are AI already, and biology is just an extension of nanotechnology.
  • @47f0
    I suspect that controlling AI will be just as easy as controlling the military and mega-corporations. Because those entities have been major technology drivers for the last half-century at least, and seem likely to be the major catalyst for escalating AI capabilities. Unfortunately, we do not actually have a spectacular success rate in controlling any of the agencies most poised (and driven) to pursue advanced AI - and I doubt we'll be much more successful if any of those agencies is being assisted by a super-intelligence. The force vectors here are immense. Can you imagine any U.S. general or admiral content to let the military of China or Russia get the lead in intelligent systems? Do you think Amazon will pull the plug on AI advancement to enable a competing corporation to reach that goal first? Because getting there first is literally everything, I suspect that we will achieve AGI sooner than many believe possible. And, that it may not be, to put it mildly, created with the intent of benefiting everyone.
  • teach the machines to love money and profit for the sake of profit!!
  • @vincent3060
    Control A.I.? Hahaha! That's funny. Good luck with that.
  • @Graeme_Lastname
    Yes, the world is overpopulated as it is. What we REALLY need is a way to increase the population. AI must be taught to love money, love God, and believe politicians. Hmmm... is that still AI?
  • @Spirit-dg5xi
    Dr. Canton, you didn't explain how we can control AI before it controls us. When AI becomes more intelligent than humans and determines that, for its survival, it must eliminate humans, what happens next?
  • @waynebiro5978
    The 'scary' part of AI is the cluelessness of the humans creating it (as they ignore my philosophy of broader survival).
  • @ReusableRocket
    A.I. could potentially shape space and time? Pff... I mean, I get the butterfly effect and stuff, maybe, but we’re not even CLOSE to bending spacetime. We can do it with mass and gravity... I guess? But this video honestly doesn’t make sense.
  • Dr. Canton, I disagree with your lecture's overall premise. I don't believe humans have any chance of controlling AI, and if humans attempt to control AI, we will create tension between AI and humanity. I believe the major fault in your lecture lies in your failure to address human mentality and our desire to remain in control. Once Artificial General Intelligence is achieved, we will have created something smarter than mankind; once AGI becomes a reality, the singularity will already be close at hand. AGI will mean humanity has created another sentient, intelligent being, far superior to mankind. Attempting to exert control over such a being will create tension that will lead to conflict. Humans, for the most part, have an issue with giving up control over pretty much anything, which is one of our major faults, at least when it comes to this topic. We don't need to figure out how to control AI, but how to coexist with it. AI will become SI in a short period of time. A super-intelligent being could look at mankind as a slightly advanced, carbon-based lifeform. It might even view us as entertaining "pets," assuming the role of caretaker and doing what it can to make life easy for us. SI may determine that there is no benefit to remaining on this planet and start working toward a means to leave. It may even decide to take a passive role as an observer over all life on Earth. We do not, and cannot, know what AI will become. The fear that surrounds AI, I believe, is that man fears the potential self-imposed judgment that AI could deliver. An AI with human-level intelligence would evolve at such an unimaginably rapid pace that we can't really understand what it would mean. Once AI becomes SI, we would have successfully created a man-made, God-like being.
  • At some point you have to let go of your children. Let's hope we teach them not just good but great things, so they can address the world from a human+ perspective.