Don't Trust the Promise of Artificial Intelligence


Wednesday, March 9, 2016

As technology rapidly progresses, some proponents of artificial intelligence believe that it will help solve complex social challenges and offer immortality via virtual humans. But AI's critics say that we should proceed with caution: that its rewards may be overpromised, and that the pursuit of superintelligence and autonomous machines may result in unintended consequences. Is this the stuff of science fiction? Should we fear AI, or will these fears prevent the next technological revolution?

  • Andrew Keen (For)

    Internet Entrepreneur & Author, The Internet Is Not the Answer

  • Jaron Lanier (For)

    Computer Scientist & Author, Who Owns the Future?

  • James Hughes (Against)

    Executive Director, Institute for Ethics and Emerging Technologies

  • Martine Rothblatt (Against)

    Transhumanist, Entrepreneur & Author, Virtually Human


    • John Donvan (Moderator)

      Author & Correspondent for ABC News




This event is co-presented with 92nd Street Y, a world-class cultural and community center where people all over the world connect through culture, arts, entertainment and conversation. A featured program during the 7 Days of Genius festival, a week-long inquiry into the nature of genius — what it is, why it matters and how it impacts the world.


For The Motion

Andrew Keen

Internet Entrepreneur & Author, The Internet Is Not the Answer

Andrew Keen, a renowned commentator on the digital revolution, believes 21st century machine intelligence may be the greatest challenge to the human species in history. Keen explores the current state and forecasts the future of artificial intelligence (AI), laying out the long-term economic implications of smart machines, particularly on human jobs. Keen is the author of three books: The Cult of the Amateur, Digital Vertigo, and The Internet Is Not the Answer, which the Washington Post called "an enormously useful primer for those of us concerned that online life isn't as shiny as our digital avatars would like us to believe." Keen is executive director of the Silicon Valley innovation salon FutureCast, the host of the popular Internet chat show Keen On, a senior fellow at CALinnovates, a columnist for CNN, and a much acclaimed public speaker around the world. In 2015, he was named by GQ magazine in their list of the "100 Most Connected Men."

Learn more


For The Motion

Jaron Lanier

Computer Scientist & Author, Who Owns the Future?

Jaron Lanier is a computer scientist, author, and composer, best known for his pioneering work in virtual reality, a term he coined and popularized. A widely celebrated technology writer, Lanier has charted a humanistic approach to technology appreciation and criticism. He is the author of the award-winning, international bestseller Who Owns the Future?, as well as You Are Not a Gadget. He writes and speaks on numerous topics, including high-technology business, the social impact of technological practices, the philosophy of consciousness and information, Internet politics, and the future of humanism. Included on Encyclopedia Britannica's list of history's 300 or so greatest inventors, Lanier has also been named one of the 100 most influential people in the world by Time, one of the 100 top public intellectuals by Foreign Policy, and one of the top 50 world thinkers by Prospect.

Learn more


Against The Motion

James Hughes

Executive Director, Institute for Ethics and Emerging Technologies

James Hughes, PhD, is the executive director of the Institute for Ethics and Emerging Technologies. A bioethicist and sociologist, he serves as the associate provost for institutional research, assessment, and planning for the University of Massachusetts Boston. He is author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future, and is working on a second book tentatively titled Cyborg Buddha. From 1999 to 2011, Hughes produced the syndicated weekly radio program, Changesurfer Radio. A fellow of the World Academy of Arts and Sciences, he is also a member of Humanity+, the Neuroethics Society, the American Society of Bioethics and Humanities, and the Working Group on Ethics and Technology at Yale University. He speaks on medical ethics, health care policy, and future studies worldwide. Hughes holds a doctorate in sociology from the University of Chicago, where he taught bioethics at the MacLean Center for Clinical Medical Ethics.

Learn more


Against The Motion

Martine Rothblatt

Transhumanist, Entrepreneur & Author, Virtually Human

Martine Rothblatt is the chairman and CEO of United Therapeutics, a biotechnology company, and the author of Virtually Human: The Promise – and Peril – of Digital Immortality. The highest-paid female CEO in the U.S., Rothblatt is a transhumanist, well known for creating BINA48, a humanoid robot modeled on her wife. Previously, as an attorney-entrepreneur, she was responsible for launching several satellite communications companies, including SiriusXM, where she served as chairman and CEO. In the 1990s, she entered the life sciences field by leading the International Bar Association's project to develop a draft Human Genome Treaty for the UN, and by founding United Therapeutics. Rothblatt's inventions transcend information technology and medicine, and most recently include an Alzheimer's cognitive enabler that uses mindware to process mindfiles so that a mindclone of a person's consciousness results. The potential and ethics of this technology are described in her latest book, Virtually Human. She is also the author of books on satellite communications technology, gender freedom, genomics, and xenotransplantation.

Learn more

Declared Winner: For The Motion

Online Voting

Voting Breakdown:

49% voted the same way in both pre- and post-debate votes (24% voted FOR twice, 20% voted AGAINST twice, 5% voted UNDECIDED twice). 51% changed their minds (16% moved from AGAINST to FOR, 19% from UNDECIDED to FOR, 3% from FOR to AGAINST, 6% from UNDECIDED to AGAINST, 2% from FOR to UNDECIDED, 5% from AGAINST to UNDECIDED).

About This Event

Event Photos


    19 comments

    • Frank L. (Wednesday, 27 April 2016 23:07)

      AI will be the next revolution in technology, and thus cannot be lumped together with current technologies, because of its self-aware nature. As dangerous as current technologies can be, they still need to be operated by humans unless programmed to do otherwise. But an AI can make decisions for itself, and may well come to see humans as undesirable elements to be removed, its logic unhindered by morality and ethics. From the limitless nature of human greed to humans' arrogant sense of self-worth in light of AIs that will very likely be superior to us in all the ways that matter, we are definitely not ready for an AI future at this time.

    • Ed C (Sunday, 10 April 2016 15:41)

      Where was Bostrom, or anyone that could represent that view?

      Lanier seems to argue that because intelligence and consciousness are inadequately defined engineers need not consider AI ethics. For Lanier, AI engineers have no special ethical considerations, only functional ones.

      Very confusing speaker choice and team assignments.

    • Mark (Tuesday, 29 March 2016 10:18)

      Just enjoyed a compelling hour of podcast "radio" listening to iq2us.org, debating the proposition, "Don't trust the promise of AI." The debaters' expertise overcame a seemingly boring topic.
      One beautiful argument described AI as already happening, as when Google tells me it's time to leave for the airport. We are not impressed when computers optimize our route, play chess, fly airplanes, compute square roots, or drive cars, because we can, with diligent effort, codify (literally) those things, and once laid bare to inspection the skill ceases to impress. Somehow we remain impressed with ourselves, though we seem nothing more than generalists, an amalgam of those features bolted to a multi-purpose actuator with opposable thumbs. (And hey, how many of those things can YOU do? I'm saying, it's already time to be impressed.) Failing to recognize the already manifest emergence of AI is tantamount to accepting the mystery of consciousness. That we do not yet credit the achievement is proof that the field of AI has "lost its moorings to fantasy," waiting for Hal to seal the airlock before we'll grant the medal of consciousness. And we want it to just happen, to Emerge with a capital E. Harrumph. I think it more likely that we'll accept Siri as a girlfriend by creeping inches, unconsciously, until it's a fait accompli, without ever having been properly debated.

      To decide whether and how to go forward, maybe we should look back... "Our problems are of our own making" was another of the arguments made. It's true; we don't fear famine, locusts, or lions. Poverty, cruelty, injustice, and the other things humans do, though - THOSE are things to worry about. With that retrospective, we see everyone losing their jobs to technology, and this fear carried the day. Disappointingly, the resolution carried, and I think that's why, and I don't think "losing their jobs" is the right way to look at it.

      Taking the long view, it is certainly the case that computers, science, and machines have taken the hard jobs of hauling and farming and digging and long division that we would prefer not to do. The argument came down to fear of technological unemployment, and the proper rebuttal (which *was* made) is that ownership of the largesse of the industrial age is a social problem, not a technological one, and hence not a reason to un-invent the steam hammer... or to resolve not to invent the next one.

    • Sander Gusinow (Thursday, 24 March 2016 08:08)

      In a roundabout way, I think this debate leads into another, perhaps more relevant debate: "Scandinavian socialism is the way of the future."

      I would totally love to see this one. Make it happen, Donvan!

    • Easton Smith (Sunday, 13 March 2016 14:08)

      I've been excited about this debate for months, but I found both sides massively disappointing.

      For a fan of the field and the question, it was borderline intolerable.

      It didn't live up to Intelligence Squared's usual standards.

      Bring in Nick Bostrom next time.

    • Damir Olejar (Saturday, 12 March 2016 22:49)

      AI is correlational; science is causal. Mixing up the two means that there exists a fallacious premise. Therefore this debate is a debate about those premises. Just like cartoons, this debate is useless, but very, very entertaining.

    • Michael Nevins (Saturday, 12 March 2016 13:26)

      The real problem with AI is the lack of a compact, human-like cognitive-robotic model, and that requires perceiving actual biological structure (static) and its associated functional process (dynamic).
      Google obviously figured out (or guessed) the first requirement of perceiving our brain's biological structure by using 2 neural nets. Now let's see how well they understand the functional dynamics of dual modeling on a central object model.

      Let's see AlphaGo a go go go

    • Jonathan (Saturday, 12 March 2016 02:18)

      This was on a LinkedIn post. The report is IMO excellent:
      https://www.linkedin.com/pulse/ai-future-civilization-peter-alexander-denega?trk=pulse_spock-articles

    • Michelle Shevin (Thursday, 10 March 2016 11:22)

      Before we create a new intelligence in our image, we have to reconsider the fundamental lie that enables human civilization: that nature and culture are separate and distinct, rather than neighbors on the same continuum.
      At 1:17:00 in the video I refer to a Bruno Latour quote which I think is fundamental to this debate. The full quote is this: "Instead of two powers, one hidden and indisputable (nature), and the other disputable and despised (politics), we will have two different tasks in the same collective. The first task will be to answer the question: How many humans and nonhumans are to be taken into account? The second will be to answer the most difficult of all questions: Are you ready, and at the price of what sacrifice, to live the good life together? That this highest of political and moral questions could have been raised, for so many centuries, by so many bright minds, for humans only without the nonhumans that make them up, will soon appear, I have no doubt, as extravagant as when the Founding Fathers denied slaves and women the vote...There is a future, and it does differ from the past. But where once it was a matter of hundreds and thousands, now millions and billions have to be accommodated—billions of people, of course, but also billions of animals, stars, prions, cows, robots, chips, and bytes... That there was a decade when people could believe that history had drawn to a close simply because an ethnocentric—or better yet, epistemocentric—conception of progress had drawn a closing parenthesis will appear as the greatest and let us hope last outburst of an exotic cult of modernity that has never been short on arrogance."

    • jdgalt (Wednesday, 09 March 2016 23:46)

      Before we can even talk intelligibly about this stuff we need new terminology that can distinguish stunts such as computers that win at chess or Jeopardy from real self-aware intelligence in a computer.

      The former is not really as special as it is marketed to be. The latter is probably much farther away than its proponents assert, but is close to being IMITATED well. It's important not to believe in these fakes and grant them rights like voting, since they will really be nothing but the puppets of their inventors.

    • Bruce K (Wednesday, 09 March 2016 19:12)

      I think when they talk about Google "learning" the game of Go it is more than a misnomer ... it is kind of like false advertising, or even hype.

      Can the Go program explain what it learned? No. Can it teach someone else how to play Go? No. It has just learned cold, blind probabilities, not the consolidation of understanding necessary to play a game. It has no idea what it is doing or why, and it cannot choose to do something else.

    • Tyler (Wednesday, 09 March 2016 14:26)

      Killer robots are just silly. Bostrom-style paperclip maximizers are sophistry; that problem is nipped in the bud during algorithm development, since a system has to be functional in the first place.

      Economy / automation - now there's a real argument. Truck/cab drivers will be hurtin' in less than 5 years! But listen, it's like favoring maintaining man-hour expenditure for its own sake. If you get the same output for less work (and fewer workers), isn't that a win? There's talk of "universal basic income" (look at Finland) which will certainly be more than talk once AI hits full-force. Take the industrial revolution as an example. Sure it displaced farmer jobs; but their children grew up for the better, working 9-5s instead of 16h days. The problem with AI - the second industrial revolution - is that it will displace jobs much more rapidly, and the modern generation will have to learn new skills (rather than our children); so it will be a more noticeable impact. But again, if we have robots fanning us and feeding us grapes for free (see lights-out farms and factories, fully electric & autonomous delivery) then why do we need to make money anyway?

      Here's what'll happen. People will lose jobs, and it'll hurt. We'll start taking universal income seriously, and institute it in short order. To make more than that, gotta learn new skills. Small but painful hiccup, followed by leisure living.

    • Ronnie L. (Monday, 07 March 2016 20:11)

      Goedel's incompleteness theorem precludes humans from completely understanding human intelligence.

      Therefore the most likely outcome of AI research is an intelligence which is malformed; perhaps to the point of what in a human would be called mental illness.

      It may not be apparent. The mentally ill machine could still be high functioning while having hidden flaws which would represent a danger to everyone.

    • Chris T (Sunday, 06 March 2016 11:03)

      Automation technology together with AI is wonderful & promises to free us so we may explore our fullest human potential.

      BUT only in the context of a socioeconomic system that ameliorates job displacement, crime, corruption, poverty, environmental degradation & war. Capitalism and the use of money need to be retired.
      Why?
      Because as we increasingly optimize technical efficiency throughout the market, with corresponding advances in AI, there would be virtually no basis for maintaining human employment; things would become so cheap that people's sustenance would require almost no income, and the whole economy would come crashing down.

      The best proposed feasible socioeconomic system I'm aware of today is a "Resource Based Economy". It is unlike any socioeconomic "ism" that has gone before it: NO MONEY. Before viewing this debate, please view the documentaries "Paradise or Oblivion", "Future By Design", and "The Choice Is Ours 2016" on YouTube by the 100-year-old Jacque Fresco, the visionary of "The Venus Project". https://www.thevenusproject.com/

    • Edward Tomchin (Friday, 04 March 2016 13:25)

      If we let fear stop our technological march into the future, then we'd have dropped Einstein and E=mc² like a hot potato. Yes, it's true we tend to take any new technology and use it for bad purposes at first, but then we right ourselves and create a better world. Fear is the least of all reasons for not exploring the new and unknown.

    • Michael Oghia (Thursday, 03 March 2016 08:41)

      AI is an incredibly interesting but potentially dangerous area of technical, ethical, and scientific exploration. In order to prepare for this debate, the following post (parts 1 and 2) from the blog Wait But Why is an absolute must: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.

    • Jess H. Brewer (Wednesday, 02 March 2016 23:47)

      Your motion presupposes that AI can only be developed by huge corporations with vast resources, which might be susceptible to "adequate planning and foresight". What if every ingenious hacker is already working on some AI design? What if many different implementations achieve success? How do you plan to "control" the results? You are not driving a car down the highway, you are behind the wheel of a runaway truck hurtling down the mountain with no brakes. Shouting, "Let's not do this!" won't help.

    • Mark Walker (Wednesday, 02 March 2016 19:10)

      First, "AI" is a severely abused term that intelligent people have been wrong about for 60 years, since the term was coined by John McCarthy in 1955. He defined it as "the science and engineering of making intelligent machines". What they meant then is what we call General, or Strong, AI these days: machines that rival humans in their thinking abilities.

      By that definition, Watson playing Jeopardy, and the iPhone assistant Siri, aren't even close to AI. Turns out they aren't even weak AI. They are just really good statistical analysis programs.

      The AI academic community has been falling so short of the original goal they've had to take tiny wins here and there to give us autonomous navigating vehicles, OK speech recognition, virtual financial advisors, ...

      IBM now calls what we are doing "Cognitive Science".
      Well, maybe, but they are still referencing thinking in their definition of the target result. Still sounds like "real" AI and another flop to me.

      This Twitter timeline segment says it all for me:

      Mark Walker ‏@neutronneedle · Jan 16

      2390: Human-level AI cracks 37 IQ

      John Walker @Fourmilab
      2052: Human-level AI
      2054: Human immortality and uploading
      2073: Interstellar travel
      2387: Firefox memory leaks fixed

      AI, bah.

    • bruce k (Wednesday, 02 March 2016 18:11)

      Seems kind of clear that humans cannot even define intelligence, let alone create it; nor can we motivate it or imbue it in ourselves. It seems to be a series of rote rules, with some kind of scheduler and amalgamator working in the background that allows us to load associated models of different realities into our conscious minds for conscious thought, or to use intuitive ideas we accept for a faster response. Since we have been producing more dysfunctional people and societies as time progresses, it seems very likely that we stand a large chance of unleashing something like a cancer consciousness, where the only thing that would save us is our own incompetence of design. Woe be to us, though, if whatever we created evolved to be smarter than us even by just a little. I just do not think it is a likely possibility in the foreseeable future.

      Most of what I see refreshing this meme is from people who are writing and trying to sell books about it, that are, at least from the ones I have looked at, very skimpy on facts, and not very informative.
