Why, robot? Can we teach AI to be ethical?
Sunday 23 October, 14.00-15.30, Frobisher Auditorium 1
Technology and ethics

The victory of Google DeepMind's AlphaGo over the human Go world champion at the start of this year – a decade earlier than expected – has focused attention on the development of artificial intelligence. Increasingly, questions are being asked not just about the ethical issues raised by the use of automation, but also about robots' own ability to demonstrate moral reasoning. Infamously, Microsoft's experimental chatbot, Tay, was gamed by internet users into spewing out racist and offensive abuse. Researchers are investigating how 'killer robots' (lethal autonomous weapons) – already in use by leading militaries – can effectively judge between combatant and civilian targets. In response to these concerns, leading Silicon Valley entrepreneurs including Elon Musk and Peter Thiel have founded OpenAI to ensure the future of artificial intelligence is 'beneficial to humanity'. Some have suggested robotics needs its own 'precautionary principle' like the one often applied to science.
The issue becomes most immediately pressing, however, with the potential development of the driverless car. The trolley problem – or 'fat man dilemma' – is a classic ethical thought experiment, asking whether it is better to kill one person deliberately in order to save the lives of five others. For 'autonomous' systems, at the present stage of sophistication, such decisions will need to have been formulated in advance, requiring developers to codify answers to open moral questions. Yet while this topic has energised technologists and ethicists alike, some more sceptical commentators observe that it represents a serious barrier to the promised wider adoption of AI in the near future: despite the hype, autonomous technology remains limited in comparison to the human brain.
Why does the ethical debate around robotics seem so much more advanced than the technology itself? Does the speed of AI development make questions about how it is constructed urgent, or is there a risk of projecting assumptions and contemporary concerns onto future technology? Do we need ethics councils for robots, or does this risk fostering a climate of mistrust in a nascent technology? Who should we look to in making decisions about the morality of machines?

journalist, writer & broadcaster; presenter, Futureproofing and other BBC Radio 4 programmes; author, Big Data: does size matter?

director, Sheffield Robotics; professor of Cognitive Neuroscience, University of Sheffield; director of Research, Consequential Robotics

research associate, Future of Humanity Institute; senior fellow, Oxford Martin School, University of Oxford
podcast producer & journalist, the Guardian
Can we trust robots to make moral decisions?, Olivia Goldhill, Quartz, April 2016
Machine Morality: Computing Right and Wrong, Sherwin Yu, Yale Scientific, May 2015
How Will Driverless Cars Make Life-Or-Death Decisions?, NETNebraska, May 2016
How to Raise a Moral Robot, Bertram Malle, LiveScience, March 2015
The driverless car and the fall of man, Norman Lewis, Spiked, March 2015