During last week’s hardware/software/wetware seminar I mentioned an interview on CBC radio with Nick Bostrom. Bostrom’s TED bio states that “Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.”
His conversation with Anna Maria Tremonti aligns in many ways with the discussions we have been having about the implications of computation for our society, and about the potential for artists to open up other ways of understanding these processes and structures. In the interview Bostrom offers some very compelling warnings against ignoring what superintelligence might mean for humanity and for the world as we know and experience it.
In his TED talk on the same theme, I was interested in his advice against anthropomorphizing AI, that is, against associating AI with embodied, human forms. We try to make sense of AI by assigning it human qualities and responses, but this does not reflect what AI necessarily is. My own fascination with the recent film Ex Machina certainly has something to do with this.
And, of course, there is the short documentary “Examining Our Fear of Artificial Intelligence.”
Bostrom asserts that AI should be understood as an optimization process that steers the future in a particular direction. If we want AI to benefit humanity in the long term, it must not be given poorly conceived or poorly specified goals. So the question remains: what should those goals be, and how should they be articulated and embedded in the instructions given to AIs and future superintelligences?
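To make that concrete, here is a minimal sketch, entirely my own toy illustration rather than anything from Bostrom, of an optimization process steering toward a poorly specified proxy goal. The objective, actions, and numbers are all hypothetical; the point is only that the optimizer satisfies the stated goal while defeating its purpose, as in Bostrom's electrode-induced-smile example:

```python
# A toy "optimization process" with a poorly specified goal (hypothetical
# example): the proxy objective counts smiles but says nothing about *why*
# people smile, so the optimizer steers toward a degenerate future.

def proxy_objective(state):
    # What the designers wrote down, not what they meant.
    return state["smiles"]

def candidate_actions(state):
    # Two hypothetical futures: genuinely improve welfare (slow, bounded),
    # or stimulate facial muscles directly (fast, unbounded).
    improve = dict(state, smiles=state["smiles"] + 1,
                   welfare=state["welfare"] + 1)
    stimulate = dict(state, smiles=state["smiles"] + 100,
                     welfare=state["welfare"] - 10)
    return [improve, stimulate]

state = {"smiles": 0, "welfare": 0}
for _ in range(10):
    # Greedily steer toward whichever future scores highest on the proxy.
    state = max(candidate_actions(state), key=proxy_objective)

print(state)  # {'smiles': 1000, 'welfare': -100}: goal met, purpose defeated
```

The fix is not a cleverer optimizer but a better-specified objective, which is exactly the difficulty Bostrom's question points at.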
Comments (1)
This ties interestingly back to Isaac Asimov's Three Laws of Robotics, which he began developing in his science fiction in 1942:
– A robot may not injure a human being or, through inaction, allow a human being to come to harm.
– A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
– A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Later on he narrates two of his main robot characters coming to the realization that a “zeroth law” is needed, one that supersedes the first three when necessary:
– A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Developing a “zeroth law” kind of global empathy, one that transcends our self-interest and extends beyond our family, circle of friends, and country to truly encompass the whole planet, is a challenge we already struggle with as humans.
Many of Asimov's robot stories are variations on loopholes or ambiguities that undermine these laws: issues of “incomplete or poorly specified” directives, as in Bostrom's examples of the electrode-induced smile or King Midas' golden touch. Tales where wishes turn out to be not quite what was expected also abound.
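Just as a thought experiment, the laws can be written down as a strict priority ordering, zeroth law highest. The sketch below is my own toy encoding, not anything from Asimov or Bostrom, and the boolean predicates are hypothetical stand-ins; the stories turn precisely on the fact that notions like “harm” resist this kind of crisp formalization:

```python
# Asimov's laws as a lexicographic priority ordering (toy encoding).
# Each flag marks whether an action violates the corresponding law;
# deciding these flags is the hard, unformalized part.

from typing import NamedTuple

class Action(NamedTuple):
    name: str
    harms_humanity: bool   # Zeroth Law violation
    harms_human: bool      # First Law violation
    disobeys_order: bool   # Second Law violation
    endangers_self: bool   # Third Law violation

def choose(actions: list[Action]) -> Action:
    # Tuples compare lexicographically, so a single violation of a higher
    # law outweighs any number of violations of the laws below it.
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.endangers_self))

# A robot is ordered to do something that would injure a human:
obey = Action("obey the order", False, True, False, False)
refuse = Action("refuse the order", False, False, True, False)
print(choose([obey, refuse]).name)  # "refuse the order": First outranks Second
```

Everything interesting in the stories happens upstream of this comparison, in the judgment of whether an action counts as “harm” at all, which is where the loopholes live.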
Be careful what you wish for…