Fans of Battlestar Galactica are avidly following the brand-new “prequel” series, Caprica, which explores the genesis of the Cylon race that is created by, and then rebels against, its human creators. The series’ technical script consultant, Malcolm MacIver, is an ideal person to provide insights on a fictional world that grapples with the implications of human consciousness, virtual worlds, robotics, and artificial intelligence.
MacIver is a researcher at Northwestern University whose specialty is the interplay between the brain and biomechanics: in other words, how our physical bodies influence the development of our cognition, particularly when it comes to gathering and processing sensory information about the world around us. Among other projects, he builds biomimetic robotic fish to learn how the real creatures use weak electrical discharges to track prey in their environment.
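For readers curious what that sensing problem actually looks like, here is a toy Python sketch of active electrolocation: a self-generated electric field, a small nearby object that distorts it, and a row of receptors reading the resulting “electric image.” The point-dipole emitter, the induced-dipole object model, and all of the numbers are illustrative assumptions for this post, not MacIver’s actual code or model.

```python
import numpy as np

# Toy sketch of active electrolocation, loosely in the spirit of the weakly electric
# fish MacIver studies. The point-dipole emitter and the small-sphere (induced-dipole)
# object approximation are illustrative assumptions, not his lab's actual model.

def dipole_field(points, p, origin):
    """Electric field of a point dipole p at `origin`, evaluated at `points` (unit constants)."""
    d = points - origin
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    return (3.0 * d * np.sum(d * p, axis=-1, keepdims=True) / dist**2 - p) / dist**3

def object_perturbation(receptors, emitter_p, emitter_pos, obj_pos,
                        obj_radius=0.01, contrast=1.0):
    """Potential change at receptor positions caused by a small sphere in the fish's own field.

    The sphere responds as an induced dipole proportional to the local applied field,
    a standard small-object approximation.
    """
    local_field = dipole_field(obj_pos[None, :], emitter_p, emitter_pos)[0]
    induced_p = contrast * obj_radius**3 * local_field
    d = receptors - obj_pos
    dist = np.linalg.norm(d, axis=-1)
    return np.sum(d * induced_p, axis=-1) / dist**3

# A line of "electroreceptors" along a 0.2 m body on the x-axis, with the emitter at
# the origin oriented along the body axis.
receptors = np.column_stack([np.linspace(-0.1, 0.1, 50), np.zeros(50), np.zeros(50)])
emitter_pos = np.zeros(3)
emitter_p = np.array([1.0, 0.0, 0.0])

# The "electric image" on the skin shrinks rapidly as the prey moves farther away.
for lateral in (0.02, 0.05, 0.10):  # lateral prey distance in metres
    image = object_perturbation(receptors, emitter_p, emitter_pos,
                                obj_pos=np.array([0.02, lateral, 0.0]))
    print(f"prey {lateral:.2f} m away -> peak image amplitude {np.abs(image).max():.3e}")
```

Running the sketch shows the peak of the electric image dropping off steeply as the object moves away, one reason electrolocation is such a short-range sense and such an interesting test case for how bodies constrain perception.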
On Caprica, Zoe’s virtual avatar is grappling with just these issues in her shiny new war-robot body, and her father is struggling to perfect the AI for that advanced robotic soldier. This is where science fiction takes its inspiration from real-world research. In his latest blog post over at Science and Society, MacIver talks about meeting Peter Singer, author of Wired for War:
Robotic warfare, as we all know from media reports about drones, is of rapidly growing importance. It is based on research funded by a number of US government military research agencies. Singer (a defense analyst at the Brookings Institution, not the controversial ethicist from Princeton) is not calling for an end to the development of such robots. Instead, he wants a conversation to begin about how we deal with issues of culpability that arise when the robots we develop make an independent, and faulty, decision to end a human life.
This brings me back to Cylons, and Caprica, a show that envisages a time when robots develop the capacity to be self-aware, make independent decisions to kill, and eventually collude to rebel against us. What is the likelihood of something like this scenario eventually occurring? Will we eventually have to grant moral rights to our inventions, perhaps to avoid such a rebellion? Will our mechanical intelligences supersede us? These are clearly highly speculative questions, more commonly the stuff of science fiction plots than sober consideration. But with the rapid rise of robotic warfare, and the push to make it ever more autonomous and lethal, they warrant a new look.
MacIver promises he’ll write a few more posts exploring such quandaries and considering “some of the more speculative questions that are triggered by the conjunction of the real world of robotic warfare, and the fictional world of Caprica and its resentful robotic warriors.” We can’t wait!