Avengers: Age of Ultron opened Friday and we recently got a chance to speak to Jeff Kahn, professor of bioethics and public policy at Johns Hopkins University, who worked on the film. Kahn provided some amazing insight into artificial intelligence, the work he does, meeting with the Avengers group in London, and his feelings about where we are going in terms of ethics in science.
The Science & Entertainment Exchange: Maybe you could start off by telling me a little about what you do.
Jeffrey Kahn: So, Johns Hopkins has an institute for bioethics, which is the study of ethical issues in biomedicine, broadly construed: health care, medical research, science policy. I work mostly on ethics in a public policy context, as you can tell from my title. I focus on the ethics of research and the ethics of emerging biomedical technologies, from organ transplantation, which is obviously not so new anymore, to the stuff that’s going on now with gene editing of embryos. And I’m a philosopher by training. I have a Ph.D. in philosophy and an undergraduate degree in molecular biology from UCLA (I’m a Californian by birth), and I have a master’s degree in public health, too, so I have an eclectic background, which suits this work because bioethics is a multidisciplinary field of study.
The things I tend to work on are the new and emerging areas: a lot of genetics, and a little bit on the use of animals in research. I’m working a lot on genetics in a public health context at the moment. There are others of us (this is a fairly large institute, for bioethics at least), and we have people working on everything from end-of-life issues for individuals to things like local food ethics. So we have a really wide-ranging set of scholars, and we are just about to launch a new graduate program, a master’s in bioethics, that will hopefully start admitting students in the fall.
Exchange: Well, it seems like it is a much bigger deal than it used to be. You know, it’s funny, because I know we’re going to be talking about Avengers, but I just watched Ex Machina, and it got me thinking about artificial intelligence (AI) and the ethical issues around it. It just seems like something that was not as important a number of years ago.
Kahn: You’re right. I finished my doctoral training in 1989, and that was not quite the first cohort, but pretty close to the first cohort, of people specially trained at the graduate level in bioethics. Up to that point, there had been people who were trained in other things; they were physicians or lawyers or philosophers or even theologians, working to help hospitals or individual patients deal with particular cases. But it really was not an organized field until the mid- to late 1980s, which is young by academic field standards. A lot of it is driven by the technology, of course, and that is part of the story. I don’t know if your colleagues told you that I was talking with Ron Howard about the forthcoming Dan Brown movie that they’re doing.
Exchange: Oh! What did you talk about with that?
Kahn: Well, I don’t know what I’m supposed to say about what the movie is, because that is a secret. But it goes with the recent Dan Brown book, Inferno, which is about an evil genius billionaire who decides that the only way to save the world is to eradicate half of its population by unleashing a virus. And the whole thing revolves around Dante’s Inferno, because all of the Dan Brown books are about symbology.
And so they were interested, from an ethics perspective, in the evil genius view of things: how a character could be both evil and sort of a genius. So it was an interesting conversation with Ron Howard, the screenwriter, and the actor they had just signed to play that character. It was really quite fascinating for me, too, I have to say, to listen to creative people talking and thinking aloud. I was a bit of a fly on the wall. I weighed in when they asked me to, but it’s such a different world for me.
Exchange: I’m fascinated by this; it’s been the topic of many late-night discussions. So, tell me specifically a little bit about what you did for Avengers.
Kahn: Let me see if I can reconstruct this a little bit. I met with them in London, because I happened to be there and they were doing some other film. It was a bit different from the Dan Brown example I gave you. For that movie they actually sent me a copy of the script to read and review and then give them feedback on; that was one end of the spectrum. For Avengers they did not do that. They met with me, told me the plot line they were trying to work through, and wanted help, or really confirmation, that they were on the right path and that it was not laughable from the perspective of the expertise I have. So they were not asking me science questions; they were asking “Does this ring true?” from the perspective of somebody who thinks about genetics and ethical issues. We had a back-and-forth conversation about the plot line. I asked what stage of the process they were in, and they said it was still a work in progress; they were trying to figure out some of the details and thinking about what might work based on the conversation we were having, so it was an early, formative-stage conversation. In the case of Avengers, they wanted to focus with me on the genetically modified life form, and I actually do not know much about the plot. Because I have not seen the movie yet, I do not know how they incorporated that genetics component, that scientific modification of life, into the story.
Exchange: Well, there are two characters, and some of this information is out in the comic books, so I’m not spoiling anything crazy. You have the character of Ultron, who is an AI. He takes a lot of his worldview and philosophy from the person who made him, his “father,” and, like you were saying with the other movie, he decides that to save the world, humans should not be around, and that he can make the world a perfect place if there’s no human emotion. And then there’s another character called The Vision, who is sort of a combination of a machine and a biological creature. So I guess the big question would be, what are your thoughts on AI?
Kahn: Well, there is a whole bunch of aspects to that. Let me piece things together, because this conversation happened more than a year ago, in a coffee shop in London. I can distinctly picture where we were, but I was not allowed to take notes, and I was not prepared. I mean, they did not prepare me in advance. It was, “This is the conversation we’re going to have; you’re not going to write anything down, you’re not reporting about this. You’re just responding to what we tell you.” So I remember having a conversation with them about autonomous machines, and what I remember talking a lot about was autonomous automobiles, and the fact that philosophers have been engaged by Google to help them think about a very particular hypothetical called “The Trolley Problem.” Do you know that one?
Exchange: No. That one I do not.
Kahn: Okay, this is one you can look up online; there are lots of versions of it, as well as lectures and TED Talks about it. The problem is this: a trolley is going down the tracks, and the driver realizes there are two workmen on the tracks ahead. If he continues along the tracks, he cannot stop the trolley in time and it will kill the workmen. In the hypothetical, he can turn onto a spur where only one person is standing. If he pulls the lever and takes the spur, the trolley will kill one; if he lets it keep going straight, it will kill two. So what should the driver do? It is the typical way people who teach moral philosophy introduce things like utilitarianism versus duty-based ways of thinking about morality. But in the case of an autonomous car, where there’s no driver, how do you decide what to program the car to do in that case? There will be moments when the car is trundling along on its own and there is a pedestrian in the crosswalk ahead and a dog in its way if it turns. Will the car continue on its path and hit the pedestrian, or will it turn and hit the dog? Will it go straight and kill two pedestrians, or turn and kill one? That has to be programmed into the machine. So part of the conversation we had was how do you capture the notion of a very difficult ethical decision in a non-human, an artificial intelligence? And at what point do we say that it will learn on its own and whatever it decides is no worse than what we as humans would decide? And if it will be able to think on its own, will that be better or worse or the same?
So we talked a bit about that, I remember. I do not remember what else we might have talked about regarding artificial intelligence, and frankly it’s not an area that I do a ton of work in. So we were having a general discussion, but they seemed to think that was a helpful way to maneuver through the plot.
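To make Kahn’s point about programming the dilemma concrete, here is a minimal sketch in Python of the utilitarian rule he describes. It is purely illustrative: the names (Maneuver, choose_maneuver) and the harm estimates are hypothetical, not drawn from any real autonomous-vehicle system.

```python
from dataclasses import dataclass


@dataclass
class Maneuver:
    """One option available to the car, with a predicted harm estimate.

    The casualty count is assumed to come from some upstream
    perception/prediction system; here it is just given.
    """
    name: str
    predicted_casualties: int


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Utilitarian rule: pick the maneuver with the lowest predicted harm."""
    return min(options, key=lambda m: m.predicted_casualties)


if __name__ == "__main__":
    # The trolley-style dilemma from the interview: go straight and kill two,
    # or turn onto the spur and kill one.
    options = [
        Maneuver("continue straight", predicted_casualties=2),
        Maneuver("turn onto the spur", predicted_casualties=1),
    ]
    print(choose_maneuver(options).name)  # prints "turn onto the spur"
```

A duty-based program, by contrast, might refuse to actively swerve into anyone regardless of the count; which rule to encode is exactly the ethical question Kahn says has to be settled before it can be written into the machine.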
Exchange: That’s really fascinating. And I know you do a lot with genetic technology; I’d love to hear a little bit about that. I don’t remember thinking about advances in technology in terms of “should we do that” when I was a kid, but I’m sure it was happening then. And even now, when products are developed, there can be an element of “this will help so many people, but it can also hurt so many others.” Can you tell me a little about that?
Kahn: That’s a big part of how we think about genetics and how we think about technology. So, everything from—I don’t know if you listen to public radio.
Exchange: I do.
Kahn: There was a story in the news today about a Chinese laboratory doing genetic modification on human embryos. Did you happen to hear that story?
Exchange: I have not yet.
Kahn: Well, that is the news of the moment in my world right now. There has been a bright line of prohibition, in the United States and everywhere else that has rules about scientific research, against doing things that will affect the eggs and sperm. You can only alter people in ways that modify them, but not their future offspring. The worry is that we don’t know yet how good or bad our skills are, and you could introduce new problems that would be inheritable, treating one problem in a way that causes another, right? The risk is just too great, and we cannot unleash that level of risk onto the next generation. It is one thing for an individual to say they are willing to take that risk on their own body, but it is a different thing when you are talking about something that can affect an unlimited number of future generations. And then there is the fear that at some point you go too far, and what you’re doing is just wrong because you’re manipulating what it is to be human. One of the hosts compared it to how people think of genetically modified crops: that it just does not feel right to be consuming something that is not how Mother Nature intended it. It always makes me cringe to hear that articulated, but there is a point we should not pass. And while it is nearly impossible to avoid GMO food, I think it is off base to talk about genetic modification of humans as though it were comparable.

Ever since we first started having the ability to modify DNA in the 1970s, whether in animals or plants, this conversation of “how far is too far” or “how far can we go” has been going on. So it is not that I think about new versus old; this is a conversation that has been going on for decades. And I don’t know. I mean, I’m not a historian, but it seems to me that whenever a new technology came along, there was a similar gnashing of teeth that came with it. I think as the implications of the technologies become clearer, the conversations get more refined, but it still feels to me like the same conversations we had about genetics when it first became available.
Exchange: Once a technology is out there, or even once we are headed toward it, do you think it is inevitable that it is going to happen? I was thinking in terms of AI, but also in terms of cloning. Good or bad, once it is out there, is it inevitable?
Kahn: I think it is more inevitable now than it used to be. There was a time when the technical ability and the infrastructure to do the kinds of things you are talking about were limited to very highly developed countries with strong protection regimes in place. So we have had rules in the United States against genetic modification of the germ line, and against cloning human beings, though frankly that is not actually illegal in the United States at the federal level. And there are limits on what kinds of technology can be brought to bear and for what particular purposes. But that is not the case anymore, because the technical barrier is much, much lower. You can now more or less do synthetic biology in your basement; high school chemistry students can do it. You can put together combinations of life forms that just do not exist in nature from material you can buy on the Internet. It is really hard to control a technology when the tools it needs are widely available and do not require a lot of expertise to use. The reason the story of this lab in China genetically manipulating human embryos is so important is that it is the first time anybody in the world has published such work and admitted to having done it. I think the Chinese wanted to show that they had the ability to do something first. In fact, a subtext of the story that was really interesting to me was that the report, which came out yesterday, had been rejected by both Science and Nature, the leading science journals in the world, and rejected on ethics grounds: not because it was badly done, but because it was ethically unacceptable that they had done it. So instead it was published in a journal called Protein & Cell, which you’ve probably never heard of. On the whole, the means we had for control are diminishing, if they still exist at all.
Avengers: Age of Ultron hit theaters on May 1.