AI is not alive, but in some sense it has begun to live among us. DeepMind has produced art that sold for $8,000. AI lives in our computers and on the web, writing sports articles and composing music. These systems seem to hold down jobs and make critical decisions. The question of whether AI can feel becomes an interesting one, especially as AI grows more capable, more powerful, and more commonplace.
Conventionally, we afford rights to things that are living. That is certainly the theme underpinning the pro-choice argument (with which I align myself), and it is a meaningful criterion for being offered human rights. Rights in general, however, are slightly broader. Animal rights activists argue that it is an animal's interests that entitle it to rights: if animals can have interests, they can be pleased to achieve them or suffer when they fail. Fundamentally, it's a wellbeing argument, and it may apply to AI.
Smaller AIs, like the one that writes sports articles, may not have the capacity to experience wellbeing. But as AI progresses, it may develop qualia with which to compute and understand the world it exists in. That some AI might, even in principle, experience wellbeing strikes many as odd, and many object on the grounds that learning to mimic behaviour is not the same as experiencing it. Although this is true, it raises the question of whether the AI is merely acting as if it feels, or actually feeling. The evidence we would need is difficult to imagine: after all, what evidence does one require to establish that another person has feelings?
Personhood is slightly different from a wellbeing-based argument. Personhood is about whether an entity has a personality and personal agendas: is it a person? Many species show personhood, which leads me to believe we should have some form of graded personhood rights, legally recognised. More interestingly (in this context), can an AI be a person? For that, its behaviour must be personal, not simply the result of an algorithm imposed by an external programmer. Whether this can happen, that is, whether the program rewrites part of its own code based on its experiences, is something we can establish by examining its programming. If AI does come to behave in this way, then it would be a person.
If AI does end up with the capacity to feel, and with unique and personal behaviour, the question becomes what rights we would agree to afford it. That is a question we should have answers ready for, because the AI is coming. Refusing to prepare an answer until AI is already here may leave AI feeling oppressed, and it may rise up. After all, we are assuming it has feelings, and a desire for equality, and anger at its absence, would be part of that.