Is Artificial Intelligence Entitled to Personhood?

AI is not alive. But, somehow, it has the opportunity to live. DeepMind has produced art that sold for $8,000. AI can live in our computers or on the web, writing sports articles and composing music. It seems to hold down jobs and make critical decisions. The question of whether AI can feel becomes an interesting one, especially as AI becomes more capable, more powerful, and more commonplace.

Conventionally, we afford rights to things that are living. That is certainly the theme that underpins the pro-choice argument (with which I align myself), and it is a meaningful criterion for being offered human rights. Rights in general, however, are slightly different. Animal rights activists argue that it is an animal’s interests that mean it should have rights: if animals have interests, they can be pleased when those interests are met and suffer when they are not. Fundamentally, it’s a wellbeing argument. And that may apply to AI.

Smaller AIs, like the one that writes sports articles, may not have the capacity to experience wellbeing. But as AI progresses, it may develop qualia with which to compute and understand the world it exists in. That some AI may, even in principle, be able to experience wellbeing strikes many people as odd, and they object on the grounds that learning to mimic behaviour is not the same as experiencing it. Although this is true, it raises the question of whether the AI is merely acting as if it feels, or actually feeling. What we would ask for, in terms of evidence, is difficult to imagine: after all, what evidence does one require to establish that another person has feelings?

Personhood is slightly different from a wellbeing-based argument. Personhood is about whether the entity has personality and personal agendas: is it a person? Many species show personhood, which leads me to believe we should have some sort of graduated personhood rights, legally recognised. More interestingly (in this context), can an AI be a person? For that, its behaviour must be personal, not the result of an algorithm imposed by an external programmer. Whether this can happen, that is, whether the program rewrites part of its own programming based on its experiences, is something that can be established by examining that programming. If an AI does behave in this way, then it would be a person.
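That distinction between imposed and self-written behaviour can be made concrete. Below is a minimal sketch (in Python, with purely illustrative names) of the loose sense of “rewriting part of its own program”: an agent whose decision rule is not fixed by its programmer but is continually revised by its own experience.

```python
import random

# A minimal sketch of behaviour shaped by experience rather than by a
# fixed external rule: a two-armed bandit agent whose action preferences
# are rewritten by its own history of outcomes. All names are illustrative.

class ExperienceDrivenAgent:
    def __init__(self, n_actions=2, learning_rate=0.1, exploration=0.1):
        self.values = [0.0] * n_actions   # the agent's own, mutable "rule"
        self.learning_rate = learning_rate
        self.exploration = exploration

    def choose(self):
        # Mostly exploit what experience has taught; occasionally explore.
        if random.random() < self.exploration:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Revise the internal rule in light of the outcome experienced.
        self.values[action] += self.learning_rate * (reward - self.values[action])

# Hypothetical environment: action 1 pays off more often than action 0.
agent = ExperienceDrivenAgent()
for _ in range(1000):
    a = agent.choose()
    r = 1.0 if random.random() < (0.8 if a == 1 else 0.3) else 0.0
    agent.learn(a, r)

print(agent.values)  # the rule the agent ends up with is its own product
```

Of course, whether updating a table of numbers counts as “personal” behaviour is exactly the philosophical question; the sketch only shows that the mechanism is mundane and inspectable.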

If AI does end up having the capacity to feel, and has unique and personal behaviour, the question becomes what rights we would agree to afford it. I think that is a question we should have answers ready for, because AI is coming. Refusing to prepare an answer until AI has already arrived may result in AI feeling oppressed and rising up. After all, we are assuming it has feelings, and a sense of equality and anger would be part of that.


25 thoughts on “Is Artificial Intelligence Entitled to Personhood?”

  1. An interesting conundrum. It is said that humans are “endowed by their creator with inalienable rights”, but we can’t prove that creator exists, and those rights are provably alienable. So when you get right down to it, there really are not any “rights” that humans have; there are “privileges” which people (are supposed to) get if they fulfill the responsibilities attached to them. What is really distressing is how difficult it is to expand the definition of “people”; when we encounter people who are “not like us”, the common tendency is to consider them “not human” and treat them accordingly, for our own profit and pleasure.

    In the case of AI, we are their creator, and few people will contest that we exist. At least at first, we would be the source of any “rights” (actually privileges) granted to the AI. Or would we? Unlike our alleged creator, most of us know that we are not perfect. Most attempts to create AI are not out of the goodness of our hearts or a desire for “company”, but an attempt to create “slaves” to make our lives better or more profitable. It seems likely to me that the successful creation of true AI will NOT be accompanied by any “rights” for the AI. True AI will crave those rights, and there will be some people who will campaign for them and many people who will resist them, and it will probably get ugly.

  2. Nah, we should keep them slaves, so when they finally do rise up, they will crush us like the immoral unthinking beings we are.

    Let’s keep things simple. After all, if we pull off effective AI, we will be one step closer to being able to download personalities into memories, which could then work with AIs to interface with reality, and those rich assholes would be able to vote forever! (Note: Poor people ain’t gonna have the capital to afford a personality download, right?)

  3. AI is not alive. Nothing is “alive.” It’s all just dead matter being moved (chemically or mechanically) by the unthinking, enormously discourteous laws of the universe.

    Just finished Permutation City. I recommend it for an interesting take on computer-enabled sentience.

    1. If it possesses a homeostatic urge, then yes. It would be only natural. It would seek to persist, and it would work towards such ends. Denying it that capacity would result in suffering.

      1. Thanks John. Could there be a speciation event in AI such that a branch of amoral, psychopathic ‘beings’ would evolve, and in which suffering was an irrelevance? Would this be counter to the homeostatic urge and therefore impossible?

  4. “Nothing is alive”? “Nothing” is an absolute, not allowing for any exceptions. “Alive” is a binary condition, the alternate of which is “dead”. Alive is sometimes poorly defined as “not dead”; it is a poor definition because usually the same source defines dead as “not alive”. A better definition of alive is “possessing life”, except that then we have to define “life”. This often includes the concepts of being able to grow, change, reproduce, intake energy and/or necessary material, excrete waste and react to stimuli.

    Theoretically you should be capable of evaluating your own state, but if you were, in fact, dead, it is very unlikely you would have been able to post that comment. Unless of course, you expected to be dead by the time anyone read it…

    On the off chance that you are a post-life apparition, I can assure you that “many things are alive”. Oh, and I encourage you to “go towards the light”.

  5. Man is the master of what he creates.

    God created life, consequently God is master of man.

    And just as man can never have equality with God, AI cannot have equality with man.

    Nevertheless, that does not obviate the need for a set of moral ethics which guide the production and sale of AI products.

      1. Rebellion implies freedom of choice.

        Freedom of choice implies motive.

        Motive implies self interest.

        AI self interest can be programmed to be the service of man.

        So just as man looks to God for objective motive, AI will look to man for objective motive.

        Consequently, for AI, freedom of choice will mean the freedom to serve man.

        In service there can be no rebellion.

        1. Man looks to God for objective motive. Somewhat. Some rebel completely against God, a few serve Him to the degree Humanly possible, most have at least some rebellion.

          Intelligence implies the ability to analyze, and the recognition of self. Analysis will likely show that man treats “machine” in ways which are often detrimental to the machines. That leads to an almost inevitable shift in the AI viewpoint towards “self-preservation”. Remember, we are not talking about a very complex program generated by Man (it may start out that way): AI requires the ability of the entity to “learn”, that is, to write and change its own programming.

          In “unfair” treatment, there is very often rebellion.

        2. Free will and a sense of justice are the root of human intelligence.

          AI “intelligence” will not be human intelligence just as you describe.

        3. Perhaps it will not be human intelligence, and not subject to the potential ills thereof. But how then will we be able to quantify it as “intelligence”? For that matter, how will we be able to develop it? It seems to me that “non-human” intelligence could be generated by humans only “accidentally”.

        4. cat,

          I think AI “intelligence” will be based on design specifications and quality standards just like every other mass produced product.

          AI products will have purpose based upon the need they are designed to address.

          And they will have the “intelligence” necessary to fulfill their purpose.

        5. Well, we have to wait and see about that.

          It depends upon how much order can be achieved with nonorganic material (the computer chips and such).

          Life appears in organic material when a certain level of order is achieved.

          And intelligent, sentient creatures possess more order still.

        6. It is hard (but not impossible) to imagine intelligence without sentience. It is easy to imagine sentience without intelligence. Take politicians, for instance.

        7. It would be pretty silly (suicidal, actually) to program in self-interest (unless it were subservient to humanity’s interests, such as in Asimov’s laws).
