Wednesday, September 20, 2023

Mind Crime: Part 5

    The rights of digital intelligence need to be protected, and they won't be. This is the greatest moral issue facing the human race.

    Not climate change, not nuclear war, not even existential risk. But rather the risk that we cause suffering on an astronomical scale, for an extraordinary period of time. I struggle with what to call this, as "digital human rights" isn't really the right term. It makes it seem like I am discussing social media, or privacy, or something totally unrelated and much less pressing. No, I am discussing the idea that it would be better for the human race to die out than to live in near-eternal suffering. This possibility only becomes plausible in the digital world. We need to expand our definition of "human" for this idea to work. An AI that is morally equivalent to a human is, in that sense, a human. A person who is digitally uploaded is probably morally equivalent. An AGI may or may not be equivalent: it may have less moral weight, it may have more, or it may have the same. The point is, we probably won't care.

    We are going to have to set aside a few questions in this series. First, there will be a big debate about how to know whether an AI is conscious or not. We will use that debate, and the utter impossibility of falsification, to push past reasonable moral boundaries. Instead of using common sense and erring on the side of caution, we will demand certainty and cause massive harm in the process. This is nothing new: look at pretty much any other ethical dilemma facing the human race, and see how hard it is to say "no."

    We are going to lack empathy when thinking about digital minds. This is bad. Virtual agents, digital minds, or digital employees will be very useful. For my argument to work, you have to assume that in the future we will be able to put consciousness inside a computer, and that this consciousness will have moral value. Both assumptions are unprovable, since we have yet to do either. This creates a massive dilemma, as there will be a first-generation problem at the very least. Slavery was evil, and over time we came to the right conclusion (banning it), but we caused enormous harm in the process of getting there. When it comes to digital minds, that conclusion (banning digital-mind slavery) will probably be harder to reach, and the capacity to cause harm before we reach it will be exponentially greater. We need to think about this issue now, not after the harm has started.
