Wednesday, September 20, 2023

Mind Crime: Part 4

     The treatment of digital minds will become the most important ethical dilemma not only of the next century, but of the remaining lifespan of life itself. "Human" rights in the age of AI will expand the definition of human. These are issues worth discussing, at the very least. They may seem too futuristic for many. But if you had drafted the Bill of Rights in 4000 B.C., no one would have had any clue what you were getting at; that doesn't mean you would have been wrong. In the world of investing, being early is the same as being wrong. In the sphere of ethics, being early will get you mocked, but you may actually have an impact. One of the problems with actually examining the rights of digital minds is that we are dealing with eventual ASI. This ASI will probably not care about whatever laws we silly humans put in place now, and even if we do draft a Bill of Rights for Digital Minds, there is no reason the ASI will "take it seriously." By this, I mean there are plenty of alignment problems wrapped up in this as well. Still, I would rather have an ASI with some awareness of these principles than not.

    Here is a thought experiment. One person on Earth, out of eight billion people, is chosen at random. This person is given a pill, and they become 1,000 times smarter than every other person on Earth. Well, what is going to happen? With such a tilted power dynamic, how do you ensure that everyone else isn't enslaved? Maybe this level of intellectual capacity makes the rest of us the equivalent of lizards, or bugs, compared to this "higher being." To make sure the rest of us are protected, it makes little difference what rules or regulations are put in place around the world. What actually matters is: what does this individual think of morality? Maybe how they were raised will matter a lot (the practices and social customs they were brought up in), or maybe a lot of this is "shed" after they reach some intellectual capacity that makes them aware of the exact cause-and-effect meaning behind each one of their beliefs. Maybe they look through the looking glass and become completely rational and unbiased, putting all available past information into its rightful place. Or maybe the world is less risky as a result of the customs instilled in them.

    Obviously, the trek to ASI will look much different. What I am referring to is having some data ingrained into the system that might increase the probability that a future ASI cares about the rights of digital minds. I think that increasing awareness of this issue is a good proxy: if the engineers and the greater society have zero motivation to actually care about this, the future ASI will probably not care either. Also, if we understood the suffering risks associated with mind uploading and AI advances, maybe we would calm down a bit. Maybe we campaign against mind uploading until we have a new Bill of Rights signed, and thus the accidental "whoops, we simulated this digital person and left it running overnight, and it lived an equivalent ten thousand years in agony" scenarios become less likely.

    There is a question of how digital mind rights will function with ASI, especially when it has an objective function. The whole meta- and mesa-optimizer debate, and the role of training data, is complicated and beyond the scope of my ideas here. My point is simply that it may be better to have some guidelines that are well thought out than none at all.

