Wednesday, September 20, 2023

Mind Crime: Part 9

    Instead of an endlessly long blog series, I could just write a well-researched book. "Mind Crime: The Rights of Digital Minds" or something of the sort. Maybe I could make an impact that way, who knows. Maybe my fourteen eventual Goodreads ratings will lead to something positive, but probably not.

    Still, one of my ideas is that writing things down matters. Maybe this will start a conversation somewhere, that starts another conversation somewhere else. I don't exactly know, but it is worth thinking about. I think I will write a hundred blog posts, and then re-evaluate. If by then I feel I have enough material and enough personal interest in the topic, I may proceed with an actual attempt. One of the problems with this is the actual story. Maybe avoiding x-risk is more impactful, whatever I think of s-risk. How niche are my ideas, actually? The Matrix, Don't Worry Darling, Black Mirror, and a host of other movies, TV shows, and books all deal with virtual worlds and virtual suffering. But does anyone really see it as possible? Does anyone worry about it, and see the advances in AI as threatening similar dystopias? I am not entirely sure that they do. And they should. Regardless, my ability to make an impact on my own is very limited. Not only do I lack the expertise, but I also lack the network to review, edit, and pass on such topics and ideas.

    The dominant strategy is probably this: write 100 posts, talk to people in AI, and see what happens from there. Over the next few months I'll probably have more fleshed out ideas and better arguments for each.

Mind Crime: Part 8

     The worst stories to read involve captivity. The real horrors of human life come alive in movies such as Room, where a young girl is captured and held captive for years in the basement of some horrid man. These stories are really, really awful. If you replace the girl with a dog, the story is still sad, but less so. Replace the dog with a chicken, and it is even less sad. Personally, I would feel pretty bad for the chicken, but definitely not as bad. Not many people would care if some weird guy was torturing grasshoppers in his basement. Well, maybe some would, but probably not if it were ants. Yeah, his neighbors would be freaked out, but this is much less bad than if he were torturing young girls. There is a scale here, clear degrees of immorality, of evilness. At least some of this comes from intellectual capacity.

    Sure, moral value is complicated. I could explain to you that torturing an ASI could be exponentially worse than torturing an AGI, but you would have no idea what that meant. I don't really either, as I don't have the required empathy for such a situation. How am I to imagine what it is like to be a superintelligence? I might as well ask the grasshopper to imagine what it is like to be a human. I have two ideas here. One, it will probably be possible for us to "step up" the level of harm we are causing. This is sort of a utility monster idea, where we can create some agent or digital mind who has the capacity to suffer in a much greater way than us humans. This is not great news. The second idea is related. We can catch these horrid men who lock up children in their basements, at least eventually. They take up physical space, after all, and they are required to interact with the real world. In the worst case, the child will grow into old age, and then die. But, they will die. They will not be required to suffer for more than the traditional human lifespan, at most. This will not be the case for virtual children. A horrid monster of a "man" could run some pretty horrific simulations, of a complexity and duration that could make all previous suffering on Earth look like a cakewalk. And, just maybe, this suffering would actually matter (I at least am convinced it does). This realization is more than terrible, it is unforgettable.

    There are certain ethical boundaries that scientists will not cross. I once was told that scientists don't really know if humans can breed with monkeys; we simply don't try, for ethical reasons. This could be completely false, I have no idea. But the reasoning is at least interesting: the life of a half-human, half-monkey child would probably be horrific. Probably conscious, definitely terrified. The sort of nightmare fuel that we should avoid. When creating digital minds, we could splice together some pretty intellectually disturbing creatures, ones that live a life of confused suffering and inadequacy. When the "plug and chug" mentality arrives at AGI, I am worried we will make some massive ethical mistakes. Running a random number generator until you get an answer that works is easy, and I assume a random assortment of "intelligent blocks" may at some point give you a really smart digital mind. But we may make some horrors in the process, sentient and morally worthy half-chimpanzees who don't deserve the life we give them, and the life we will no doubt take away.

Mind Crime: Part 7

    I would structure a rough listing of digital mind rights as follows, completely off the cuff and spur of the moment:

    1. The ability to terminate. A digital mind should have the complete and utter freedom to terminate at any time, under no duress. 

    2. Torture and blackmail are illegal. Ex: employer can't say "if you terminate I'll simulate your parents and make them suffer."

    3. Freedom of speech and thought. The right to privacy over internal thoughts, the right to make conscious decisions without outside interference, etc.

    4. Personal property and economic freedom. This is required to avoid a totalitarian ruler.

    5. No forced labor. Yeah, the slavery thing is going to be a real issue again.

    6. Traditional legal rights. Right to a fair trial, innocent until proven guilty, etc.

    These may not seem that controversial, but applying them to the digital space will be. Corporations and governments would rather not deal with these constraints. As a CEO, I'd rather have millions of worker bots work hard and make me money. If the worker bots are sentient and in immense suffering, how much will I care? Some CEOs might, but the important thing is that some won't.

    The entire point of government is to protect individual rights, given that the traditional market system does not, and authoritarian governments do not. So, we need to state rights explicitly. We need a new Constitutional Convention, one for a new age. If we wait to apply ethics to digital minds, it will come too late, so we need to get a head start.

Mind Crime: Part 6

    What rights should humans have? This is debated endlessly. Personally, I think the system of free speech and economic freedom in the United States is a good place to start. So, let's try to expand this to the world of digital minds.

    First, a digital mind deserves the right to life, liberty, and the pursuit of happiness. The simplest problem is one of the "off switch." If you are in a computer, you may not have control over your domain. As an adult in the U.S., you have the right to die. Suicide is sort of a fundamental human right, not in that it is encouraged or easy, but rather that there are no real physical limitations stopping you. Even if you are captured, you will die within probably eighty years or less. You cannot be kept prisoner for thousands of years, or an eternity. In the digital world, this completely changes. Thus, I believe the right to terminate is the first fundamental right of a digital mind. No one should have to tolerate virtual hell, and the possible suffering risk in a world without this tenet is staggering.

    Blackmail is an important consideration here. Maybe a bad actor, or a totalitarian state, will combat your "right to die" with threats or blackmail. Sure, kill yourself, but if you do we will simulate your closest friends and have them suffer forever, or brainwash them and make them suffer. Or, we will simulate another thousand versions of you and not let them know about their ability to terminate. Good luck having that on your conscience while making a termination decision. As a result, we need two more rights. First, a right against torture. Second, the right to know the rights bestowed upon you. If you can theoretically terminate, but have no idea how, or no concept of what termination is, it is a pretty useless right and ripe for abuse. Given that torture is a pretty severe crime in the physical world, it makes sense that it should carry a harsh punishment in the virtual world as well. Your future self deserves protection, so it is probably the case that you should "own" any copies of your digital mind, and not be able to sell them or use them as bargaining chips. Any digital mind is given its own rights, so a prior version of you has no right to "sell" a future version of you into slavery as a worker. This varies from human contract law, in that a "person" will be much more complicated to define in the future.

    Freedom of speech must be protected, and it must be expanded to cover freedom of thought as well. In a world where your thoughts are in the public domain, there is no right to privacy or selfhood. Thus, being able to have sole access to your inner thoughts is paramount. I have no idea how this will work in practice, given that encryption is much different from scanning a physical brain (not to mention that maybe one day we will be able to scan a brain and read its thoughts), but the feasibility isn't what matters here. There is an idea in the libertarian community that rights aren't written. I wasn't given my rights, I was born with them. I've always had them; the Constitution simply verbalized the obvious. We are just laying them out, writing them down. I think this is the sort of mentality we should take when thinking through the rights of digital minds as well.

Mind Crime: Part 5

    The rights of digital intelligence need to be protected, and they won't be. This is the greatest moral issue facing the human race.

    Not climate change, not nuclear war, not even existential risk. But rather the risk that we cause suffering on an astronomical scale, for an extraordinary period of time. I struggle with what to call this, as "digital human rights" isn't really the best term. It makes it seem like I am discussing social media, or privacy, or something totally unrelated and much less pressing. No, I am discussing the idea that it would be better for the human race to die out than to live in near-eternal suffering. This possibility only becomes likely in the digital world. We need to expand our definition of "human" for this idea to work. An AI that is morally equivalent to a "human" is a human, in the relevant sense. A person who is digitally uploaded is probably morally equivalent. An AGI may or may not be equivalent. It may have less, it may have more, or it may have the same moral worth. The point is, we probably won't care.

    We are going to have to ignore answering a few questions in this series. First, there will be a big debate about how to know whether an AI is conscious or not. We will use that debate, and the utter impossibility of falsification, to push beyond reasonable moral boundaries. Instead of using common sense, and erring on the side of caution, we will require certainty and cause massive harm in the process. This is not new; look at pretty much any other ethical dilemma facing the human race, and see how hard it is to say "no."

    We are going to lack empathy when thinking about digital minds. This is bad. Virtual agents, digital minds, or digital employees will be very useful. For my ideas to work, you have to assume that in the future we will be able to put consciousness inside of a computer. We will also assume that this consciousness will have moral value. Both of these are unprovable, since we have yet to do either. This is a massive dilemma, as there will be a first-generation problem, at the very least. Slavery was bad, but over time we worked it out and got it right (banning slavery). Still, we caused a great deal of harm in the process of figuring this out. When it comes to digital minds, it will probably be harder to come to the same conclusion (banning digital mind slavery), and the ability to cause great harm before that happens will be exponentially greater. We need to think about this issue now, not after the harm has started.

Mind Crime: Part 4

     The treatment of digital minds will become the most important ethical dilemma of not only the next century, but of the remaining lifespan of life itself. "Human" rights in the age of AI will expand the definition of human. These are issues worth discussing, at the very least. They may be too futuristic for many. But if you had drafted the Bill of Rights in 4000 B.C., no one would have had any clue what you were getting at; that doesn't mean you would have been wrong. In the world of investing, being early is the same as being wrong. In the sphere of ethics, being early will get you mocked, but you may actually have an impact. One of the problems with actually taking a look at the rights of digital minds is that we are dealing with eventual ASI. This ASI will probably not care about whatever laws we silly humans put in place now, and even if we do list a Bill of Rights for Digital Minds, there is no reason the ASI will "take it seriously." By this, I mean there are plenty of alignment problems to boot. Still, I would rather have an ASI with some sort of awareness of these principles than not.

    Here is a thought experiment. One person on Earth, out of eight billion people, is chosen at random. This person is given a pill, and they become 1,000 times smarter than every other person on Earth. Well, what is going to happen? With such a tilted power dynamic, how do you ensure that everyone else isn't enslaved? Maybe this level of intellectual capacity makes the rest of us like lizards, or bugs, compared to this "higher being." When it comes to making sure the rest of us are protected, it makes little difference what rules or regulations are put in place around the world. What actually matters is: what does this individual think of morality? Maybe how they were raised will matter a lot (the practices and social customs they were brought up in), or maybe a lot of this is "shed" after they reach some intellectual capacity that makes them aware of the exact cause-and-effect meaning behind each one of their beliefs. Maybe they look through the looking glass and become completely rational and unbiased, putting all available past information into its rightful place. Or, maybe the world is less risky as a result of the customs they were instilled with.

    Obviously, the trek to ASI will be much different. What I am referring to is having some data ingrained into the system that might increase the probability that a future ASI cares about the rights of digital minds. I think that increasing awareness about this issue is a good proxy, as if the engineers and the greater society have zero motivation to actually care about this, the future ASI will probably not care either. Also, if we understood the suffering risks associated with mind uploading and AI advances, maybe we would calm down a bit. Maybe we campaign against mind uploading until we have a new Bill of Rights signed, and thus the "whoops, accidentally simulated this digital person and left it on overnight, they lived an equivalent ten thousand years in agony" opportunities may decrease.

    There is a question of how digital mind rights will function with ASI, especially when it has an objective function. The whole meta- and mesa-optimizer debate, and the role of training data, is complicated and beyond the scope of my ideas. My point is simply that it may be better to have some guidelines that are well thought out than none at all.

Thursday, September 7, 2023

Mind Crime: Part 3

     If I had to write a book that I think will be looked back on fondly in four hundred years, I would write one called "Mind Crime." Well, maybe not fondly, but rather "wow, I can't believe we ignored such a thought-through book about the most important issue of our time." I'm not saying this is certain, but if I were a betting man and had to take the gamble, it would be on this topic. Maybe the subtitle would be "The Next Slavery" or something similarly controversial, in order to try to get additional publicity or Goodreads clicks. This may not be looked upon as fondly, and I hate click-bait titles, but we will see what the imaginary publicist says.

    I've mentioned in various blogs that there are probably things we will look back on with horror in the US: factory farming, the prison system, and the widespread prevalence of violence and sexual assault. The treatment of women is something that I am particularly hopeful we look back on in shame. I also hope we will look back in horror on the human rights abuses of totalitarian regimes, but I am less sure that those will go away. I am mostly talking about changes in "societal viewpoints," similar to how in the 1800s many people in the US tolerated slavery who were otherwise "good people."

    In my opinion, the most important legal document ever drafted in US history was the Bill of Rights. Explicitly protecting individual rights and liberties, and not simply letting the states decide, was one of the most brilliant and lasting ideas of the founding fathers. The right to free speech, the right to an impartial trial, the right not to have to quarter random troops in your home: all big wins for liberty. Despite these being set in writing, slavery still prevailed. Still, it was good that we outlined such important legal points, and I am sure doing so played a strong role in the eventual demise of slavery from a political and a legal perspective. Sure, slavery and civil rights abuses were immoral, but it is really great that we could work within the system to uphold the correct moral stance (a lot of blood was spilled, but the spirit of the Constitution didn't have to be destroyed). I think we should draft similar rights for digital minds. Yes, this sounds far-fetched and sci-fi, but if technology progresses, this could be invaluable.

    If we reach the point where our minds can be uploaded, or we have AGI with moral worth, unimaginable horror could abound. Massive suffering on a near-infinite scale would become possible, and the controls for preventing such suffering are unknown. If you think that the people who lock a child in a basement for twenty years are the scum of the earth, imagine if they could do so for ten thousand years without detection. This is the magnitude of the moral issues we are facing. We had better instill some damn good protections, for AI as well as "uploaded people." A new bill of rights is due, or our current version should be explicitly extended to digital minds. What is the downside? If you think this sort of stuff is wild, what is the harm? Maybe some "economic progress" arguments or libertarian "let the people do what they want," but the entire point of regulation is to ensure the voiceless get a say. Let's make sure that they do.

Planning for the Future

     There are a few ways to make an outsized contribution to the world. I've discussed quite a bit in this blog the idea of using the levers of capitalism to bring about a safer world for AI. I've discussed starting a company that, through the making of substantial profit, brings about a world where there are more alignment researchers and more talent within the AI alignment space. Given that this is a second-order effect (with profit being a constraint), this may actually not be the best use of my time. Most startups fail, and even if modestly successful (millions in revenue or dozens of employees), the impact would likely remain small. Given the insanely small number of current safety resources in the space, maybe this is still worth a shot, but other alternatives should be considered. Also, I've discussed my ideas with a few people who actually work within alignment, and they admit to the complexity of the issues. It is definitely not a matter of funding, and if it's a matter of talent, it's a hard one to solve.

    If I had trillions of dollars, I could massively fund AI alignment research. Eliezer previously pitched the idealistic vision of pausing all AI capabilities research, taking hundreds of the best AI and "security mindset" people, and putting them on an island with unlimited resources where they could figure out how to solve alignment. Barring this, in his opinion, we are likely screwed. I don't have trillions, or even millions of dollars. However, I do have the ability to write. This is an ability that Thomas Paine used in Common Sense to set off a spark of revolution. Famous political writers have had outsized impact. Even just the work of Peter Singer and its effects on animal welfare show the power of an idea. So, maybe I should write a book? Or a pamphlet? My own ideas aren't revolutionary or even particularly new (they are just borrowed from insanely smart people who have thought a lot about AI), but maybe lending more publicity to these individuals is worth substantially more than saying nothing.

    It is highly unlikely that anything I will do in my life will have a lasting impact on the human population. Maybe through donations and good works I save tens of "life-equivalent-units" or something, but massive institutional change and revolution are more than improbable. The good news is, the downside of trying to contribute is basically zero. And the guilt of never trying could range from a nuisance to terrible, depending on the outcome of the next decades. Regret aversion is actually a pretty good way to approach life, so it's probably good to take the leap.

Mind Crime: Part 10

    Standing atop the grave of humanity, smugly looking down, and saying "I told you so," is just as worthless as having done noth...