Friday, July 21, 2023

The World After AGI

     Let's assume that alignment works. Against all odds, we pull it off and we have human-level AGI in the hands of every man, woman, and child on the planet Earth. The type of AGI that you can run on your smartphone. Well, things are going to get really weird, really fast.

    Honestly, maybe the good years will all be pre-AGI. Maybe we should enjoy our uncomplicated lives while they last, because traditional life is coming to an end. From a governance standpoint, I have absolutely no idea how we will regulate any of these developments. Having an actually coherent supercomputer in my pocket, one that can do everything I can do except way faster and better, does more than just make me obsolete: it makes me dangerous. If AGI becomes cheap enough for me to run multiple copies, I now have an entire team, or an entire company. An entire terrorist cell, or an entire nonprofit organization. Really, the only constraining resource is compute. With an AGI as fast as GPT-4, I could write books in the time it now takes me to write a page. Sure, AGI will probably start out very slow, but incremental increases would lead to a world with trillions more minds than before.

    Not only is this a logistical nightmare for governments; it is also a human rights nightmare for effective altruists. I have no idea how we will police mind crime, and if the shift toward fast AGI is rapid, we'll probably cause a whole lot of suffering. We'll also probably break pretty much every system currently set up. Well, fortunately or unfortunately, we likely won't actually solve alignment and won't have an AGI that is actually useful for our needs. We'll probably hit a similar level of rapid intelligence growth that breaks everything and maybe kills everyone, but at least we won't need to worry about drafting legislation that governs the use of our digital human equivalents. I guess that's the good news?

Wednesday, July 5, 2023

Computer Models With Moral Worth

     At what point does an optimization function have moral worth? If you break down the psyche of a bug, you could probably decode the bug's brain into a rough optimization function. Instinct can be approximated, and most living creatures operate mostly out of a desire for survival and reproduction. There is some randomness baked in, but the simpler the brain structure of an animal, the more it resembles a computer program. Some computer models are very complex. I would estimate that the complexity of a model such as GPT-4 is vastly greater than the complexity of some animals, and definitely greater than that of a bug.

    Do bugs have moral value? This is a hotly debated topic in the effective altruism community. Personally, I don't really think so. If I found out that my neighbor was torturing fruit flies in his basement, I would think my neighbor was weird, but I probably wouldn't see him as evil. Scallops? No. Frogs? A bit worse for sure. Pigs? Cats? Dogs? Chimpanzees? Humans? Well, there is obviously a sliding scale of moral worth. Where do computer models fall on this spectrum? Right now, the vast majority are probably morally worthless. Will this remain the case forever? I highly doubt it. We really have no idea when these thresholds will be crossed. When is a large language model morally equivalent to a frog, and when is it morally equivalent to a cat? Obviously, if we think cats have moral worth even though they are not sapient, we should care whether computer models are treated with respect even if they are not human level. I foresee this being an extremely important moral conversation for the next century. Unfortunately, we will almost certainly have it too late.


    Flowers for Algernon is one of my favorite books of all time. The plot is simple: a man with an intellectual disability undergoes an experimental procedure that makes him smarter, until he becomes a genius. This storyline is repeated in a few other forms of media, probably most famously in the movie Limitless, a film about another man given a pill that makes him smarter. In both of these stories, the main character quickly becomes superior to other humans. We read these stories and realize that the smartest person in the world could probably be the most powerful. After a certain number of standard deviations upward, it is pretty obvious that such an individual could exercise an extremely large amount of control over the world. In Ted Chiang's 1991 short story "Understand," superintelligence is shown in an even more convincing fashion. The main character reaches the upper limit of human intelligence, and he determines that the only path toward further intelligence would require uploading his mind into a computer.

    Let's clarify a few things. One: our minds are basically pink mush. We evolved randomly from the swamp, and due to the anthropic principle (an observation selection effect) we can sit around and think about our lives abstractly. Two: there is clearly an upper limit on the computations that a physical substrate such as the human brain can handle. Our minds were not designed for intelligence outright, and they are made out of mush. Three: computers probably don't have these limitations. We haven't found anything particularly special about the human brain, and given enough time we can probably replicate something similar in a computer. Brains don't seem to rely on anything exotic (like quantum computation), and our progress toward AGI shows no signs of slowing. Despite all of this, many people still discount the power that a superintelligent being would have over humanity. Maybe we should make books like those mentioned above required reading. Then, maybe humanity will begin to Understand!

Mind Crime: Part 10

    Standing atop the grave of humanity, smugly looking down, and saying "I told you so," is just as worthless as having done noth...