Friday, May 26, 2023

Utopia Now

    There is an argument for increasing the rate of AI progress. Maybe the probability of other existential risks is too high, and we simply cannot wait around for another 100 years. If nuclear war were destined to happen within the next ten years, I am certain that we would be pushing as fast as possible towards AGI. In some sense, your drive to be reckless is highly correlated with your pessimism regarding where things are going. If you think humanity is on a great linear trajectory towards utopia, there is no use in throwing in random variables that can mess things up. If AGI has a 10% chance of killing us, and you are fairly certain that in two hundred years the human race will be flourishing, it is probably not worth developing AGI. If you are pessimistic about humanity's prospects, maybe we take the 10% risk.
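    To make that tradeoff concrete, here is a minimal sketch of the decision. The only number taken from the text is the 10% chance of AGI killing us; the baseline levels of other existential risk assigned to the "optimist" and "pessimist" are illustrative assumptions, not estimates anyone has actually made.

```python
# Illustrative decision sketch: when does racing toward AGI "pay off"?
# All probabilities besides the 10% figure are made-up assumptions.

def p_flourish_build_now(p_agi_doom: float) -> float:
    """Chance of a good outcome if we build AGI now and accept the doom risk."""
    return 1.0 - p_agi_doom

def p_flourish_wait(p_other_xrisk: float) -> float:
    """Chance of a good outcome if we wait and ride out the other existential risks."""
    return 1.0 - p_other_xrisk

P_AGI_DOOM = 0.10  # the 10% figure from the text

# The optimist thinks other existential risks over the waiting period are small;
# the pessimist thinks nuclear war or an engineered pandemic is fairly likely.
for label, baseline in [("optimist", 0.02), ("pessimist", 0.30)]:
    build = p_flourish_build_now(P_AGI_DOOM)
    wait = p_flourish_wait(baseline)
    choice = "build now" if build > wait else "wait"
    print(f"{label}: build={build:.2f}, wait={wait:.2f} -> {choice}")
```

    On these toy numbers the optimist waits and the pessimist races, which is exactly the correlation between pessimism and recklessness described above.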

    The world is full of authoritarian governments that are doing terrible things. Two of the three military superpowers (Russia and China) have horrible human rights track records and a strong drive towards increasing their power and influence. Russia recently invaded a sovereign country, and China is doing very, very bad things (oppression of the Uyghurs, the Hong Kong takeover, general police-state tendencies). The West is constantly on the brink of nuclear war with these countries, which would result in billions of deaths. Engineered pandemics become both more likely and more dangerous over time: the barriers to creating such viruses are being knocked down, and the world is becoming more and more interconnected. What are our odds? If our odds of dying off soon are great, or if it will take us a long, long time to reach a place where most humans are free and thriving, maybe we make the trade. Maybe we decide that we understand the risks, and push forward. Maybe we demand utopia, now.

    Well, there is another problem with AI: suffering risk. This is not often discussed, but there is a very real possibility that the development of transformative AI leaves the world in a much, much worse place than before (e.g., an ASI decides it wants to torture physical people, or simulate virtual hell for a large number of digital minds, for research purposes). Another factor in your AI hesitancy should be your estimated probability of a perpetual dystopia. This is where I differ from other people. I believe that the risk of things going really, really wrong as a result of AI (worse than AI simply killing everyone) is massively understated. We should hold off on AGI as long as possible, until we have a better understanding of the likelihood of this risk.
