Monday, April 17, 2023

We Need to Speed Up, Not Slow Down

     At the moment, there is a lot of discussion about putting a pause on AI capabilities research. An open letter from the Future of Life Institute has been signed by thousands of researchers, urging a six-month pause on the training of models more powerful than GPT-4. I would love for this to happen, as society could then take more time to absorb the impact of such a large technological shock. We would have more time to debate, discuss, and regulate. However, this is obviously an empty gesture. Someone with a tremendous ego and an even more impressive lack of character will simply sign the letter and then immediately start his own AI lab focused on creating an AGI. His name is Elon Musk. China is not going to slow its progress, which means the U.S. government has no incentive to slow either. If GPT-4 is a calculator, then Bard is a bundle of sticks, so there is no chance Google is going to sit on the sidelines for six months. What people fail to realize is what I stated in a previous post: the first trillionaire will be someone who owns a very large stake in an AI development company.

    The financial incentive to build AGI is not merely enormous; it is the largest financial incentive we have ever seen, and possibly the largest we will ever see. This will be an arms race to the finish no matter what the talking heads say on television or in Congress. I vehemently disagree with the idea that we should spend our time campaigning to slow down capabilities research. It is simply not possible. The financial incentives are too massive, and anyone who would actually obey an order to halt progress (an order that is completely unverifiable and ungovernable) is probably a more upstanding person, and their compliance would simply leave development in the hands of less ethical people. I understand there is probably a "good guy with a gun" fallacy lurking here, but I really don't see why we should trust anyone to act against their own self-interest. Instead, we should be massively boosting alignment research.

    Since there is a lot of overlap between alignment and capabilities research (an aligned system is more capable, or will at least appear more trustworthy and be given greater responsibility even if it has existential flaws), the market will fund much of the near-term alignment work on its own; we should focus on long-term value alignment. I could not care less about solving interpretability or distributional shift. Someone else is either going to do this or not, and in either case there is already a massive financial incentive at work. Also, if we knew why a neural net made every decision, I am not sure whether that would be good or bad for humanity at this point. The question we should ask is: "where is there not a massive financial incentive?" Some sort of long-term value alignment, sure. The kind of "shoot for the moon" research that only pays off if we hit AGI and go, "oh wow, looks like superintelligence is pretty much imminent and we have no idea what we are doing." We should be spending trillions of dollars on this sort of research, not zero.

