Thursday, May 11, 2023

The World Will End Because Math is Hard

    Every machine learning book I read leaves me baffled. How on earth can anyone understand this stuff? Not at a surface level, but how can anyone really master statistics/probability/calculus/linear algebra/computer science/algorithms to a degree where they actually understand what all the words in these 1,000+ page books mean? Even a summary such as "The Hundred-Page Machine Learning Book" leaves me with more questions than answers. Now to learn all of that, and then to layer on the decision theory/economics/ethics/philosophy required to have a positive impact on AI alignment, seems pretty unreasonable. A lot of people pick a side, either specializing in cutting-edge deep learning frameworks or armchair philosophizing. The AI capability people tend to underestimate the philosophical complexity of the problem, and the AI ethics people tend to completely misunderstand how current machine learning works. Maybe there are a few who can master all of the above subjects, but it is more likely that a combination of people with deep expertise in distinct areas will provide better solutions. It is pretty clear that I will not be one of the individuals who invents a new, more efficient learning algorithm or discovers a niche mathematical error in a powerful AI product. Focusing on AI risk management, a massively underdeveloped industry, is probably the way forward for me. The math is simply too hard, maybe for everyone. But someone is writing the books. If we can get a few people who understand the complexity of the issue into the right positions, maybe we can cause some good outcomes.
    
    One of the benefits of focusing on risk management is that you can make money and not feel guilty about it. "Oh no, people working on AI safety are making too much money." Have you heard that before? I for sure haven't, and I would like to. To someone who believes in markets, that statement rings similar to "oh no, people are going to be massively incentivized to have a career in AI safety." What a problem that would be. And competition isn't even a bad thing; an arms race toward safer products would be quite interesting. "Oh no, China is catching up and making safer AI systems than the US." I would pay to hear that. Obviously, sometimes alignment is really capabilities in disguise. I have touched on this previously, but deciding what exactly makes systems safer and what makes them more powerful is pretty hard.

    I briefly pitched Robert Miles a few weeks ago on some of my ideas, mainly an AI risk management industry that would provide more profitable employment opportunities for alignment researchers. His response:

"I guess one problem is the biggest risk involves the end of humanity, and with it the end of the courts and any need to pay damages etc. So it only incentivizes things which also help with shorter term and smaller risks. But that's also good. I don't have much of a take, to be honest."

    I am a newbie to this field and Robert is the OG (someone who understands the entire stack). His take is entirely fair, as companies will only be incentivized to curb the short-term risks that directly affect them. The elephant in the room is obviously the end of humanity, or worse. People who don't see this as plausible simply need to read "The Doomsday Machine" by Daniel Ellsberg. All this talk of nanotechnology makes us miss the obvious problem: we are a hair's breadth away from worldwide thermonuclear war at every moment. I wonder how things will change when a powerful, unaligned AI starts tightening its grip on such a world. Longtermists drastically undervalue the terror of events that kill 99% of people instead of 100%. As for long-term AI alignment, I think the number of researchers will matter, and I hope people in the AI safety industry would be incentivized to study long-term alignment outside of work hours. Maybe I'm wrong and there isn't a strong impact, but I haven't managed to find many negative impacts of such a pursuit.

