Friday, May 26, 2023

Utopia Now

    There is an argument for increasing the rate of AI progress. Maybe the probabilities of other existential risks are too high, and we simply cannot wait around for another hundred years. If nuclear war were destined to happen within the next ten years, I am certain that we would be pushing as fast as possible towards AGI. In some sense, your drive to be reckless is highly correlated with your pessimism about where things are going. If you think humanity is on a great linear trajectory towards utopia, there is no use in throwing in random variables that can mess things up. If AGI has a 10% chance of killing us, and you are fairly certain that in two hundred years the human race will be flourishing anyway, it is probably not worth developing AGI. If you are pessimistic about humanity's prospects, maybe we take the 10% risk.
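    To make this trade-off concrete, here is a toy back-of-the-envelope sketch. The 10% extinction figure is the one above; the conditional success rate and the optimist/pessimist baselines are numbers I am inventing purely for illustration.

# A toy model of the gamble above (Python). Only the 10% AGI extinction
# risk comes from the text; every other number here is an assumption
# made up purely for illustration.

P_AGI_DOOM = 0.10          # chance AGI kills us (from the paragraph above)
P_UTOPIA_GIVEN_AGI = 0.95  # assumed: surviving AGI very likely solves the other x-risks

def chance_of_flourishing(p_baseline: float, build_agi: bool) -> float:
    """Probability of a good long-run outcome under this crude model.

    Without AGI we ride the baseline trajectory; with AGI we must first
    survive the 10% gamble and then (by assumption) almost certainly flourish.
    """
    if not build_agi:
        return p_baseline
    return (1 - P_AGI_DOOM) * P_UTOPIA_GIVEN_AGI

for label, p_baseline in [("optimist", 0.90), ("pessimist", 0.30)]:
    wait = chance_of_flourishing(p_baseline, build_agi=False)
    rush = chance_of_flourishing(p_baseline, build_agi=True)
    print(f"{label}: wait = {wait:.2f}, build AGI now = {rush:.2f}")

# optimist: wait = 0.90, build AGI now = 0.85  -> don't take the gamble
# pessimist: wait = 0.30, build AGI now = 0.85 -> the gamble looks tempting

    Under those made-up numbers, the gamble only makes sense if you think the default trajectory is bleak, which is exactly the point.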

    The world is full of authoritarian governments doing terrible things. Two of the three military superpowers (Russia and China) have horrible human rights track records and a strong drive towards increasing power and influence. Russia recently invaded a sovereign country, and China is doing very, very bad things (oppression of the Uyghurs, the Hong Kong takeover, general police-state tendencies). The West is constantly on the brink of nuclear war with these countries, which would result in billions of deaths. Engineered pandemics become both more likely and more dangerous over time: the barriers to creating such viruses are being knocked down, and the world is becoming more and more interconnected. What are our odds? If our odds of dying off soon are high, or if it will take us a long, long time to reach a place where most humans are free and thriving, maybe we make the trade. Maybe we decide that we understand the risks, and push forward. Maybe we demand utopia, now.

    Well, there is another problem with AI: suffering risk. This is not often discussed, but there is a very real possibility that the development of transformative AI leaves the world in a much, much worse place than before (e.g., an ASI decides it wants to torture a bunch of physical people, or to simulate virtual hell for a bunch of digital minds for research purposes). Another factor in your AI hesitancy should be your estimated probability of a perpetual dystopia. This is where I differ from other people. I believe that the risk of things going really, really wrong as a result of AI (worse than AI simply killing everyone) is massively understated. We should hold off on AGI as long as possible, until we have a better understanding of the likelihood of this risk.

Monday, May 22, 2023

The Future of Freedom

    The dawn of AGI is near. What this means for the world is uncertain, but if you follow Nick Bostrom's logic it seems clear that ASI will soon follow. That has a clearer result: the human race will no longer be the supreme being on the planet. We talk a lot about utopia when we discuss ASI. We discuss the ways in which it could cure disease, expand lifespans (potentially indefinitely), and colonize the galaxy. We also discuss value lock-in and the possibility of authoritarian dystopias. In every case, we see some version of either utopia or dystopia, all with one thing in common: a single entity making the decisions. Similar to a world government, our eventual ASI will likely control our lives and the bounds within which we live. I rarely see discussion of a libertarian utopia, where each individual receives private property and is allowed to do whatever they want so long as they are not harming others. I am not quite sure how this would work in a post-scarcity society. We are in the age of transformative AI, and I am very worried about human freedom. The right to make the wrong decisions is important, as it is often the only way to discern the right ones.

    Will ASI adhere to a bill of rights? It seems that this list of unalienable rights was crucial in the formation of the United States. Freedom often comes at a price. The Second Amendment absolutely equates to more individual freedom, at the expense of many needless deaths. Will the ASI respect these types of rights (freedom of speech, the right to bear arms), even if in aggregate they could hurt society (hate speech, mass shootings)? In the event of an engineered pandemic, will the ASI force vaccinations at gunpoint in order to ensure the survival of the human race? I am very, very worried that the coming age of AI will naturally lead to autocracy. Time and time again we have seen history repeat itself, with "the ends justify the means" and "for the greater collective good" leading right into fascism. I worry that technocratic and socialism-inclined minds may win out over libertarian ones. Personal political beliefs aside, I think the former will inherently place less value on freedom and will be more likely, through good intentions, to force a bad outcome.

Thursday, May 11, 2023

Mind Crime

    Humans are really, really bad at planning in advance to not be monsters. We have a pretty horrible ethical track record. Genocide and slavery seem to come pretty easily to most of us, given the right time period and circumstances. If there are internalized morals, we sure took our sweet time finding them. Generally, I don't think humans are in a position to make rational, ethical choices involving other conscious beings. Regardless of your take on factory farming, it is pretty clear we didn't spend decades deliberating the ethical issues in advance. Have you fully thought through the moral implications of factory farming, or are you just along for the ride? I am very worried that unaligned superintelligence will kill all of humanity, or enslave us, or torture us, or become authoritarian and lock in terrible values for eternity. Still, I am also worried about mind crime. 

    Look at our track record with slavery. Read about the Rwandan genocide, less than thirty years ago. Look at the various authoritarian regimes and staggering human rights abuses across the planet. But don't worry, we will somehow care a lot, in advance, about the moral rights of artificial intelligences. From the industry that brought you social media (and don't worry, they totally thought through and predicted every negative ramification of that technology, and they have your best interests at heart), here is the new god! And don't worry, we will treat it well, and we totally won't be enslaving a morally significant being.

    If we gain the ability to generate millions of digital minds, we gain the capacity for horrors worse than any genocide or slavery in humanity's past. We might not even do it on purpose, but simply through sheer ignorance. It took a long time for people to treat other humans as morally significant. And by a long time I mean basically until fifty years ago in the U.S., and in many other countries this is still not the case. It isn't crazy to imagine that we will treat "computers" much worse. Mind crime will have to be legislated against early. If you knew slavery was about to become legal again in twenty years in the U.S., what policies would you put in place? How would you get ahead of the problem and ensure that morally significant beings aren't put in virtual hell? These are the questions we should all be asking.

The World Will End Because Math is Hard

    Every machine learning book I read leaves me baffled. How on earth can anyone understand this stuff? Not at a surface level, but how can anyone really master statistics/probability/calculus/linear algebra/computer science/algorithms to a degree where they actually understand what all the words in these 1,000+ page books mean? Even a summary such as "The Hundred-Page Machine Learning Book" leaves me with more questions than answers. Now to learn all of that, and then layer on the required decision theory/economics/ethics/philosophy to a level where you can have a positive impact on AI alignment, seems pretty unreasonable. A lot of people pick a side, either specializing in cutting-edge deep learning frameworks or armchair philosophizing. The AI capability people tend to underestimate the philosophical complexity of the problem, and the AI ethics people tend to completely misunderstand how current machine learning works. Maybe there are a few who can master all of the above subjects, but it is more likely that a combination of people with deep expertise in separate areas will provide better solutions. It is pretty clear that I will not be one of the individuals who invents a new, more efficient learning algorithm or discovers a niche mathematical error in a powerful AI product. Focusing on AI risk management, a massively underdeveloped industry, is probably the way forward for me. The math is simply too hard, maybe for everyone. But someone is writing the books. If we can get a few people who understand the complexity of the issue into the right positions, maybe we can bring about some good outcomes.
    
    One of the benefits of focusing on risk management is that you can make money and not feel guilty about it. "Oh no, people working on AI safety are making too much money." Have you heard that before? I for sure haven't, and I would like to. To someone who believes in markets, that statement rings similar to "oh no, people are going to be massively incentivized to have a career in AI safety." What a problem that would be. Competition isn't even a bad thing; an arms race towards safer products would be quite interesting. "Oh no, China is catching up and making safer AI systems than the US." I would pay to hear that. Obviously, sometimes alignment is really capabilities in disguise. I have touched on this previously, but deciding what exactly makes systems safer and what makes systems more powerful is pretty hard.

    I briefly pitched Robert Miles a few weeks ago on some of my ideas, mainly an AI risk management industry that would provide more profitable employment opportunities for alignment researchers. His response:

"I guess one problem is the biggest risk involves the end of humanity, and with it the end of the courts and any need to pay damages etc. So it only incentivizes things which also help with shorter term and smaller risks. But that's also good. I don't have much of a take, to be honest."

    I am a newbie to this field and Robert is the OG (someone who understands the entire stack). His take is entirely fair, as companies will only be incentivized to curb the short-term risks that directly affect them. The elephant in the room is obviously the end of humanity, or worse. People who don't see this as plausible simply need to read "The Doomsday Machine" by Daniel Ellsberg. All this talk of nanotechnology makes us miss the obvious problem: we are a hair's breadth away from worldwide thermonuclear war at every moment. I wonder how things will change when a powerful, unaligned AI starts increasing its hold on such a world. Longtermists drastically undervalue the terror of events that kill 99% of people instead of 100%. With regard to long-term AI alignment, I think the number of researchers will matter, and I hope people in the AI safety industry would be incentivized to study long-term alignment outside of work hours. Maybe I'm wrong and there's not a strong impact, but I haven't managed to find many negative impacts of such a pursuit.

Wednesday, May 10, 2023

Company Thoughts: Part One

    Here is my essential company thesis:

1. There are fewer than 500 people in the world seriously working on AI alignment
2. This is a serious problem
3. We need to fix it

    Now let's pretend you are a financial professional who lacks a detailed machine learning background. You could drop your career capital, pursue a machine learning PhD, then work at OpenAI or Anthropic, and after a few years there (the year is now 2031) get some people together to start an AI safety company. Or you could save eight years and start one now with your current skill set. Given the competitiveness and time requirement of the first option, I don't see any particular value in it. In the second option, I see actual impact potential. There would also be a lot of personal value here. As an effective altruist, I don't see a large difference between taking six months off to start an AI risk management company and taking six months off to volunteer in Africa. If AI alignment is as important as I think it is, there's really no reason not to do it. So, what to do?
    
    Connecting companies to AI safety experts is probably the easiest option. This could incentivize people to join AI safety and alignment as a career, and also maybe curb some short-term risks from misaligned narrow AI. I am going to use alignment and safety a bit interchangeably here, as I envision these experts having a day job focused on safety/risk management and a night job (unrelated to pay) focused on the greater alignment issues. Let's expand. If people see that they can have a fulfilling career in AI alignment and actually feed their families and pay their bills, they are more likely to enter the industry. More people in the industry will lead to more beneficial alignment research and more people with the required skill set to navigate the complexities of AGI. Why aren't people entering the industry? First of all, there are basically no jobs (check the OpenAI and Anthropic websites and you'll see maybe one safety job out of a hundred openings). If those two labs only have two job openings for AI safety, I doubt there are more than ten open seats for safety roles at AI labs in the entire US. Second of all, devoting your time to alignment research will make you zero dollars. I have yet to find anyone working in alignment who is paid an enviable salary.

    There are a couple of non-profit AI alignment research firms. With nonprofits, the conventional wisdom is that people are paid less because they get some sort of emotional validation from doing good work. These people feel compelled to make a sacrifice, and later spend a majority of their time complaining about pay and being recruited away by for-profit companies. AI alignment is important, and you get paid zero dollars for doing it. Not only that, but in terms of opportunity cost (tech pays, after all) you are potentially losing hundreds of thousands of dollars a year. The most important research field in human history, and the smallest incentive to enter it. Yes, there are a few (and I really mean fewer than ten) AI alignment jobs, but they are massively competitive (for no reason other than that there are literally fewer than ten of them). Here is a hypothetical. You are a recent MIT graduate who is an expert in machine learning. You can go work for Facebook, build AI capabilities, and make $150,000 at the age of twenty-two. Or, if you care about alignment, you could, well, I mean, I guess... you could post on LessWrong and stuff and self-publish research papers? Or try to get a research job at a company like Redwood that needs zero more people? Creating job opportunities would not totally solve this problem, and AI alignment is never going to get the top talent (those people have too much incentive to at least initially make bank building capabilities). I don't think we necessarily need them, though (every MIT grad I know is impossible to work with anyway). Providing any sort of alternative, even just a basic nine-to-five job that pays $60k, may drastically increase the number of people willing to switch over. Closing this gap ($150k vs. $0) is probably important. I am advocating for a market-driven solution, something desperately needed.

    In this scenario, there is now an industry where machine learning engineers can work a nine-to-five job focused on AI safety. They build their skill set during this time and spend their outside-of-work hours doing what they would otherwise be doing (posting on LessWrong and self-publishing research papers). They now have a network of other AI alignment researchers they work closely with. Obviously, people could work on capabilities at work and do alignment in their free time; I would love to move to a world where this is not required. This is clearly good for the AI safety people, and potentially for the field of alignment as a whole. What is in it for the companies?

    A lot of companies would find it tremendously valuable to have someone explain AI to them. They have no idea what is going on. Not just in the sense that they don't know how neural nets work (spoiler: no one does); they actually have no idea how most machine learning is done, and they are baffled by large language models. They are worried about putting customers at risk, but they also don't want to get left in the dust by competitors who are using AI. They are banning AI tools while putting the word "AI" in marketing material. Having someone come in and explain the risks involved and how to make these trade-offs would be massively beneficial. Most companies in America right now need that sort of consultant. They need it cheap, and they don't have the funds to go to McKinsey and pay absurd fees. We could provide that. I really do think this sort of industry will be massively in demand going forward. Financial firms without risk management departments are worth less. Companies with bad governance trade at steep discounts. AI is massively beneficial but can lead to terrible outcomes for a company. You should be able to fill in the gaps.

Mind Crime: Part 10

    Standing atop the grave of humanity, smugly looking down, and saying "I told you so," is just as worthless as having done noth...