Wednesday, May 10, 2023

Company Thoughts: Part One

    Here is my essential company thesis:

1. There are fewer than 500 people in the world seriously working on AI alignment
2. This is a serious problem
3. We need to fix it

    Now let's pretend you are a financial professional who lacks a detailed machine learning background. Well, you could drop your career capital, pursue a machine learning PhD, work at OpenAI or Anthropic afterwards, and then, after a few years there (the year is now 2031), get some people together to start an AI safety company. Or you could save eight years and just start one now with your current skill set. Given the competitiveness and time requirement of the first option, I don't see any particular value in it. In the second option, I see actual impact potential. There would also be a lot of personal value here. As an effective altruist, I don't see a large difference between taking six months off to start an AI risk management company and taking six months off to volunteer in Africa. If AI alignment is as important as I think it is, there's really no reason not to do it. So, what to do?
    
    Connecting companies to AI safety experts is probably the easiest place to start. This could incentivize people to join AI safety and alignment as a career, and maybe also curb some short-term risks of misaligned narrow AI. I am going to use alignment and safety somewhat interchangeably here, as I envision these experts having a day job focused on safety/risk management and a night job (unrelated to pay) focused on greater alignment issues. Let's expand. If people see that they can have a fulfilling career in AI alignment and actually feed their families and pay their bills, they are more likely to enter the industry. More people in the industry will lead to more beneficial alignment research and more people with the skill set required to navigate the complexities of AGI. Why aren't people entering the industry? First of all, there are basically no jobs (check the OpenAI and Anthropic websites and you'll see maybe one safety job out of a hundred). If those two labs have only two openings in AI safety between them, I doubt there are more than ten open seats for safety roles at AI labs in the entire US. Second of all, changing your life to pursue alignment research will make you zero dollars. I have yet to find anyone working in alignment who is paid an enviable salary.

    There are a couple of non-profit AI alignment research firms. With nonprofits, the traditional wisdom is that people are paid less because they get some sort of emotional validation from doing good work. These people feel compelled to make a sacrifice, and later spend a majority of their time complaining about pay and recruiting for for-profit companies. AI alignment is important, and you get paid zero dollars for doing it. Not only that, but in terms of opportunity cost (tech pays, after all) you are potentially losing hundreds of thousands of dollars a year. The most important research field in human history carries the smallest incentive to enter it. Yes, there are a few (and I really mean fewer than ten) AI alignment jobs, but they are massively competitive (for no reason other than that there are literally fewer than ten jobs). Here is a hypothetical. You are a recent MIT graduate who is an expert in machine learning. You can go work for Facebook building AI capabilities and make $150,000 at the age of twenty-two. Or, if you care about alignment, you could, well, I mean, I guess... you could post on LessWrong and stuff and self-publish research papers? Or try to get a research job at a company like Redwood that needs zero additional people? Creating job opportunities would not totally solve this problem, and AI alignment is never going to get the top talent (those people have too much incentive to at least initially make bank building capabilities). I don't think we necessarily need them, though (every MIT grad I know is impossible to work with anyway). Providing any sort of alternative, even just a basic nine-to-five job that pays $60k, may drastically increase the number of people willing to switch over. Closing this gap ($150k vs. $0) is probably important. I am advocating for a market-driven solution, something desperately needed.

    In this scenario, there is now an industry where machine learning engineers can work a nine-to-five job focused on AI safety. They build their skill set during the day and spend their off hours doing what they would otherwise be doing (posting on LessWrong and self-publishing research papers). They now have a network of other AI alignment researchers they work closely with. Obviously, people could work on capabilities at work and do alignment in their free time; I would love to move to a world where this is not required. This arrangement is clearly good for the AI safety people, and potentially for the field of alignment as a whole. What is in it for the companies?

    A lot of companies would find it tremendously valuable to have someone explain AI to them. They have no idea what is going on. Not just in the sense that they don't know how neural nets work (spoiler: no one does), but they actually have no idea how most machine learning is done, and they are baffled by large language models. They are worried about putting customers at risk, but they also don't want to get left in the dust by competitors who are using AI. They are banning AI tools while putting the words "AI" in marketing material. Having someone come in and explain the risks involved and how to make these trade-offs would be massively beneficial. Most companies in America right now need that sort of consultant, and they need it cheap; they don't have the funds to go to McKinsey and pay absurd fees. We could provide that. I really do think this sort of industry will be massively in demand going forward. Financial firms without risk management departments are worth less. Companies with bad governance trade at steep discounts. AI is massively beneficial but can lead to terrible outcomes for a company. You should be able to fill in the gaps.
