Thursday, April 20, 2023

The World of Finance vs the World of AI

    Let's take a quick look at the financial landscape. There are many mutual funds, hedge funds, commercial banks, and investment banks. The industry is awash with regulation, except for the world of hedge funds, which is held to much less stringent standards. The profit incentive in the financial sector is enormous. Not only can firms make money, but employees and managers can pull salaries in the millions, and the head of a trading firm can quickly become a billionaire. How they do this is obscure, but oftentimes it is through better technology, and even more often (in my opinion) it is through cheating and unethical behavior. Market manipulation, insider trading, and straight-up stealing are hard to prove and even harder to prosecute. There are plenty of real-world examples of pathetic, unethical slime (such as Steve Cohen) who massively cheat the system and make billions. Many financial firms profit by cheating in smaller ways, such as stealing from customers (Wells Fargo) or charging insane fees without providing any tangible value (pretty much every hedge fund and most actively managed mutual funds). If institutions were less greedy and understood survivorship bias, most of these quacks would go out of business. Why mention the financial sector? Because I believe it is a good window into the future of AI companies. Greed will drive a lot of decisions, safety will take a backseat, and regulations will be helpful but drastically flawed.

    Which financial institution do you, as an individual, fully trust? Goldman Sachs? Morgan Stanley? Do any of these institutions have your best interest at heart, and would you trust them with your children's lives? Of course not. Unfortunately, you should apply the same line of thinking to Google, Microsoft, and even OpenAI. No matter what sort of marketing pitch a company gives, a company is a company. Shareholders demand growth, and the principal-agent problem reigns: management acts in its own self-interest, not in the interest of shareholders or customers. We worry a lot about agency problems within AI systems, but we should also worry about agency problems at AI labs. I don't care whether your company is for-profit or not; developing AGI would make you one of the most important human beings of all time, give you a lasting legacy, and make you absurdly powerful. Maybe you aren't automatically the richest individual in the world (because of some dumb cap at 10,000x profit), but you are instantly one of the most powerful individuals in history. Whatever Sam Altman says, he is perfectly incentivized to push towards AGI. As is every CEO of every future AI lab, regardless of what they say.

    As in finance, regulation will help make the world of AI fairer and more transparent. However, the outcome will be shoddy, as in any industry driven by such a massive profit motive. Some insanely intelligent, generally trustworthy, Nobel Prize-winning financiers started a hedge fund called Long-Term Capital Management. Despite their brilliance and rapid journey to wealth and success, the firm eventually collapsed in a ball of flames and almost caused a global financial meltdown. I view every group of intelligent individuals (OpenAI included) in the same way. Maybe they are really smart, and maybe they are not trying to cause harm, but we have seen history repeat itself too often. Instead of a financial collapse, power-hungry AI companies could cause mass suffering and death. They might have the right intentions, and they might all be Nobel Prize winners. At the end of the day, none of that really matters.

    Is there a point to this comparison? Something we can learn? I think so. Intelligent regulation can lessen the probability of financial collapses, and I believe the best forms of AI regulation can prevent many of the low-hanging-fruit problems that will come with the development of AGI. Every finance company has a compliance department, and AI companies will likely need similar departments to function and keep up with regulation (probably called "AI safety" or something). But something else emerged after the financial crisis: internal risk departments at investment firms and banks. These risk departments make sure that firms are not taking on too much risk and are adequately diversified and liquid. Together, compliance and risk departments ensure that firms stay afloat and protect customers, and they also protect society from financial contagion. Establishing risk departments within AI labs is very necessary, especially if those departments collaborate and openly share the ways in which they have avoided catastrophic problems. If we want to plan well for AI regulation, we shouldn't look to the technology industry, where the government has largely failed to do anything useful. We should pretend the year is 1900 and plan out the best incentive structure and regulations for the finance world for the next two hundred years. Yes, a recession or two might happen, maybe even a depression. But with the right incentives, maybe we can avoid something worse.

