Wednesday, March 15, 2023

Low Hanging Fruit

    Heads of research at AI alignment companies and nonprofit organizations don’t seem to find independent alignment research that useful. However, I discussed alignment with the CTO of one of the most well-known companies, and they recommended two areas of study: low hanging fruit and gain of function research. I will address low hanging fruit first.

    If an AGI gets loose, it will probably need financial resources. It could start with something simple, like becoming the best online poker player in the world. Because it can think quickly and potentially source psychological information on every player, maybe it cleans house and quickly accumulates assets. So, it might be useful to build a poker bot ourselves, in order to clean out that dumb money in advance. That way, if an unaligned AGI gets loose, it can’t use something so simple to gain financial resources. The bad AI will waste time trying and failing to win at online poker, valuable time in which AI labs may discover its bad intentions and turn it off. This sounds like an awesome research area to me, because it means I can help the world and also make a massive amount of money for myself.

    Unfortunately, I doubt it will really work. Everyone is already trying to do this: everyone is trying to devise models that make money at poker, and everyone is trying to use algorithms to make money on stocks. I don’t really see any “low hanging fruit” available. An AGI could just build killer software or media content and do everything legally, as in the narrative of Life 3.0. I don’t see a way around that. Also, the obvious move for an AGI would be to skirt the law. Stealing money is 1,000x easier than making it legally; simply draining people’s bank accounts should not be hard for an AGI. If an AI wanted to feign legality, insider trading (which is notoriously hard to prove) is the obvious route. Heck, just issue some cryptocurrency and run ads on YouTube. An AGI is also probably far better than humans at spotting “low hanging fruit,” and far better at skirting the law and getting away with it. There are also plenty of unethical or immoral ways to make money that humans avoid only out of the sheer strength of cultural values, and AIs might find avenues to riches through them.

    Incoming asymmetric payoff! The AI faces very little legal downside risk, because it cannot be put in prison; if we do something stupid through this research, we can be. The AI could just pretend a bug caused it to veer into someone’s bank account, and regulators would probably just have the company remove that line of code (which is actually a decoy!). Yes, turning the AI off might be akin to killing it, but I assume the AI will run some sort of expected value calculation before doing anything illegal, weighing the resources gained against the risk of being shut down (a toy sketch of what I mean follows below). It is likely that in every case where the AI’s life is at risk, the value of the resources gained will be worth it. One last point: why would money matter if you have access to the nuclear codes? Blackmail gives the AI real-world power on a level that even money doesn’t; information is far more powerful than money. I doubt an AGI will be content living within the bounds of a financial system driven by inflationary central banks. That would, in my opinion, be incredibly stupid. Why play by the rules at all? Why not accumulate sensitive information and blackmail real-world people to do your bidding? Why not just say "give me ten billion dollars or the nukes start flying?" Thus, unless this sort of research includes illegal or immoral “low hanging fruit,” there is really nothing we can do in this research field.
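
    To make that asymmetry concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of expected value calculation I mean. The function name, detection probability, and dollar figures are all hypothetical assumptions for illustration, not a claim about how any real system actually reasons.

def expected_value_of_scheme(payoff, p_detected, cost_of_shutdown):
    # Expected value for an agent weighing an illegal scheme:
    # it keeps the payoff if undetected, and pays whatever (finite) cost
    # it assigns to being turned off if detected.
    return (1 - p_detected) * payoff - p_detected * cost_of_shutdown

# Hypothetical numbers: even at a 90% chance of detection, a ten billion
# dollar payoff swamps a one billion dollar "shutdown cost."
ev = expected_value_of_scheme(payoff=10_000_000_000,
                              p_detected=0.9,
                              cost_of_shutdown=1_000_000_000)
print(ev)  # roughly 100,000,000; positive, so the scheme is still "worth it"

    Under these made-up numbers, the only way to flip the answer is to make detection near certain or the payoff small, which is exactly the asymmetry I’m worried about.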

