Sunday, April 23, 2023

Music, Movies, and the New Wild West

     In a previous post, "How Important Are Humans," I mentioned an argument I had with a close friend about AI-generated art. My conclusion was that if AI ends up writing better books, creating better art, and making better movies, I will have no problem switching over to AI creations completely. Why would I read a 7/10 book when I can read a 10/10 book? At some point, the quality of the content is really all that matters. Well, within two weeks this has pretty much come to fruition. The quality of AI content has exploded, especially within the music landscape. The song "Heart on My Sleeve," which uses AI-cloned vocals of Drake and The Weeknd, made waves in the music world because it is completely AI generated yet unrecognizable as such. All week, I have been listening to AI music pretty much exclusively. I also listened to AI-generated stand-up comedy and watched some crazy-accurate deepfake videos. There are some cool applications of all of this.

    In the near future, the voices of singers, faces of actors, and writing style of writers will be replicable for free. Before I go for a run, I will be able to create a new Kendrick Lamar album (his voice, his cadence, his songwriting ability) within seconds. During my run, if I don't think his voice fits the song, I can switch the artist to Nas and the transition will be seamless. If I am watching a movie and don't like a particular actor, I will be able to quickly toggle the movie so that Danny DeVito is now playing that role. What will this all mean? Well, obviously we will probably have a lot of pressing legal issues to figure out. I am guessing this will regress a bit in spirit back to the LimeWire days, when everyone was supposed to pay for music and thus everyone illegally downloaded it for free. There will be a massive black market for AI-generated songs and movies that steal the image and likeness of people without their consent. The most popular singers and actors will become more popular as they are featured heavily in this content, while those entering the industry will have essentially zero value. In a world where the most loved actor in the world can play a role in every single major film of the year, we don't need more actors. With no scheduling conflicts and no actual work required, I would guess that the traditional acting and music industries are essentially going to die. Live performances will still have a niche, but there will also be AI-created characters and singers that will start taking some of the spotlight. Characters that are the perfect representation of an idea or personality, without any of the baggage or time requirements that plague real-world humans.

    Think back to the Wild West for a second. You could shoot someone in a bar, ride three towns over, and as long as no one saw you commit the actual murder it was essentially impossible to prove. A serial killer in the 1800s was essentially unstoppable, as there was no DNA evidence, and again, without any direct witnesses there would be no conviction. Even then, if there was a witness, how the heck would any authority reliably track you down? If you want to imagine this world I would recommend reading "The Devil in the White City." We may be backtracking to this stage of life. Video and audio evidence in a world of indistinguishable deepfakes is basically worthless. I know of no way of determining whether a top-level deepfake is real or not, and given that a video is just a sequence of pixels there will probably be no way to actually distinguish true reality. As a result, eyewitness testimony, as flawed as it is, will probably regress to being the primary form of evidence. If we can reliably trick cameras in indistinguishable ways, a surveillance state driven by video and audio monitoring becomes less useful. Unfortunately, there are likely biometric equivalents that an authoritarian state will think up (you are now tagged with an embedded GPS since we can't trust our cameras).

    Overall, I don't think that these new developments make society any safer or more stable. There are now incredibly convincing disinformation tools, and I really don't know how I will trust anything I read or see going forward. Still, listening to young Taylor Swift sing her new album was cool. And some of the AI content is legitimately hilarious. If the world burns, at least we will all be laughing. Nothing makes an apocalypse more palatable than good content.

Friday, April 21, 2023

Time to Start a Company

     Alright, well I thought about it and autonomous agents are insane. It is pretty obvious that within a decade pretty much every single company in the United States will be using AI agents for various tasks. As I mentioned before, finance companies have risk departments that prevent individual firm collapses and industry-wide financial contagion. The fact that current companies don't have AI risk management departments is not surprising, but soon it will seem ludicrous. Within a decade, every company in the US will be using multiple AI agents. They will have to, lest they lose out to competitors who are employing this transformative technology. Again, the incentives are simply much too high. The AI market will be saturated with competitors trying to make the next ChatGPT, but none will be focused on the most important part of it all: risk. Providing risk solutions rather than capability solutions is an untapped area of the market. If you run an autonomous agent, horrible things could happen. Customer data could be leaked, the AI could break various laws, or you could accidentally make a lot of paperclips. Companies are terrified of risk, terrified that all of their hard work and credibility will be wiped away. And it will happen, it will happen to a few companies and it will be well publicized. But companies won't stop, because they can't. They are driven to survive and make profits, and they will underestimate the risk (as does every investment firm, and they have risk departments!).

     Enter AIS, a company that delivers risk mitigation tools and access to AI experts. Customized software platforms that estimate risk and pose solutions, or some other product I haven't thought of. Probably the easiest solution is to outsource AI researchers as consultants who look over a company's plans and provide feedback. I would not target the business of the massive players who already have AI safety groups, are rapidly building capabilities, and are aligned with gargantuan profit-driven tech giants (OpenAI, DeepMind, Anthropic). Rather, AIS would service the 99.9% of other companies in the world that are going to dive in, safety or not.

    There is a moral hazard here. You don't want to "rubber stamp" companies and give them a false sense of security. You don't want to convince companies that otherwise would have sat out on AI to participate, because they will gladly place the blame on you and justify their uninformed decisions with your "blessing" as a backing. So this will not be an auditing firm, verifying any sort of company legal compliance or justifying behavior. Those should all be internal. Rather, it will be providing systems and knowledge to build safer and more collaborative AI. Again, these are the small fries. I am less concerned about a mid-tier publishing company building the paperclip machine, and I am convinced that they are less likely to do so if they have a risk management system.

    The most remarkable aspect of this idea is that even if someone else adopts it and creates a superior risk solution, it is a win-win scenario. Increased competition fosters innovation, and being the first mover in this space could ignite the creation of an entire industry. An industry that I am convinced will probably make things better, or at least not make things worse. If I am instantly replaced by a more capable CEO or another company develops awesome alignment solutions, all the better for humanity. I'll gladly return to an easy lifestyle with no skin in humanity's game.

    Another remark. The Long-Term Future Fund (the largest fund for initiatives that combat x-risk) is only $12 million. That is ridiculously small. In the world of finance, that is a rounding error to $0. There are only a few hundred AI alignment researchers, and they are definitely not paid well. At this point, AI alignment is similar to other non-profit work: you are expected to make a massive financial sacrifice. Working on capabilities research will feed your family, working on AI alignment will not. As a result, there is really no incentive to go into the most important research field of all time. This needs to change. I think creating AIS will kick off a market-driven solution to this problem. People that become experts in interpretability and corrigibility and come up with novel alignment solutions will have massive value. I would pay them handsomely to work on risk mitigation for various companies, and as a result we will incentivize more individuals to enter the space. If they work forty hours a week and make a decent salary, they can spend the entirety of their time outside work contributing to the long-term value alignment cause. I don't see many downsides here, outside of the massive personal and career risk I would accumulate as a result. Well, seems at least interesting though. Would be a pretty noble way to end up on the streets. "Hey man can you spare a dollar, I blew all my savings trying to align transformative AI with human values." Would at least make for a cool story. Guess it's time to start a company.

Autonomous Agents

     I used AutoGPT for the first time today, an early entry into the world of autonomous AI agents that can make plans and solve problems. From my understanding, AutoGPT has an iterative loop that permits the AI to learn and adapt as it works toward an objective. It has short- and long-term memory and is able to break down a prompt into multiple steps and then work through each of those steps. Again, I am not terrified of current AI technology. I am terrified that current AI technology will improve, which it will. For AutoGPT, you simply put in the goal of the AI agent, such as "make me a bunch of money," and then a few sub-goals, such as "search the web to find good companies to start" and "keep track of all of your research and sources and store them in a folder." It doesn't work well at the moment, but it has only been out for a couple of weeks. The promise of autonomous agents is clear. Many white collar jobs can be replaced, and individuals could become much more productive. Research and administrative work will become much easier, and there is a massive incentive to have a smarter agent than your competition. Every advance in AI increases my conviction that we should lean heavily on AI agents to do alignment research. This year really has been quite the revolution.
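The plan-then-iterate loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in for illustration, not AutoGPT's actual implementation: `fake_llm` simulates a language-model call, and the semicolon-delimited plan format is invented for the example.

```python
# Toy sketch of an autonomous-agent loop in the spirit of AutoGPT.
# In practice, each prompt would go to a real model API; here fake_llm
# returns canned responses so the loop's structure is visible.

def fake_llm(prompt: str) -> str:
    # Toy "planner": turn a goal into a fixed list of sub-steps.
    if prompt.startswith("PLAN:"):
        return "research the market; draft a business plan; summarize sources"
    # Toy "executor": pretend the step was carried out.
    return f"done: {prompt}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory = []                                   # long-term memory store
    plan = fake_llm(f"PLAN: {goal}").split("; ")  # break the goal into steps
    for step in plan[:max_steps]:                 # iterate through each step
        result = fake_llm(step)                   # execute (here: simulated)
        memory.append(result)                     # record what happened
    return memory

log = run_agent("make me a bunch of money")
```

The real systems add the parts that make this dangerous: web access, file access, and a feedback loop that revises the plan based on each result.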

    The speed at which these developments keep coming is paralyzing. I am further convinced that alignment is important, as now every person on Earth will have access to prompting technology that can actually do destructive things in the real world. Anyone can create a website or a business without any technical knowledge, and everyone is vulnerable to whatever sort of chaos this causes. AutoGPT requires a user to prompt "yes" or "no" before it moves forward with real-world interaction, such as scraping a bunch of websites or moving files around. Future agents will not have this, or if they do I really do not see how it will be useful. I just kept clicking yes, with no clue if AutoGPT would follow the robots.txt policies of a website (which determine whether you are even allowed to scrape it). I've built my own web scrapers, and I clearly didn't have the wisdom to walk away from the curious prompt "hey AI agent, increase my net worth" even though I had no clue what the AI would end up doing. How are non-technical people supposed to weigh any of these trade-offs? Most people probably won't even know that there are laws or policies that they could be breaking, and they are probably liable for whatever their autonomous agent does. The cost of running these agents is already super low (today cost me 8 cents), and as competition heats up it will be virtually free. Saying that this is a legal nightmare is an understatement.
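For what it's worth, the robots.txt check an agent ought to perform before scraping is a few lines of standard-library Python. This is a minimal sketch; the rules and URLs below are made up for illustration.

```python
# Check whether a crawler is allowed to fetch a URL under a site's
# robots.txt policy, using Python's built-in robotparser.
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Normally you'd call rp.set_url("https://example.com/robots.txt") and
# rp.read(); here we parse an example policy directly.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

allowed = rp.can_fetch("MyAgent", "https://example.com/public/page")
blocked = rp.can_fetch("MyAgent", "https://example.com/private/data")
```

Nothing forces an autonomous agent to run this check, which is exactly the problem: compliance is one optional function call.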

    Users will clearly have no idea what their agent is doing, and they probably won't care. The chaos that these point-and-click machines will cause is unknown, but it is clear that if they are unaligned they could cause a lot of damage. For example, you prompt "make me a lot of money" and the AI illegally siphons money away from a children's hospital because that is outside of its objective function. What I want to emphasize here, though, is that even aligned AI can be really, really bad. Because a scammer can say "create a Facebook profile pretending to be my target's uncle, generate a bunch of realistic photos of the uncle, build up a bunch of friends, and then reach out to the target claiming to be the uncle. Say that you are in trouble and need money. Leave realistic voice memos. Do whatever else you think could be convincing." The AI agent will read that, develop a plan, and then break that plan down into discrete steps. Then it will iterate through each one of the steps and execute the plan. Fraud and deceit become easy. And cheap. Simpler example: a terrorist uses a perfectly aligned agent and says "cripple the US financial system." Even if this agent totally understands the terrorist's intentions, the outcome will be very bad. Even just pursuing the first few steps of this goal could cause a lot of damage. It is probably better if all of these autonomous agents in the future are perfectly aligned, but we shouldn't necessarily celebrate that as a victory. Agents can be aligned to the wrong values. The genie problem mentioned in a previous post rings even truer now. May the person with the most powerful genie win.

Thursday, April 20, 2023

The World of Finance vs the World of AI

     Let's take a look at the financial landscape real quickly. There are many mutual funds, hedge funds, commercial banks, and investment banks. The industry is awash with regulation, except for the world of hedge funds, which is held to much less stringent standards. The profit incentive in the financial sector is huge. Not only can firms make money, but employees and managers can pull salaries in the millions, and the head of a trading firm can quickly become a billionaire. The way in which they do this is obscure, but oftentimes it is through better technology, and even more often (in my opinion) it is because of cheating and unethical behavior. Market manipulation, insider trading, and straight up stealing are hard to prove and even harder to prosecute. There are plenty of real-world examples of pathetic, unethical slime (such as Steve Cohen) who massively cheat the system and make billions. Oftentimes, financial firms profit by cheating in smaller ways, such as stealing from customers (Wells Fargo) or charging insane fees without providing any tangible value (pretty much every hedge fund and most actively managed mutual funds). If institutions were less greedy and understood survivorship bias, most of these quacks would go out of business. Why mention the financial sector? Because I believe it is a good window into the future of AI companies. Greed will drive a lot of decisions, safety will take a backseat, and regulations will be helpful but drastically flawed.

    Which financial institution do you, as an individual, fully trust? Goldman Sachs? Morgan Stanley? Do any of these institutions have your best interest at heart, and would you trust them with your children's lives? Of course not. Unfortunately, you should apply the same line of thinking when you look at Google, Microsoft, and even OpenAI. No matter what sort of marketing pitch a company gives, a company is a company. Shareholders demand growth, and the principal-agent problem reigns (a company's management acts in its own self-interest, not in the interest of shareholders or customers). We worry a lot about agency problems within AI systems, but we should worry in addition about agency problems at all AI labs. I don't care if your company is for-profit or not, developing AGI would make you one of the most important human beings of all time, give you an indefinite legacy, and make you absurdly powerful. Maybe you aren't automatically the richest individual in the world (because of some dumb cap at 10,000x profit), but you are instantly one of the most powerful individuals of all time. Whatever Sam Altman says, he is perfectly incentivized to push towards AGI. As is every CEO of every future AI lab, regardless of what they say.

    As in finance, regulation will help the world of AI to be fairer and more transparent. However, the outcome will be shoddy, as in any industry driven by such a massive profit motive. Some insanely intelligent, generally trustworthy Nobel Prize winning financiers started a hedge fund called Long Term Capital Management. Despite their brilliance and rapid journey to wealth and success, the company eventually collapsed into a ball of flames and almost caused a global financial meltdown. I view every group of intelligent individuals (OpenAI included) in the same way. Maybe they are really smart, and maybe they are not trying to cause harm, but we have seen history repeat itself too often. Instead of a financial collapse, power hungry AI companies could cause mass suffering and death. They might have the right intentions, and they might all be Nobel Prize winners. At the end of the day, none of that really matters.

    Is there a point to this comparison? Something we can learn? I think so. Intelligent regulations can lessen the probability of financial collapses, and I believe the best form of AI regulations can prevent many low-hanging-fruit problems that will come with the development of AGI. Also, every finance company has a compliance department, and AI companies will likely need similar departments to function and keep up with regulation (probably called "AI safety" or something). But something else evolved after the financial crisis: internal risk departments in investment firms and banks. These risk departments made sure that the firms were not taking on too much risk and were adequately diversified and liquid. The combination of compliance and risk departments at investment firms ensures that the firms themselves stay afloat and protect customers, and it also protects society from financial contagion. Establishing risk departments within AI labs is very necessary, especially if they collaborate and openly share the ways in which they have avoided catastrophic problems. If we want to plan well for AI regulation, we shouldn't look to the technology industry, where the government has largely failed to do anything of use. We should pretend the year is 1900 and we want to plan out the best incentive structure and regulations for the finance world for the next two hundred years. Yes, a recession or two might happen, maybe even a depression. But maybe with the right incentives we can avoid something worse.

Wednesday, April 19, 2023

Solving Alignment Would Be Terrible?

    If everyone in the world was given a genie that granted three wishes, everything would fall apart. Even if there were no "monkey's paw" problems, and every single person's true intention was granted, chaos would be the only outcome. I'd wish for "make me a million dollars, legally." Someone else would wish for "steal ten million from J.P. Morgan and make it untraceable." Another would wish for "push through legislation that would make it illegal to fish." Plenty of wishes would contradict and the war would be won by the people with the most powerful genies. Regardless, society as we know it would collapse. This is why I'm wondering if solving alignment may actually be a horrible thing to do right now. Not the problem of finding the objective moral values of the universe and embedding them into all AI, but rather the problem of making an AI follow along with your arbitrary values (also called "wishes"). In a world of aligned AGI that can replicate, if every person is given a personal AGI, absurdity begins. The same wishes are pursued. Labor costs are now essentially zero, and the only real winners are the people with the most powerful genie. We wouldn't give everyone a nuke, just as we wouldn't want a small group of unelected people to have the only nukes. Given that the capabilities of an AGI will increase with time, I don't see how democratizing AGI leads to anything but madness. I also don't see how leaving AGI in the hands of a small group of people leads to anything but madness. I guess I only see madness.

    If anyone on Earth has access to a digital god, things will not go well. Even if that god is not all-powerful, things will not go well. I don't see a massive distinction between AGI and ASI, because at some level a human brain emulated in a computer is already superintelligent. It can think faster, access the entirety of human knowledge ("the internet"), and probably replicate pretty easily. Obviously I care way less about aligning AGI than I do about aligning ASI, but I need to remind myself that they are not so far off or necessarily different. What does all of this mean in the short term? Let's take interpretability for example. If we knew exactly why a neural net made every decision, would that be a good thing? Would that create massive increases in AI capabilities and trust in AI, and lead to everyone getting a genie even sooner? Maybe not, and maybe having aligned genies is way better than having unaligned genies. But if some unaligned low-level genies start messing up and killing people, maybe we take a big step back as a society. Maybe we outlaw genies, or take a serious look at figuring out our values. If the aligned ride to AGI goes smoothly and then the first deaths occur in an abrupt human genocide, we'll be too late. Whether an ASI ends up being good for humanity or not greatly depends on the values it is following. Even if it is "aligned" to those values perfectly, things will probably go horribly wrong for most people. If you think power corrupts, wait until a small group of individuals determines the values of this Better God. This is why I am pretty hesitant about my idea to massively boost alignment research across the board. Yes, hesitant about an idea I came up with yesterday. Maybe research into corrigibility (figuring out how to turn AI off or change its values) is much more important than all other research. I really have no idea, but it is probably an important conversation to have.

Using Narrow AI to Solve Every Problem

     It is very possible that I do not yet grasp the difficulty of producing novel alignment research. It could very well be the case that true, genuine leaps of knowledge of the general relativity sort are needed, and we simply need to find the right team of Einsteins. Some people seem to think that narrow AI can't help with solving the alignment problem and that you really need something at the AGI level in order to make progress. At least that is my understanding of some of MIRI's conversations, which are completely incomprehensible. If people in the Rationalist/AI Alignment/LessWrong community talked in simple English, the past 20 years of spinning wheels could have been avoided. Anyways, they seem to think this: by the time you have an AGI powerful enough to solve alignment, you have an AGI powerful enough to wreck a whole lot of things (including the human race). Well yeah, maybe, if "solve" is your goal, but even "assist with solving" is met with steep resistance. I can't possibly see how this is the case. Large language models such as GPT-4 are insanely good at providing sources, explaining complex topics, and programming. During one of my discussions with a head of an AI research lab, I was told that one of the main bottlenecks of research is all the administrative work. Well, if hiring a thousand more workers would be beneficial (as they could help format, write summaries, check plagiarism, compile sources, test code, etc.), is it not the case that hiring ten employees that are skilled at using advanced LLMs would be just as beneficial?

    I have been using ChatGPT extensively, and it is clearly one of the greatest technological achievements of the past century. It is insanely useful in every aspect of my work life, and it is very clearly going to replace a lot of white collar jobs. What are alignment researchers doing that ChatGPT can not? Or, what are they doing that could actually not benefit from such an incredible resource? It seems that the coming wave of narrow AI, including the generative AI systems that keep exploding in usefulness, is going to transform nearly every industry. Medicine, finance, technology, journalism, I could go on, will be massively transformed and improved. So many use cases: cancer scans, fiction writing, translations, virtual assistants, even relationship advice and therapy. Why are people so convinced alignment research is the sole holdout? I think it sort of ties back to this strange savior complex. The idea that only a small subset of people truly know this battle between good and evil is happening, and only this small subset is smart and moral enough to take on this inevitably losing battle (so that they can say "I told you so"). It all seems so weird. Obviously we are not going to code first-principles moral values into a machine. Gödel's theorem and the god debate are clear on this (we have to assume some values, and we have no idea what the correct values are). But for things like interpretability and corrigibility, are those really something only humans should be working on?

    Narrow AI is probably pretty good at assisting with most effective altruism causes and most existential risk prevention. Obviously it can lead to terrible outcomes, but engineering plant-based meat substitutes (a research-heavy field) and fighting global poverty (another research-heavy field) can be positively impacted by simply giving every researcher an awesome assistant that can scan the internet and code better than the best human alive. Narrow AI is going to become increasingly used to solve every problem. Why ignore it for the most important one?

Monday, April 17, 2023

We Need to Speed Up, Not Slow Down

     At the moment, there is a lot of discussion about putting a pause on AI capabilities research. An open letter from the Future of Life Institute has been signed by thousands of researchers, urging a 6-month pause on the training of models more intelligent than GPT-4. I would love for this to happen, as then society could take more time to absorb the impact of such a large technological shock. We will have more time to debate, discuss, and regulate. However, this is obviously an empty gesture. Someone with a tremendous ego and even more impressive lack of character will simply sign the letter and then immediately start their own AI lab focused on creating an AGI. His name is Elon Musk. China is not going to slow their progress, which means that the U.S. government has no incentive to either. If GPT-4 is a calculator, then Bard is a bundle of sticks, so there is no shot that Google is going to really sit on the sidelines for six months. What people fail to realize is what I stated in a previous post: the first trillionaire will be someone who owns a very large stake in an AI development company.

    The financial incentive to build AGI is not only enormous, it is the highest financial incentive we have ever seen and possibly the highest financial incentive we will ever see again. This will be an arms race to the finish no matter what the talking heads say on television or in Congress. I vehemently disagree with the idea that we should spend our time campaigning to slow down capabilities research. It is simply not possible. The financial incentives are too massive, and anyone who would actually follow an order to halt progress, an order that is completely unverifiable and ungovernable, is probably a more upstanding person who would thus leave development in the hands of less ethical people. I understand that there is probably a "good guy with a gun" fallacy here, but I really don't see why we should trust anyone to act against their own self-interest. Instead of this, we should be massively boosting alignment research.

    Since there is a lot of overlap between alignment and capabilities research (an aligned system is actually more capable, or will appear more trustworthy and be given greater responsibility even if there are existential flaws), we should focus on long-term value alignment. I could not care less about solving interpretability or distributional shift. Someone else is either going to do this or not, and there is actually a massive financial incentive in each case. Also, if we knew why a neural net made every decision, I am not sure if that would be good or bad for humanity at this point. The question we should ask is: "Where is there not a massive financial incentive?" Some sort of long-term value alignment, sure. The kind of "shoot for the moon" research that will only be beneficial if we hit AGI and go "oh wow, looks like superintelligence is pretty much imminent and we have no idea what we are doing." We should be spending trillions of dollars on this sort of research, not zero.

Saturday, April 15, 2023

Should We Build a Better God?

    God comes to you in a dream, and says "hey, next Tuesday I will cease to exist. On Wednesday, you are to design my replacement. You will choose the New God's moral principles and decide how active the New God will be in the life of future humans. On Thursday, the New God will take over and you will have no say in whatever happens next." Do you take up the offer? Do you tell him "actually you should make this all democratic, and the public should vote on each aspect of the New God." Do you say "actually I think Sam Altman would be better than me at designing a New God, you should ask him." This is essentially the dilemma we have with ASI.

    Before we get into choosing values, let's briefly discuss an even harder problem: ensuring that the New God follows through with our intentions. We have to be very careful what we wish for. In a short story called "The Monkey's Paw," a man is granted three wishes. The man first wishes for $200, and then the next day his son dies in a work accident and the family is compensated $200 by the son's company. Some folks at MIRI think that "figuring out how to aim AI at all is harder than figuring out where to aim it," and I'm actually inclined to agree. Both are insanely hard, but trying to incorporate any sort of value system in machine code seems near impossible. This is going to be the most important technical aspect of alignment research, but let's get back to discussing the choosing of values. Frankly, the choosing of values is actually possible and more fun to talk about.

    Now, who should choose? Do we want the New God's moral beliefs to be decided by a vote across the United States? Should it be worldwide, where the populations of China and India dominate the vote? Should citizens of authoritarian regimes get a vote? Should members of the Taliban? I honestly don't see how this differs much from the Constitutional Convention. We should probably have something similar for ASI, a conference among nations where we decide how humanity will design the value system of a future ASI. Some of the solutions from the Constitutional Convention will probably be applied. Maybe there are some votes based on pure population and some votes granted to specific countries or regions, similar to how the US has a House of Representatives (number of politicians based on the population of the state) and a Senate (two candidates per state). Frankly, this doesn't seem too different from what would be necessary for the formation of a world government.

    A world government is a simple solution to curbing existential risk. It's harder to have nuclear war if there's only one country, and it's easier to collaborate on worldwide decisions if there is only one government. Assuming this government is largely democratic, it is probably the only feasible way to account for humanity's aggregated moral principles and future desires. There are obviously huge risks to a world government (authoritarianism, value lock-in), but it is very possible that one will be established in the future. If ASI is developed, it will pretty much take the role that a world government would anyway, as it will be insanely powerful and an individual human will have essentially no sway over anything. A world government and ASI face the same Democracy vs. Educated Leaders trade-off. There are two options when building a better God:

1. Make the process totally democratic, so that every individual currently on Earth gets a say.

2. Let a small team of experts decide the future of humanity.

    Maybe this small team is better than the rest of the world at picking between trade-offs. Maybe they are more educated, more moral, and better at determining the needs of future humans who have yet to be born. Or maybe they are authoritarian and massively incentivized to achieve god-status and immortality themselves. Regardless, I actually do think we should establish a New Constitutional Convention. Call it the Building a Better God convention. Maybe a majority of the population opposes this creation, and in that case we will have our answer.

Friday, April 14, 2023

The First Trillionaire

    If you are investing for the future, you need some sort of prediction of how AI will develop over time. If you believe in short timelines and that AGI will arrive in the next ten years, you probably shouldn't invest in Sears or Walmart. Maybe some small, private AI lab will develop AGI and quickly become the real superpower of the world, but it is also likely that a large tech giant acquires the small lab and scales up the innovation. Given that compute seems to be a constraining resource, you should probably invest in companies with the scale to train these massive models.

    Should you invest in AI safety firms? Or tech companies with lots of AI safety standards? Probably not, as they will likely move slower than some of the "move fast and break things and maybe kill all of humanity in the process" firms that are bound to spring up. Still, I see AI capabilities research and AI alignment research as two sides of the same coin, so it could be the case that the companies more focused on safety create better products. Maybe these products conform more to consumer expectations, maybe they meet new regulations, or maybe consumers are less scared of them.

    If AGI actually arrives, we could see massive GDP increases across the world. If the stock market quadruples in three months, it probably doesn't matter what you are invested in as long as you are broadly diversified. More likely, in my opinion, a small subset of individuals receives all the money and power, as only the actual owners of the AGI become trillionaires. In my mind it is clear that the first trillionaire will be someone who owns a very large stake in an AI development company. The real question is: will there be a second?

How Important are Humans?

    When I was an undergraduate, I worked part time as a cashier at a convenience store on campus. The job wasn't particularly exciting (I spent a lot of time doing sudoku puzzles and secretly studying), but it funded my weekends and summers. The job of a cashier is a simple loop function: for each item the customer has, scan the item and place it into a bag. At the end, the customer swipes their credit card and pays for the items. Then the cashier looks the customer in the eye and says "have a good one." The hardest part of the job is avoiding any sort of social awkwardness.

    Self-checkout systems have replaced many cashiers. Soon, stores will likely rely on computer vision, and many items won't even need barcodes (why does the box of Frosted Flakes need a barcode? The camera can just see the product and debit its monetary value from your shopping account when you leave). The real question is: would I prefer a human or an AI responsible for checking me out of a grocery store? Honestly, I do not care either way. If every single cashier were replaced by an AI that used computer vision to run this same loop function (for each item, assign its cash value to a receipt that must be paid), that would be fine with me. Why is this important? Because we must determine where exactly humans will fit when AI takes over 90% of the jobs.
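    The "loop function" above is simple enough to sketch in a few lines. This is just a toy illustration of the point, not a real checkout system; the item names, prices, and function name are hypothetical.

```python
# A toy sketch of the cashier "loop function" described above.
# Whether a human, a self-checkout kiosk, or a computer-vision system
# runs this loop, the output is the same receipt.

def checkout(items: list[tuple[str, float]]) -> float:
    """For each item: scan it (add its price to the total), then bag it."""
    total = 0.0
    for name, price in items:
        total += price  # "scan" the item
        # ...place the item into a bag...
    return total

cart = [("frosted flakes", 4.99), ("milk", 2.50)]
owed = checkout(cart)
print(f"Total: ${owed:.2f}")  # the customer swipes their card for this amount
print("Have a good one.")
```

    The replaceability argument falls out of the code: nothing in the loop depends on who, or what, executes it.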

    I had an argument recently about AI art. The opposing claim was that AI art will never really be valuable: the Mona Lisa is valuable not because it is particularly stunning, but because of the human artist and human vision behind the work. I had another argument about generative AI writing books. The opposing claim was that people would pay a premium for books written by humans, as a book written by an AI doesn't carry the same artistic vision or meaning; as an example, we watch humans play chess and would never care about AIs playing chess. Here is the problem with all of this: if an AI writes a book that is substantially better than what humans are putting out, I am buying that book. Even if AI is only in the top 1% of human authors in terms of quality, I am reading that book before I am reading the other 99% of human authors. Some people will pay for American-made products, but most default to the cheapest option made in China. If Germany makes amazing cars on an automated assembly line, the consumer will probably buy those instead of less amazing cars made by a team of humans. Yes, the social aspect of human-to-human interaction is important, but behind the lens of a screen, we will soon scarcely be able to tell the difference. AI will have the capability to be extremely nice and incredibly helpful, better in most ways than the grumpy college cashier just there for the paycheck.

    I think a lot of people miss the point of generative AI. Yes, maybe people will have a preference for poetry written by humans. But given that AI will probably become really, really good and human-like at writing poems, how will we even know that a human who claims to write poetry doesn't just use AI? And if the AI poetry is absolutely beautiful, why would I ever read a human's work again? Personally, if AI starts creating sequels to amazing human movies that I know and love, and these films are in the same artistic style and just as high quality, I may never watch a human-made movie again. Why would I?

Mind Crime: Part 10

    Standing atop the grave of humanity, smugly looking down, and saying "I told you so," is just as worthless as having done noth...