Tuesday, June 13, 2023

The Lone Genius

    Either humanity will solve AI alignment, or we won't. Whether we do or don't depends largely on the type of alignment ecosystem we build out beforehand. Not only is deep learning difficult, but it requires vast resources (algorithmic expertise, computational power, training data) and thus a large number of human inputs. Humans will write the algorithms, humans will run the computational facilities and build the GPUs, and humans will create and clean the required training data. I would compare the creation of AGI to that of the atomic bomb in many ways. You need a certain level of research progress: "hey, atoms make up everything and you can split them, which causes a chain reaction, which we can use to make a really, really big bomb that could kill thousands of people." This moves from theoretical to applied science in a messy way: "hey, we are at war with the Germans and they have really smart scientists. If we don't make this super-weapon, they will." For the atom bomb, the full industrial weight of the United States military was put behind development. And then, at some point, it gets applied: "well, we just dropped the weapon a couple of times and killed hundreds of thousands of people." During this process, an entire field of science (nuclear research) was involved. An entire landscape of military might was utilized. The development and testing of the bomb required an entire complex of bomb-able real estate, industrial machinery, and American workers.

    Contrast this to genetically engineered pandemics. As we saw in the 2001 anthrax attacks, a very small number of people (or even a single person) can create and deploy a bioweapon. Yes, decades of research in chemistry and biology will pave the way for such weapons (please, for the love of god, stop publishing research on how to make vaccine-proof smallpox), but an individual terrorist, given the right skill set, could synthesize a horribly transmissible and deadly virus. Maybe some state actor vaccinates its population and then releases super-smallpox on the rest of us, but it is more likely that a single individual with a school-shooter mentality learns biology. This is something we need to protect against (again, open source is good for some software, not genetically engineered bioweapons of mass destruction).

    AGI, at this point in human history, is likely to be much more similar to nuclear weapons. The work of an entire field of researchers and an entire industry of engineers will lead to the development of AGI. Such a massive set of training data and such a large amount of compute are simply not accessible to lone individuals. There is a certain romanticization of the "lone genius": people such as Einstein, who contributed massively to their fields by breaking away from the standard line of thinking and jumping to revolutionary conclusions seemingly overnight. There are also engineers with massive individual impact, such as Linus Torvalds (creator of the Linux kernel and Git). However, even these contributions happened within a larger ecosystem and were followed up by critical additions from their spiritual descendants. In some fields, a lone genius can create (Linux) or destroy (Smallpox 2.0). In the world of AI, it seems we are stuck with organizational-level change. This can be a blessing, or it can be a curse. Who do you trust more: organizations (companies, governments, NGOs), or individuals (Einstein, Linus, the unknown individual who killed five people with anthrax)?

