Wednesday, April 19, 2023

Using Narrow AI to Solve Every Problem

     It is very possible that I do not yet grasp the difficulty of producing novel alignment research. It could well be the case that true, genuine leaps of knowledge of the general relativity sort are needed, and we simply need to find the right team of Einsteins. Some people seem to think that narrow AI can't help with solving the alignment problem and that you really need something at the AGI level in order to make progress. At least that is my understanding of some of MIRI's conversations, which are nearly incomprehensible. If people in the Rationalist/AI Alignment/LessWrong community talked in plain English, the past 20 years of spinning wheels could have been avoided. Anyway, their argument seems to be this: by the time you have an AGI powerful enough to solve alignment, you have an AGI powerful enough to wreck a whole lot of things (including the human race). Maybe so, if "solve" is your goal, but even "assist with solving" is met with steep resistance, and I can't see how that position holds. Large language models such as GPT-4 are insanely good at providing sources, explaining complex topics, and programming. During one of my discussions with the head of an AI research lab, I was told that one of the main bottlenecks of research is all the administrative work. Well, if hiring a thousand more workers would be beneficial (since they could help format, write summaries, check plagiarism, compile sources, test code, etc.), is it not the case that hiring ten employees who are skilled at using advanced LLMs would be just as beneficial?

    I have been using ChatGPT extensively, and it is clearly one of the greatest technological achievements of the past century. It is insanely useful in every aspect of my work life, and it is very clearly going to replace a lot of white collar jobs. What are alignment researchers doing that ChatGPT cannot? Or, what are they doing that could not actually benefit from such an incredible resource? It seems that the coming wave of narrow AI, including the generative AI systems that keep exploding in usefulness, is going to transform nearly every industry. Medicine, finance, technology, journalism, and more will be massively transformed and improved. There are so many use cases: cancer scans, fiction writing, translations, virtual assistants, even relationship advice and therapy. Why are people so convinced alignment research is the sole holdout? I think it ties back to this strange savior complex: the idea that only a small subset of people truly know this battle between good and evil is happening, and only this small subset is smart and moral enough to take on an inevitably losing battle (so that they can say "I told you so"). It all seems so weird. Obviously we are not going to code first-principles moral values into a machine. Gödel's theorem and the god debate are clear on this (we have to assume some values, and we have no idea what the correct values are). But for things like interpretability and corrigibility, is that really something only humans should be working on?

    Narrow AI is probably pretty good at assisting with most effective altruism causes and most existential risk prevention. Obviously it can lead to terrible outcomes, but engineering plant-based meat substitutes (a research-heavy field) and fighting global poverty (another research-heavy field) can be positively impacted by simply giving every researcher an excellent assistant that can scan the internet and code better than the best human alive. Narrow AI is going to be used increasingly to solve every problem. Why ignore it for the most important one?

