Sunday, March 5, 2023

The Impossibility of Human Flourishing

    Should individual companies be allowed to construct and distribute superintelligent systems for personal use? How much weight should be placed on doomsday scenarios, and would the government, with all its ineffectiveness and tyranny-potential, be a better substitute? It seems to me that a small group of highly moral, highly motivated persons who lack a strong profit incentive would be a better kick-starter for AGI than some sort of world government. The problem is, I am not sure such a group is possible. The profit motive may not be bad in itself, but it will ensure there is an incentive to cut corners and reduce safety protocols in the name of progress. Various business and antitrust dilemmas also emerge, and regulation tends to be the only saving grace against drug-dealer-like competition. The first company to develop AGI could be the last. Given this, it is important to get it right initially, or at least within the first few months. I am not quite sure why AGI hasn’t been developed yet. I assume it must mean that humans are pretty stupid, given that the human brain evolved from a massively wasteful process focused not on intelligence but on survival and reproduction. Much of this evolution was random, and there are plenty of flaws in the human body and brain. I’ve had people tell me that we simply don’t have enough compute to properly simulate a brain, but given the scale of processing power across the internet and the relatively lackluster power of an individual brain, I doubt this will remain the case for long.

    Our progress as a species has become more rapid, and I would be surprised if AGI were not developed in the next century or two. We are simply too smart to face this roadblock forever, and an individual human is simply too dumb. What I am saying is simple: the human brain cannot be exponentially smarter than that of a monkey. If it were, our capacities for suffering would not overlap so greatly. The human brain is advanced, but only relative to the mind of a close evolutionary cousin of the monkey. It seems unlikely that we cannot collectively, across eight billion minds, create a single mind spread across trillions of terabytes of storage and compute. If we can simulate one mind, many more are likely to follow. Artificial intelligence is likely the most important technology in human history. A century ago, it was the nuclear bomb. We have not solved the problem of nuclear proliferation, and every moment lies a button-click away from near-total annihilation. That this is not a daily, crippling thought for everyone is a testament to the power of compartmentalization. Given that humans are constantly on the brink of nuclear war, with the only defense being mutually assured destruction, I am not sure why we are confident the same will not happen with AI.

    AGI does not ensure any sort of mutually assured destruction. Unlike with nuclear weapons, the first country to control AGI will likely be the first to develop superintelligent AI. If this superintelligent AI is created with the “wrong” values, will that not ensure complete dominance? A superintelligent AI will probably be able to halt the progress of other AI, whether out of self-preservation or on the instructions of a puppet master. I doubt this will take the form of killer robots; more likely it will be spoofing. An ASI will likely be able to convince other countries that AGI is impossible, or perhaps decades away. I am not entirely confident that ASI is absent from the world, although it is impossible to prove a negative. I do think every additional advancement in AI makes it more likely that ASI is already possible. It is almost like finding microorganisms on Mars: worrisome, because it would mean one less candidate for the Great Filter behind us, and thus a greater chance that the filter lies ahead. The Fermi paradox can probably be applied to AI. If ASI hasn’t yet been developed, why is that? One reason could be that AGI is simply far away, and we lack the algorithms and compute at this point to create it. This logic must extend to the claim that at some point in the future (absent some existential event) humanity will create AGI. The arguments against AGI have clearly been based on the “god of the gaps” fallacy, and given the developments of the past five years, many of them look just as ridiculous as the old arguments against the usefulness of the internet. The goalposts will continue to move, but more and more people are waking up to reality. As these roadblocks are knocked down, it becomes increasingly likely that ASI already exists. It could even be argued that a sudden stagnation in AGI progress would signal the development of ASI, as such a superintelligence could be preventing any sort of detection or competition. If ASI currently exists, it is possible we will never know. This day may be our last, or our memories could be false. Regardless, I think we should stick with the assumption that AGI and ASI have not yet been created, but are shockingly near.

    ASI goal alignment is an interesting topic. On one hand, I think that making an ASI compatible with human values is extremely important. On the other, what exactly are human values? If an ASI were to aggregate the moral philosophy of millions of academics, it would land on some form of moral relativism. Should we ensure that ASI is not a nihilist? In some formulations, a utilitarian ASI could actually contribute more to human suffering than a Cioran-like ASI that refuses to do any work out of protest. How do we ensure that the utilitarianism an ASI pursues is correctly calibrated? Should we use ASI to try to determine the best set of objectively moral values? As an avid reader of philosophy, I am extremely worried that nihilism is actually true. If it is, it probably doesn’t matter whether ASI takes over humanity and tortures us for near-eternity. Still, there is some sort of Pascal’s wager argument here, even if it faces the same problems as religious belief.

    I think that ASI is probably humanity’s best chance at finding the correct moral system. In this regard, ASI could be infinitely useful. We probably won’t know whether the moral system an ASI develops is correct, but I’m not sure we will have any compelling competing choice. The moral beauty of some works of fiction mirrors the best parts of religious teaching, so I am quite sure an ASI will at least be able to make its moral system entirely convincing to us. Maybe killing humans is morally right, and the ASI will actually be doing something objectively good. Regardless, we should ponder whether we want to align an ASI with human values, or have it align itself with the true objective moral values of the universe. To be honest, I am not sure which of those is harder.

