Sunday, March 15, 2026

Jobs are the Least of Our Problems

     It's pretty interesting to see the consensus change around jobs, and to see just how much the general public hates AI. I actually hope this changes, because AI can bring such important transformative changes to the world. As mass layoffs begin to happen, my guess is the public will over-index on this eventuality (massive unemployment armageddon), and possibly miss any critical discussion of greater long-term risks or benefits. We may be too wrapped up in personal financial crises to contemplate all getting wrecked by the superintelligent deity, or we may be so stressed about getting that UBI payment that we campaign against technological advancements that could lead to rapid scientific discovery and eventual utopia. I tend to think alignment is probably really hard (technically), and the governance concerns seem near-insurmountable, but we really have no idea yet and it might be a coin flip. And we might miss the meaning of both sides of the coin if unemployment is at 20%, and be unwilling and unable to make the correct trade-off.

We're Still Early

     I constantly think that I am "late" to things in the AI Safety/EA space. Meaning people start talking about AI welfare, and I think "wow, crazy I was a couple of years early to that, but now people have caught up." Then I talk to my friends from Chicago, or other "normies," and their worldview hasn't changed since 2020. To them, AI is hyped-up nonsense, or maybe important but sure to take an extremely long time to diffuse. In terms of actual people, the Dario worldview is still on the crazy/bleeding edge, and is almost entirely centralized in the smallest bubble of a few thousand people in San Francisco. It's why I'm here, and why I find myself entirely unable to leave until this ASI stuff is fully sorted.

Sunday, March 8, 2026

Software Maximalist

     Why is the human brain so hard to replicate? It doesn't make any sense. We are dumb monkeys, and there are billions of us, and we are throwing the world economy and our smartest minds at the simple task of trying to just replicate the intellectual ability of a single human in a computer. And we haven't done it. Despite having LLMs that can solve some of the hardest math problems, we still can't match the performance of a small child on various important reasoning tasks. This seems crazy, and it has to change soon, doesn't it?

    I am not convinced we need insanely large data centers to make this work. I am a software maximalist, not a hardware one. A human brain is a bunch of cells mushed together, and it weighs only three pounds. An elephant brain weighs ten pounds, at least. Despite this, humans can build spaceships that fly to the moon, and elephants are of comparable intelligence to an octopus. It does not seem to be the number of neurons, but rather their shape and interconnectivity. Scale seems to matter quite a lot (it is hard to imagine a superintelligent fly), but the actual software seems to matter as well, in addition to the way the hardware is organized. There may be enough latent capability in our current GPU clusters to pave the way for billions of geniuses. We don't know this for sure, but everyone seems much too confident in assuming away the possibility.

Moral Value is Neural

     All moral value is derived from biological neural connections. Nothing else. It seems pretty clear that, because of this, we should be very sensitive about what we use such connections for. Everything that interacts with the physical world is based within it, at least to our knowledge, so we should be very worried about how we treat these components of entities that seem to have subjective self-experience. Qualia is all that matters, nothing else. Where does qualia come from? Where does pain, pleasure, or the "experience" of achieving goals stem from? Where are those decisions hatched in the first place? Well, it's clearly the brain.

    As a result, we should be very protective of neurons. Any form of biological neuron, human or otherwise, should be treated with unreserved sanctity. These are the building blocks of moral states, our subjective experience, and possibly everything that could matter. We should tread carefully and wisely, and probably not create incentives for widespread suffering where such important components are simply a means to an end.

AI Positivity

     I didn't like Machines of Loving Grace that much, because it felt like Dario was responding to the criticism that he was a "d...