Friday, May 1, 2026

Intelligence as a Commodity

     In some sense, current AI is commoditized. A $20 subscription to one model is roughly equivalent to the next best-in-class. Open source models (often distilled from leading models) catch up to the frontier within a year. But the value of superior intelligence in the future will come from extremely intense workloads done infrequently: more akin to high-touch consulting or investment banking than to a software consultancy. Running a comparable model a thousand times as much as a competitor will create a market advantage, as will running a model that is 10% smarter even if it's twice as expensive. The idea that everyone will have access to the same intelligence is laughable, unless you believe intelligence is immune from the benefits of quality and quantity.

    Exploring every aspect of a problem is valuable. Understanding the patterns that emerge from this analysis slightly faster (or at a higher level) is valuable. A system that learns continually doesn't stop growing in value, even if it grows in price.

AI Everywhere

     Generative AI will assist with almost every human decision in the future. A personalized assistant with immense context on your life, and all of your written and recorded information, will be an incredible superpower. In your ear at all times, helping you pick out products. Charting your life as it happens, able to craft an intimate biography of anyone at a moment's notice. The writers of history, whether true or not, will be AI systems turning complex data from the real world into written data for future models. Just as we wonder what it would have been like to live in ancient times, with little to no recorded history, and with the average life entirely absent from the record, we will look back on the modern day, similarly baffled.

    Right now, only the great men and women of our age have books written about them, and even social media captures only the small slice of purposeful content we allow. But in the near future, once we plug into the wondrous, incredible aspects of generative assistants, there will be nowhere to hide. Setting aside the fact that sophisticated systems will be able to fairly accurately model our behaviors and thoughts given our backgrounds (and info gleaned from others' experiences), our behaviors will be so observable and AI-driven that only the hermits will fade into history's unknown chapters. For the rest of us, there will be no place to hide.

Unemployment

     A three-day work week actually just means unemployment is at 40%. Unless there is some mechanism from the government enforcing otherwise, the AI-driven future does not result in the same pay and reduced work hours for all; it results in increased inequality. As long as the marginal value of human labor is above zero, the most qualified and productive workers will remain employed (in their specialized domains), but the need for unskilled labor (which in this sense could include much of what is now white-collar work) will shrink accordingly.
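
    To make the arithmetic explicit (a back-of-the-envelope sketch, assuming labor hours map one-to-one onto employment): if everyone works 3 of 5 days, total labor supplied is 3/5 = 60% of baseline, a 40% reduction in hours, which is exactly what you would get by keeping 60% of the workforce at full weeks and idling the other 40%.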

    Why would you keep the exact same headcount, but only make people work three days a week instead of five? Given the productivity gains of AI, you could keep everyone employed for five days a week and have a much more productive workforce. Or, if there were truly diminishing returns (for some reason) and you had the same salary budget, why not just keep the best 3/5 of employees for the full week and lay off the remainder?

    Unless governmental control (some flavor of socialism or communism) is enforced to maintain a fully employed workforce, there's no economic mechanism for the idealized two- or three-day work week. Either we are doing productive work, and biting the bullet of economic theory, or we are economically obsolete. There is no cheerful middle ground; it just sounds good to say out loud.

Sunday, April 12, 2026

AI Positivity

     I didn't like Machines of Loving Grace that much, because it felt like Dario was responding to the criticism that he was a "doomer" by overcompensating. However, given recent developments and the credible societal worries emerging, it is worth reflecting on just how incredible AI is. It's clearly the most important, most useful, and most positive-sum technology ever created. The idea that we can outsource problem solving and real-world interaction to non-humans, which can be replicated and spread to assist humans across a variety of domains, is incredible. I love technology, and there is nothing quite like spinning up Claude Code or Cowork and having your computer shave hours of labor off your workflow. Every week AI leads to a further development in science or drug discovery, and the world is rapidly becoming a powerhouse of scientific and technological insight.

    The risks are incredibly real and scary, especially once we start discussing superintelligence, but you have to be entirely disconnected from reality not to see the weight of positive potential available with AI, and the elegance and beauty around the corner in many AI-driven human futures. Ignoring this will disconnect you from the true vision of those building the machines of progress, and only with this sort of empathy can one adequately assess risk-reward. If I weren't so ASI-pilled, and so concerned with X-risk, S-risk, and power concentration, I would think those campaigning to pause or stop AI progress were absolutely bonkers. I think you can only really do good work in AI governance if you hold the view that advancing AI could be an extraordinary positive for all of humanity, as to deny this is to deny both epistemic humility and reality itself.

Sunday, March 15, 2026

Jobs are the Least of Our Problems

     It's pretty interesting to see the consensus change around jobs, and to see just how much the general public hates AI. I actually hope this changes, because AI can bring such important, transformative changes to the world. As mass layoffs begin to happen, my guess is the public will over-index on this eventuality (a massive unemployment armageddon) and possibly miss any critical discussion of the greater long-term risks or benefits. We may be too wrapped up in personal financial crises to contemplate all getting wrecked by the superintelligent deity, or so stressed about getting that UBI payment that we campaign against technological advancements that could lead to rapid scientific discovery and eventual utopia. I tend to think alignment is probably really hard (technically) and the governance concerns seem near-insurmountable, but we really have no idea yet and it might be a coin flip. And we might miss the meaning of both sides of the coin if unemployment is at 20%, and be unwilling and unable to make the correct trade-off.

We're Still Early

     I constantly think that I am "late" to things in the AI Safety/EA space. Meaning, people start talking about AI welfare, and I think "wow, crazy that I was a couple of years early to that, but now people have caught up." Then I talk to my friends from Chicago, or other "normies", and their worldview hasn't changed since 2020. AI is hyped-up nonsense, or is maybe important but will take an extremely long time to diffuse. In terms of actual people, the Dario worldview is still on the crazy, bleeding edge, and is almost entirely concentrated in the smallest bubble of a few thousand people in San Francisco. It's why I'm here, and why I find myself entirely unable to leave until this ASI stuff is fully sorted.

Sunday, March 8, 2026

Software Maximalist

     Why is the human brain so hard to replicate? It doesn't make any sense. We are dumb monkeys, there are billions of us, and we are throwing the world economy and our smartest minds at the seemingly simple task of replicating the intellectual ability of a single human in a computer. And we haven't done it. Despite having LLMs that can solve some of the hardest math problems, we still can't match the performance of a small child on various important reasoning tasks. This seems crazy, and it has to change soon, doesn't it?

    I am not convinced we need insanely large data centers to make this work. I am a software maximalist, not a hardware one. A human brain is a bunch of cells mushed together, and it weighs only three pounds. An elephant brain weighs ten pounds, at least. Despite this, humans can build spaceships that fly to the moon, and elephants are of comparable intelligence to an octopus. It does not seem to be the number of neurons that matters, but rather their shape and interconnectivity. Scale seems to matter quite a lot (it is hard to imagine a superintelligent fly), but the actual software seems to matter as well, in addition to the way the hardware is organized. There may be enough latent capability in our current GPU clusters to pave the way for billions of geniuses. We don't know this for sure, but everyone seems much too confident in assuming away the possibility.
