Background
In March 2025, I published Mind Crime: The Moral Frontier of Artificial Intelligence. The book argues that if digital consciousness is possible, and humanity continues to race toward creating superintelligent machines, we could be sleepwalking into a horrific moral catastrophe. If we create digital minds capable of suffering without establishing safeguards in advance, we risk producing suffering on astronomical scales across cosmic timescales.
Humanity has a consistent track record of failing to deliberate on the moral implications of new technologies before deploying them (from slavery to factory farming to nuclear weapons), often creating horrible outcomes that persist for generations before we morally progress. This challenge is compounded by our approach to superintelligence development. Unlike previous technologies, where we could learn from mistakes and gradually course-correct, superintelligence could drive a rapid centralization of power that permanently locks in bad values.
Since the book's publication, I've shifted my focus to the practical challenges of protecting digital minds. If my worldview is correct, we face a massive coordination problem with a rapidly closing window. How do we build the political will this issue requires when we struggle to coordinate on much simpler issues (animal welfare, AI safety, etc.)?
The Empathy Gap
Developing practical solutions to protect digital minds is difficult. Consciousness itself is complex, and digital mind rights interact with AI safety in intricate and sometimes conflicting ways. Many researchers who care about suffering risks are deeply concerned about info hazards and are generally reluctant to do advocacy or pursue the kinds of practical actions that would prompt a direct policy response.
However, the greatest challenge is likely the empathy gap that comes from this issue sitting so far outside the Overton window. Only a handful of individuals with little political capital care deeply about digital minds, and they are addressing an issue that most people either see as distant science fiction or ignore entirely. Unlike animal welfare, where we at least acknowledge that animals suffer even as we continue exploiting them, digital consciousness doesn't even register as a real concern for policymakers or the public. The abstract nature of potential digital suffering creates perfect conditions for moral complacency, making it nearly impossible to generate the urgency needed for proactive protection.
Cortical Labs
In March 2025, Australian startup Cortical Labs launched the world's first commercial "biological computer," the CL1. For $35,000, individuals can purchase this shoebox-sized device, which contains hundreds of thousands of living human neurons grown on silicon chips, studded with electrodes that send signals into the neural tissue and receive responses back. These aren't simulations of biology; they're actual human brain cells, reprogrammed from volunteer blood samples into cortical neurons, that form connections and learn from electrical feedback. Built-in life support systems (pumps, gas exchangers, nutrient circulation) keep the cells alive and functioning. Cortical Labs claims that because the CL1 uses human neurons, it can generalize from small amounts of data and make complex decisions that AI systems struggle with, all while consuming only a few watts of power compared to the kilowatts required by large AI models.
What is most significant about Cortical Labs is its aggressive commercialization strategy. The company's goal is to get its "Synthetic Biological Intelligence" into as many hands as possible, and it will soon offer "Wetware-as-a-Service": cloud access that lets individuals use these biological computing systems remotely, without needing specialized laboratory facilities. Multiple CL1 units can be networked together in server racks for larger-scale biological computing operations, making this the first time living brain matter has been commercially available as a computing substrate.
The Empathy Gap, Revisited
Whatever your opinion on digital consciousness, the prospect of commercializing the computational power of human biological neurons sounds potentially horrifying. For all the debate about consciousness and the different frameworks for understanding it, it seems pretty clear that humans are conscious, and our brains are made of human neurons. It is therefore reasonable to conclude that sufficiently advanced networks of human neurons could suffer. If Cortical Labs' technology is scaled up over time, we tangibly risk the creation of "suffering in a dish," widely available to be bought and sold in the marketplace. The abstract empathy problem we face with digital minds shrinks substantially.
If we consider the broader category of "Novel Minds," new forms of consciousness arising through unnatural means (digital systems, biological-chip hybrids, grown brain organoids), we can see that the problem of potential suffering affects all of them, but that empathy comes more readily the closer a substrate is to human neural tissue, given our biological dispositions.
Many of the other major obstacles paralyzing digital consciousness advocacy shrink as well. The info hazards are much less substantial, since we are literally already pushing forward the commercialization of a technology that could broadly distribute "suffering in a dish." The conflicts with AI safety lessen dramatically, and political will could be much easier to gather. Can you not imagine both sides of the political aisle scoring easy wins here? Both "playing God" and "runaway capitalism" are unflattering but potentially apt framings, and even the least sophisticated American can understand how commercializing human neural tissue as a computing substrate sounds, well, horrifying.
Precedent Setting
Focusing on novel minds in general, and on products like the CL1 in particular, may provide an important window of opportunity for policy. The stakes here are potentially much lower than with digital minds, with smaller markets and fewer geopolitical complications. The policy wins are much clearer: this technology offers concrete targets, visceral public reaction, and an easier route to building political will.
The precedents we set could be transformative for the broader consciousness protection challenge. The route to banning or heavily restricting the commercialization of conscious entities could begin with biological computing systems and naturally extend to digital minds as they emerge. Success here would create legal frameworks establishing that consciousness, regardless of substrate, deserves protection from commercial exploitation.