Monday, July 21, 2025

Brain Farming

Brain farming: the commercialization of human brain matter as computational substrate.

 

Medical research and brain farming are distinct. The former may involve using brain organoids to understand disease and test treatments. Brain farming is the industrial-scale production of biocomputers that use human brain matter for profit.


It is a personal goal of mine to have brain farming globally banned by the end of 2026.

Sunday, July 20, 2025

The Price of Losing is Infinite

     Over the past few weeks, Meta has poached top research talent from competing companies (largely OpenAI and Apple) in order to build a team focused on "Superintelligence." Zuckerberg is certainly a CEO to be reckoned with. The company's stock price hit a low of $90 in late 2022, crashing from a historic high of $380 the prior year. Mark laid off 25% of his workforce and orchestrated one of the most dramatic corporate turnarounds in history. He backed the wrong horse with the Metaverse (that being early is still being wrong is a core tenet of investing), but now everyone knows that the game to be played is AI. The stock price is now $700, an almost 8x increase in only a few years (in the mega cap universe, that is quite insane). He is now the third richest person in the world, narrowly edging out Jeff Bezos. Mark is not a dumb guy. He understands two very simple truths: the entire world is racing to create machine superintelligence, and there is no prize for fifth place. In an ASI-driven future, the price of losing is infinite. Why shell out hundreds of millions of dollars for top research talent, unless you believe this? Is it truly so irrational to offer a top researcher a ten-million-dollar signing bonus, if they even slightly increase your probability of an infinite gain?

    I don't believe Mark is crazy for trying to gut OpenAI from the inside. If anything, he is not taking his position seriously enough.

Saturday, July 12, 2025

Protecting Novel Minds

Background:

    In March 2025, I published Mind Crime: The Moral Frontier of Artificial Intelligence. The book argues that if digital consciousness is possible, and humanity continues to race toward creating superintelligent machines, we could be sleepwalking into a horrific moral catastrophe. If we create digital minds capable of suffering without establishing safeguards in advance, we risk creating suffering on astronomical scales across cosmic timescales.

    Humanity has a consistent track record of failing to deliberate on the moral implications of new technologies before deploying them (from slavery to factory farming to nuclear weapons), often creating horrible outcomes that persist for generations before we morally progress. This challenge is compounded by our approach to superintelligence development. Unlike previous technologies where we could learn from mistakes and gradually course-correct, superintelligent systems could lead to a rapid centralization of power that could permanently lock in bad values.

    Since the book's publication, I’ve shifted my focus to the practical challenges involved with protecting digital minds. If my worldview is correct, we face a massive coordination problem with a rapidly closing window. And how do we build the political will necessary for this issue, when we struggle to coordinate on much simpler issues (animal welfare, AI safety, etc.)?

The Empathy Gap:

    Developing practical solutions to protect digital minds is difficult. Consciousness itself is complex, and digital mind rights interact with AI safety in intricate and sometimes conflicting ways. Many researchers who care deeply about suffering risks are also worried about info hazards and are generally reluctant to do advocacy or pursue the kind of practical actions that would result in a direct policy response.

    However, the greatest challenge is likely the empathy gap that results from this issue being so far outside the Overton window. Only a handful of individuals with little political capital care deeply about digital minds, and they are addressing an issue that most people either see as distant science fiction or ignore entirely. Unlike animal welfare, where we at least acknowledge that animals suffer even as we continue exploiting them, digital consciousness doesn't even register as a real concern for policymakers and the public. The abstract nature of potential digital suffering creates perfect conditions for moral complacency, making it nearly impossible to generate the urgency needed for proactive protection.

Cortical Labs

    In March 2025, Australian startup Cortical Labs launched the world's first commercial "biological computer," the CL1. For $35,000, individuals can purchase this shoebox-sized device that contains hundreds of thousands of living human neurons grown on silicon chips, studded with electrodes that send signals into the neural tissue and receive responses back. These aren't simulations of biology; they're actual human brain cells, reprogrammed from volunteer blood samples into cortical neurons, that form connections and learn from electrical feedback. Built-in life support systems (pumps, gas exchangers, nutrient circulation) keep the brain cells alive and functioning. Cortical Labs claims that these CL1 devices, due to their use of human neurons, can generalize from small amounts of data and make complex decisions that AI systems struggle with, all while consuming only a few watts of power compared to the kilowatts required by large AI models.

    What is most significant about Cortical Labs is their aggressive commercialization strategy. The company's goal is to get its "Synthetic Biological Intelligence" into as many hands as possible, and it will soon offer "Wetware-as-a-Service": cloud access through which individuals can remotely use these biological computing systems without needing specialized laboratory facilities. Multiple CL1 units can be networked together in server racks for larger-scale biological computing operations, making this the first time living brain matter has been commercially available as a computing substrate.

The Empathy Gap, Revisited

    It does not matter what your opinion on digital consciousness is: the prospect of commercializing the computational power of human biological neurons sounds potentially horrifying. For all the debate about consciousness and the different frameworks for understanding it, it seems pretty clear that humans are conscious, and our brains are made out of human neurons. Thus, it's reasonable to conclude that sufficiently advanced networks of human neurons could suffer. If Cortical Labs' technology is scaled up over time, we tangibly risk the creation of "suffering in a dish," widely available to be bought and sold in the marketplace. The abstract empathy problem we face with digital minds substantially shrinks.

    If we consider the broader category of “Novel Minds,” new forms of consciousness stemming from unnatural means (digital minds, biological-chip hybrids, grown brain organoids), we can see that the problem of potential suffering affects all of them, but that empathy increases the closer a substrate comes to human neural tissue, given our biological disposition toward it.

    Additionally, many of the other major obstacles paralyzing digital consciousness advocacy shrink as well. The info hazards here are much less substantial, as we are literally already pushing forward the commercialization of a technology that could broadly distribute “suffering in a dish.” The AI safety conflicts lessen dramatically too, and political will could be much easier to gather. Can you not imagine both sides of the political aisle scoring easy wins here? Both "playing God" and "runaway capitalism" are unfavorable but potentially apt framings, and even the least sophisticated American can understand that commercializing human neural tissue as a computing substrate sounds, well, horrifying.

Precedent Setting

    Focusing on novel minds in general, and narrowing in initially on products like the CL1, may provide an important window of opportunity for policy. The stakes here are potentially much lower than with digital minds, with smaller markets and fewer geopolitical complications. The policy wins are much clearer, with this technology offering concrete targets, visceral public reaction, and an easier route to building political will.

    The precedents we set could be transformative for the broader consciousness protection challenge. The route to banning or heavily restricting the commercialization of conscious entities could begin with biological computing systems and naturally extend to digital minds as they emerge. Success here would create legal frameworks establishing that consciousness, regardless of substrate, deserves protection from commercial exploitation.

Saturday, May 24, 2025

Reflections on Publishing

    This blog is interesting, in that it is entirely unknown to the outside world. While I have been publishing random thoughts and half-baked content on a webpage that is "technically" publicly available, I have yet to tell anyone of its existence. Having something external-facing is extremely important to me (to hold me accountable and engaged), but none of my ideas on EA or AI safety have made it into anyone else's brain. As such, it was very interesting to write and publish my first actual outward-facing content, Mind Crime. The actual publishing date is up for debate, as the book was complete in February 2025 and I was already sending free PDF copies to some people at that time. But given that my first article on the topic was in May 2023, I could claim this was essentially a two-year effort. The flurry of Mind Crime-related posts I wrote in September 2023 became the basis for the core content of the book. Two years later, I am now a published author on one of the most obscure but potentially most important issues in human history.

    As expected, no one has read the book. I have a single-digit number of reviews on Goodreads, and it is unlikely that the number of readers will ever reach double digits, despite the fact that I sank hundreds of hours into writing, editing, and publishing. I likely spent $8,000 or so of my personal savings on the project, and spent months wrecking my mental health thinking about existential philosophy, worse-than-extinction scenarios, and torture. Lots of torture. I also stressed continuously about impact, as the first published book in this space commands some amount of author responsibility. Over the last nine months (which in some sense covered the bulk of the important work on the project), I was also working essentially full-time and enrolled as a full-time student (handling a more intense course load than probably 95% of students in the history of my dual-degree grad program). After all of this work, all of this stress, and my insane lack of time, I am now finally done. The result is that a handful of family and friends read maybe a couple chapters of the book (if that), and it has done essentially nothing of value for anyone. There is only one question now: what's next?

    My head is already spinning with ideas. There are really two big options, both to do with advocacy:

1. Become a strong advocate for digital minds, continuing from where I left off with the book (this time focused on policy, governance, and creating a workstream of impactful tasks that could help make a difference).

2. Take an indirect route, and advocate strongly for policy and awareness on the dangers of commercializing consciousness through biological computing (Cortical Labs, etc.). This issue is fundamentally intertwined with digital mind rights; it is just lower-stakes and much more visceral.

    I think that option 2 is probably the most impactful path forward, unless we start seriously hitting AGI or speed into fast-takeoff scenarios. My plan for now is to use my very limited personal time not on thinking, but on taking serious action toward option 2. Then, if the world speeds past major milestones in the next few months, I will consider focusing solely on option 1. Maybe two years from now I will look back at my work in this area and chuckle at having thought that years of dedicated effort would result in anything widely impactful.

Thursday, March 13, 2025

Mind Crime: Book Preview

Preview PDF

First published version of the book! As of March 13, 2025, I have officially "published" the book. Kindle version pending.

Monday, January 6, 2025

The Opening of 2025

    Two and a half weeks after the press release of OpenAI's o3 model capabilities, potentially one of the most important days in the history of AI, and thus, the world, I sit down in my first class of the quarter at the University of Chicago. A class called "Artificial Intelligence." It has been over a year since my last post on this blog. A year in which, to my astonishment, AI capabilities progress has outstripped my already insane expectations. Autonomous AI agents will likely arrive this year, video and music realism are slowly ushering us into a new Wild West, and AGI is either already here or on the immediate horizon. Superintelligence is openly discussed, and AI welfare concerns are starting to slowly emerge. I have spent the last year devoted, on and off, to writing what is likely to be my first book: Mind Crime. It may, depending on the events of the coming two years, prove to be my last as well. After two and a half weeks of little sleep, with the realization that ASI is coming, and coming shockingly soon, I sit in a classroom and listen.

    My professor is clearly of the belief that we may soon enter another AI winter, and he polls the class on when they think AGI will arrive. Estimates are between 20 and 50 years. In a world where not a single exam can be passed by a human that cannot be passed by an AI, where every cognitive benchmark has been rendered useless, and where Sam Altman posted that very day that OpenAI's sights are now set on machine superintelligence, a classroom of students nods along to a professor who has an entirely incorrect version of reality. The world, with announcements of $80 billion AI investments by single companies in 2025 and likely less than $200 million spent on AI safety research a year, is entirely blind to the fact that building superintelligent machines may carry enormous risk to civilization. I am reminded of Watchmen: "If you begin to feel an intense and crushing feeling of religious terror at the concept, don't be alarmed. That indicates only that you are still sane."

    Staying sane these last three weeks has been relatively difficult. It is as if I am a man entirely alone, a complete outsider, swimming along in a world completely oblivious to the rapid pace of technological progress and to the push towards ASI. Either I am crazy, and wrong, and ASI is decades or millennia away, or essentially every single person I interact with daily is. And it is clear which scenario strikes the most fear in me: the second. It is hard to function in this state, hard to believe that I am potentially one of the unprivileged few who lacks a veil of ignorance and might have to witness society get shocked awake by thoughts I have been struggling with for years. Maybe it is better to have had a slow burn, to have come to these realizations before many others, and frankly, to have read Bostrom's work back in 2018. To have spent the time to think deeply, to have lain awake at night, and to have had the luxury to meet my goals before the wave hits, instead of being thrown overboard all at once. But unless I can affect the outcome, unless I can truly make a positive impact on the changing world ahead of me, unless my life is imbued with some level of cosmic significance for at least trying, one thing is clear to me at this point: I would trade it all.
