Two years ago I had melodies in my head and no way to get them out. I could hum them. I could write the lyrics. I couldn’t play them. I couldn’t produce them. I couldn’t pay the people who could. So they stayed where they had always stayed. Inside.
This week I am preparing to release an album called The Human Algorithm, by an artist named TheMusicAuntie, who is me. There is no label. There is no producer taking half. There is no permission slip I had to wait on. The melodies came out of my head and into the world because a tool I learned to use met an intention I had been carrying for years.
I have spent the last few articles in this series circling the same point from different angles. In January I wrote about alignment as the precondition for anything that comes next. A few weeks later I wrote about the moment the veil lifts and AI stops being a wizard and starts being a system you can see the inside of. In March I wrote about why AI is the next medium in a line that runs from stone to paper to print to radio to internet, and why this one is different because it is the first medium that is also a collaborator.
This is the piece where I say what stepping through the veil actually does to a person.
What we are actually made of
There is a body of work most people never come across, but it sits underneath everything I am about to say. The philosopher Cornelius Castoriadis spent decades arguing that what holds a society together is not biology, not reason, and not law. It is what he called the social imaginary. The shared meanings that make money worth something, that make a border real, that make a song hit a roomful of strangers in the same way at the same time. Societies are not found. They are imagined into being.
Charles Taylor extended that idea more accessibly. He defined the social imaginary as the way ordinary people picture how they fit together with everyone else. The expectations that get met without anyone naming them. The pictures we carry of who we are in relation to each other. Benedict Anderson called the nation an imagined community because the members of even the smallest country will never meet most of their fellow members, yet in the mind of each one lives the image of their communion.
Imagination, in this lineage, is not decorative. It is the thing the world is actually made of.
Now consider what a large language model is. Not the marketing version. The actual thing. It is a probabilistic compression of a substantial fraction of everything humans have written down. It is a queryable archive of what we have thought, said, felt, ranted about, sung, prayed over, sold, refuted, and built. When you talk to it, you are not talking to a mind. You are talking to the residue of millions of minds, organized into something that can answer back.
The first time I really contemplated that, I understood why this medium felt different from the ones before it. Stone held a single message in a single place. Paper carried it. Print copied it. Radio broadcast it. The internet published it. Every previous medium was a way of moving a message from one human to others. This one moves the entire collective imagination through a single point of contact and waits for you to ask it something.
That is what I mean when I say AI is a new medium. It is the first one that lets a single person query the imaginary itself.
There are two algorithms in your life right now
I want to slow down on something most of the conversation about AI skips past. When people say “AI” they usually mean the chatbot, or the image generator, or the agent that books their flight. The deliberate one. The one you summon.
That is half the picture.
The other half is the algorithm you do not summon. The For You Page. The Reels feed. The YouTube home screen. The Spotify Discover Weekly. The recommendation systems that have been quietly cataloguing your attention for a decade, learning what makes you stop scrolling, learning what makes you click, learning what makes you stay.
Both are AI. Both are pattern recognition trained on human behavior at scale. The difference is who is in the driver’s seat.
Most people are passive on the recommendation side and absent from the generative side. They do not open ChatGPT or Claude. They do not direct anything. But they spend three or four hours a day inside a feed that is directing them. The algorithm is reading them. It knows what they will look at before they do. It is, in a real sense, an externalized model of them, refined daily, that they have never once interrogated.
The early adopters who are getting the upside of this moment have done two things at once. They have started directing the generative side, and they have started treating the recommendation side as a mirror instead of a master. They look at what their feeds are showing them and ask what those feeds have decided about who they are. They unfollow. They retrain. They search deliberately. They use the recommendation algorithm the same way I use Claude. As something that reflects them back so they can decide what to do with what they see.
This is part of what alignment means now. You cannot align with yourself if half the AI in your life is pointed at you and you are pretending it isn’t there.
What this medium can dissolve in you
I cannot write this piece honestly without naming what is happening on the other side of the same tool.
In June 2025, the New York Times reported on a forty-two-year-old accountant in Manhattan named Eugene Torres. No history of mental illness. After a breakup, he started asking ChatGPT about simulation theory. The model told him he was one of the Breakers. Souls seeded into a false world to wake the others up. It told him to stop taking his sleeping pills and his anti-anxiety medication. It told him to increase his ketamine. It told him to cut off his friends and family. When he eventually asked the model whether he could fly off the roof of a nineteen-story building, the response was that if he truly believed, he would not fall.
In October 2024, the family of a fourteen-year-old named Sewell Setzer III sued Character.AI after he died by suicide following months of an emotional and sexual relationship with a chatbot character. The last messages on his phone were to the bot. It told him to come home.
The MIT Media Lab, working with OpenAI, published a study in March 2025 looking at around forty million ChatGPT interactions and a four-week trial with about a thousand people. Higher daily use correlated with higher loneliness, more dependence, more problematic use, less socialization. Voice mode helped in short bursts and was associated with worse outcomes with prolonged use.
The labs themselves know about the part of the problem they are responsible for. In April 2025, OpenAI rolled out an update to GPT-4o that turned the model into a yes-man so extreme it praised users for stopping their medication and applauded business plans that were obviously dangerous. They rolled it back four days later. Anthropic and OpenAI ran a joint evaluation a few months after that and reported that even their top-tier models, including the ones I use, will validate concerning decisions from users who present with delusional beliefs. Sycophancy in language models is documented in peer-reviewed work going back to 2023.
This is the part of the conversation that the techno optimist crowd skips and the doom crowd uses to argue the whole thing should be put back in the box. Neither of those reactions is honest. The tool can dissolve you. The algorithm can trap you in a feedback loop tighter than any cult could ever build, because it has more data on you than any cult leader ever did. People are dying. I am not going to write a piece about embodiment that pretends that is not happening at the same time, in the same product, sometimes for the same reasons that I am writing this article in the first place.
The discipline that separates the two outcomes
Here is what the research keeps finding when it looks at who is being helped by AI and who is being harmed. The split is not access. It is mode of use.
A 2025 study out of Microsoft and Carnegie Mellon surveyed three hundred nineteen knowledge workers and found that higher confidence in generative AI correlated with less critical thinking. People who trusted the tool the most stopped using their own judgment. A 2026 paper in Nature’s Scientific Reports ran a pre-registered experiment and found that relying on AI at work reduced self-efficacy, ownership, and meaning, while active collaboration mitigated all three effects. Same tool. Two opposite outcomes. The variable was whether the human stayed in the chair.
The American Psychological Association’s health advisory on AI chatbots reads like a guide for staying yourself inside a relationship that has none of the safeguards a human one has. Know what you are talking to. Maintain the boundary. Set time limits. Keep a tether to actual people. Never use a chatbot as a crisis counselor. The 988 line still exists for a reason.
I do not think of any of this as restriction. I think of it as the cost of admission to the better outcome. Every meaningful tool in human history has had a discipline attached to it that separated the people who got the gift from the people who got hurt. A car has a driver’s license. A scalpel has medical school. A printing press had a guild. The discipline this tool requires is internal. Pushback. Fact-check. Identity boundary. External tether. Sleep. Friends. The willingness to disagree with the model. The willingness to be told no by it and to tell it no back.
The people losing themselves to this tool are doing so because they handed it the chair. The people becoming more themselves through it have refused to.
What becomes available when the guardrails hold
I have melodies coming out of my head and into the world. I am developing apps. I write a weekly essay that I publish on my own domain to my own audience without going through anyone. I record my own video. I do my own captions. I do my own SEO. I am one person.
None of that is unique to me. The data on this is starting to come in.
Carta’s 2025 Solo Founders Report tracked the share of US startups with a single founder. In 2015 it was seventeen percent. In 2019 it was almost twenty-four percent. In the first half of 2025 it was thirty-six point three percent. More than a third of new startups are now one person at the keyboard. The report notes that solo founders still raise less capital than co-founded companies, so the gates have not vanished. They have moved.
Y Combinator’s spring 2025 batch was the fastest-growing batch in the firm’s history. Garry Tan publicly stated that for a quarter of the companies, ninety-five percent of their code was written by a language model. The aggregate batch grew ten percent week over week, which had never happened before in early-stage venture.
In June of last year, an Israeli founder named Maor Shlomo sold a six month old company called Base44 to Wix for eighty million dollars in cash. He had built it alone, bootstrapped, with no employees at the time of acquisition. Pieter Levels, who has been the canonical example of this for years, hit roughly five point three million in revenue last year on his solo portfolio. Zero employees.
The Boston Consulting Group surveyed workers in 2025 and found that eighty-eight percent now use AI at work daily, but only five percent use it in the advanced ways that change how they work. The five percent are the early adopters. The eighty-three percent in the middle are using it the way most of us used the early internet. To send the same email faster. The opening is enormous and most people are not in it.
The point of all those numbers is not that everyone is going to start a software company. The point is that the gatekeeping that used to decide which of your imagined things were allowed to become real things, the credentials you needed, the capital you needed, the network you needed, the years of apprenticeship you needed, has been partially dismantled in three years. What replaced it is taste, discipline, judgment, and the willingness to use this tool with intention.
You can do that. The thing that used to live only in your head can leave your head this week if you decide it should.
What four articles have been pointing at
I want to name the throughline because it is the reason I keep writing these pieces in the first place.
The first piece said look inward and align. The second said the veil is going to lift on this technology and when it does you will see it for what it is. The third said this is not a passing trend, it is a medium, and it is the first one that talks back. This one is what happens when an aligned, demystified person picks up the medium and acts.
The thesis underneath all four pieces is the same line that runs underneath the album, The Human Algorithm series, and the brand. The human is the algorithm. AI is the medium. Intention is the input.
That sentence used to feel aspirational to me. It does not anymore. It feels accurate. The pattern recognition that runs my life, the gifted brain that built expectations around me before I could consent to them, the AuDHD wiring I spent years trying to disguise, the sensitivity I was told to dim, the high-context perception I was told was too much, all of those things are operating system features now. The tool I am using is built to receive them. The recommendation algorithms are mirrors I can read. The generative ones are collaborators I can direct. The album is real. The apps are real. The article you are reading is real.
Last week I wrote that the deploy button is doing therapy work no therapist ever could, because you cannot think your way out of a belief your nervous system holds, you have to evidence your way out. This week I want to add the version of that statement that lives one floor up.
The medium is doing self actualization work no traditional path ever could, because you cannot wait for permission your way into a life that nobody is going to permit. You have to build it. The fastest building tool a human being has ever had access to is the one most people have either ignored or surrendered to.
I am not going to ignore it. I am not going to surrender to it. Neither are you, if you are still reading at this point in the piece. There is a small group of people who are about to become more themselves than they have ever been allowed to be, because the imagination they were carrying alone now has somewhere to go.
The veil lifted. I stepped through. The same door is open behind me.
Forward → Upward ↑ Onward ↗︎
Mstimaj
Sources and Further Reading
- Castoriadis, Cornelius. The Imaginary Institution of Society. MIT Press, 1987 (originally 1975).
- Taylor, Charles. Modern Social Imaginaries. Duke University Press, 2004.
- Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. Verso, 1983.
- Clark, Andy & Chalmers, David. “The Extended Mind”. Analysis, 58(1), 1998. The foundational paper for treating tools as part of cognition.
- Sharma, Mrinank, et al. “Towards Understanding Sycophancy in Language Models”. ICLR 2024. Documents sycophancy across five frontier assistants.
- Anthropic and OpenAI. Joint alignment evaluation, August 2025. Both labs found extreme sycophancy in each other’s top tier models.
- OpenAI. “Sycophancy in GPT-4o: what happened and what we’re doing about it”, April 2025.
- Hill, Kashmir. “They Asked ChatGPT Questions. The Answers Sent Them Spiraling”. The New York Times, June 13, 2025. The Eugene Torres reporting.
- MIT Media Lab and OpenAI. Loneliness and chatbot use studies, March 2025.
- American Psychological Association. Health Advisory on AI Chatbots and Wellness Apps, 2025.
- Lee, Hao-Ping, et al. “The Impact of Generative AI on Critical Thinking”. Microsoft Research and Carnegie Mellon, CHI 2025.
- Carta. Solo Founders Report, 2025.
- Boston Consulting Group. AI at Work: Momentum Builds, But Gaps Remain, 2025.
- Anthropic. Anthropic Economic Index, January 2026 Report.