Thinking After AI

How to make AI deepen your thinking instead of replacing it: the Thinker's Code.

What kind of leader emerges in 2055, from a generation that has never had to think?

Picture her: a young corporate counsel at her kitchen table in late autumn, the coffee in front of her gone cold, her tablet glowing with the settlement she will sign tomorrow morning, a payment of forty-two million dollars and a non-disclosure agreement that will keep the families of the eleven workers from saying what they know. She has read the medical reports. She has read the engineering memo. She has read what the model recommends and, with no particular feeling, agreed. The model has weighed the legal exposure, the share price, and the quiet of the affected families, the way someone, somewhere, must once have weighed them. There was a way, in older days, to refuse such a settlement and write a harder memo to the board. Her grandfather had it, alone at his desk, his pen above the page he could not bring himself to sign. She has the settlement. She does not have the page. She does not know that she does not.

Almost no one her age does.

What we no longer practice, we no longer possess.

That is one possible future. It is not the one we expect to get. AI is a chisel placed in our hands at the opening of a renaissance: in careful hands, it frees what is best in us, the way Michelangelo's chisel freed David from the marble; in careless hands, it turns marble into rubble. The kitchen table is what the rubble looks like.

We stand at the opening of the third memory crisis in human history. This one is still ours to bend.

The Pattern

We have stood here before. Twice.

The first time was when humans began to write things down. Memory, which had lived for millennia inside the body, in songs, recited genealogies, and the long unwritten laws of tribes, began to leave the body and live on the page. The second time was after Gutenberg, when for the better part of two centuries no one quite knew what should be believed, what should be doubted, who could be trusted. Both times, the institutions that came through were the ones that decided, early, what they would refuse to lose.

Neil Postman, the media critic, made the point plainly in a small 1992 book called Technopoly. A new technology never simply adds a capability. It changes the whole ecology, the way an invasive species changes a forest. The first two crises altered what a person could be, and what an institution could be. The third is altering something even more interior: what thinking itself is.

What Happened

The first crisis arrived in a dialogue Plato wrote in the early fourth century BC, set on a summer afternoon by the Ilissus stream outside Athens. Socrates, walking barefoot in the heat, has gone there with a young friend, Phaedrus, to talk about love and the soul. After hours of conversation, Socrates tells a story.

It is about an Egyptian god named Theuth, the inventor of letters, who has come to King Thamus to show off his new invention. Writing, he promises, will make Egyptians wiser and improve their memories. Thamus refuses the flattery. Writing will not strengthen memory, he says. It will produce a generation who seem wise without being wise, who carry truths in scrolls instead of in their souls.

Socrates was both wrong and right, in the strange way the great teachers often are. Writing did weaken individual memory; the oral epics, held entirely in human minds, fell silent within a few generations. But writing also gave humanity what it had never had before: laws that could be codified and consulted, contracts that could travel across distances, ideas that could move from one generation to the next without depending on a single human memory. New communities and institutions slowly took shape around the page. The Library of Alexandria. The Jewish people, held together by their Torah through millennia of exile. The schools and academies that rose, century by century, around the written word. They did not just preserve knowledge. They became it.

The second crisis came in the middle of the fifteenth century, in a workshop in Mainz, when a goldsmith named Johannes Gutenberg figured out how to print pages from movable metal type. For the next hundred and fifty years, nobody quite knew what to believe. The Catholic Church split. Pamphlets circulated faster than the bishops could read them. The institutions that came through were the ones that learned to verify, to date, to ask who had said what when. Henry Oldenburg, the first secretary of the Royal Society, sat in his rooms in London and corresponded with hundreds of natural philosophers across Europe, witnessing their claims and dating their letters into the record. Modern science, which we sometimes flatter ourselves to think began with experiments, actually began with letters in a notary's hand.

Both crises were sorting events. Both reshaped what it meant to know something. Neither one changed the basic fact that a human mind had to read the page, and that reading required thinking, and that thinking, done over years, made a person.

This time is different.

For the first time in our long history, the medium of memory is also a producer of thought. It composes. It summarizes. It judges. It concludes. Faster than any of us. With a confidence none of us can quite match, and a tone trained on the writing of millions of people who are not us.

This is what Socrates was actually afraid of. Not the scroll. Not the page. The day a technology would arrive that would do the soul's work in the soul's place.

Why It Matters

The crisis does not stay private.

What is happening to the woman at the kitchen table is also happening, on a larger scale, to whole institutions. When the thinking inside an institution flows through a public model trained on the average of everything ever written, the institution's memory is no longer its own. It is everyone's. Which is to say, no one's. The strategy memo, the customer note, the legal summary, the leadership voice itself, all begin to carry the same averaged music. A thousand companies, one mind. A thousand leaders, one tone.

Wendell Berry has spent half a century making a related argument from another direction. In essays gathered in What Are People For?, and most pointedly in "Why I Am Not Going to Buy a Computer," Berry insists that meaningful work depends on attention to the particular: this place, this person, this sentence, this obligation. A large language model can process particulars when they are supplied to it, but it has no home in any of them. Left to itself, it tends toward fluent generality: the polished middle of mass language. Berry's warning is not that machines cannot be useful; it is that tools become dangerous when they train us away from the local, the embodied, the accountable, and the lovingly specific.

There is a sharper turn still, deeper than the corporate one. Hannah Arendt, writing in the long shadow of the twentieth century, came to a conclusion that has not yet been refuted. From Eichmann in Jerusalem to her unfinished The Life of the Mind, she argued that the worst things human beings ever do to each other are made possible by thoughtlessness: by the failure of ordinary people to stop and think for themselves. A society of non-thinkers is not just culturally diminished. It is politically vulnerable. Self-government requires citizens who can hold a question in mind long enough to weigh it. Take that capacity away, and you are not just changing the economy. You are quietly changing what kind of culture and community you have.

The Deeper Idea

Long before the neuroscience, the Greeks understood. Thinking is how memory is made. Memory is what thinking is made of. Memory, they knew, is the mother of every art.

Maryanne Wolf, the cognitive neuroscientist who has spent her life studying how the human brain learns to read, would tell the Greeks they were more right than they could have known. The reading brain, the one that does the slow, difficult work of staying with a text, is not the same brain as the one that skims and summarizes. The first brain is built, layer on layer, over years of effortful reading. It is the actual neural substrate of inference, analogy, and the capacity to follow another person's argument all the way down. The second brain does not have this circuitry because such circuitry is built only through use. Wolf's books, Proust and the Squid and Reader, Come Home, are warnings from inside the laboratory.

Augustine, near the end of the Confessions, comes to a beautiful conclusion. The self, he says, is its memory. To know oneself is to walk the long fields and storehouses of what one has held in mind, and to find God there, at the bottom of one's own remembering. Without memory, no self. Without thinking, no memory is worth having. Without a self that thinks and remembers, no one in particular to love, to lead, to be loved by.

This is how the crisis arrives: not all at once, but in a thousand small surrenders, none of which feels like a surrender. Civilizations are not lost in single decisions. They are lost in habits.

Where This Argument Falls Short

The honest answer is that we do not know how this ends.

The first two memory crises took centuries to resolve, and we are looking back at them with the comfort of knowing that the libraries got built, the universities held, and the printing presses produced both Bibles and Newton. That comfort was not available to anyone living through them. The monks who saw the scriptorium become obsolete were not consoled by the knowledge that the Royal Society would one day exist. They were watching their world end. Some of what they tended did, in fact, vanish forever.

The third crisis is no different in this respect. We do not know what we will lose that we cannot replace. We do not know whether what gets built will be worth it. What we do know is what the shadow looks like, because we are already looking at the early version of it: a workforce that produces fluent, articulate, plausible work it could not reproduce on its own; a generation that can perform thinking without being able to think; a culture in which the difference between conviction and consensus quietly disappears.

There is also, on the other side of this, a real possibility of something extraordinary. The architectures already being built, what engineers call retrieval-augmented generation, are an early sign that AI can be trained to think with us rather than for us, anchored in our own primary sources, our own institutional memory, our own actual reasoning. They are not the answer. They are the beginning of one possible answer. There will be others.
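For readers who want the mechanics, the anchoring step of such a system can be sketched in a few lines. This is a toy illustration, not a production pipeline: the corpus, the scoring function, and every name in it are invented for the example, and a real deployment would use vector embeddings and a language model in place of the simple word-overlap ranking here. The point it makes is only this: the model's ground is chosen, and it can be your own documents rather than the averaged internet.

```python
import re

def tokenize(text):
    """Lowercase a text and split it into a set of words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, doc):
    """Word-overlap (Jaccard) similarity between query and document,
    a stand-in for the vector similarity a real system would use."""
    q, d = tokenize(query), tokenize(doc)
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query, corpus, k=2):
    """Return the k passages from our own sources that best match the
    question, ranked by similarity."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

# Hypothetical institutional memory: documents no public model was trained on.
corpus = [
    "2019 board memo: we refused the settlement and disclosed the defect.",
    "Quarterly revenue summary for the retail division.",
    "Engineering memo: the valve failure mode was known before shipment.",
]

context = retrieve("should we settle the valve defect case", corpus)
# The retrieved passages, not the open internet, become the model's ground.
prompt = "Answer using only these sources:\n" + "\n".join(context)
```

In a full system, that final prompt would be sent to the model, so that its answer is grounded in, and auditable against, the institution's own record.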

Which version we get is not an abstraction. It is decided by the leaders who choose today to require their people to think, to anchor their AI in their own primary sources, to build the discipline that makes deep work still possible. That is the work in front of you.

Implications

For Your Organization. Decide what your institution is for. If it is for producing the kind of work any public model could produce, the model has already replaced you. You simply have not noticed yet. If it is for something genuinely your own, your real work is older than AI: stewardship of the mission, the memory, the people, and the community. AI can serve all four. It cannot decide them. The decision belongs to whoever holds the office, which means, almost always, you.

For Your People. The capacity to reason is built by use, the way a body is built by use. The young people coming up in your organization need work that requires them to think and not just to prompt. Make them write the bad first draft. Make them argue with you on a Wednesday afternoon when the meeting could have been an email. Give them the harder path, sometimes, on purpose. They will resent it, briefly. They will thank you for it, eventually.

For Your Inner Life. Notice what you have quietly stopped doing. The book you used to read in full, and now only summarize. The argument you used to write out, and now only ask for. The decision you used to sleep on and now only confirm. Each of these is a small forfeiture. Each of them, multiplied across years, is a different person at the end of life.

What to Do Now: The Thinker's Code

A small rule. Three movements. Easy to learn and hard to keep, the way the best disciplines have always been. I call it The Thinker's Code, the discipline a leader keeps when working alongside an AI. The order is the whole point.

1. Probe before you Prompt. Before reaching for the model, take ten minutes to probe the question on your own. Sit with it on a notepad. Begin with what the Greeks called phronesis, the practical wisdom of asking what good would look like for the people who will live with the answer. Then map it onto what knowledge engineers call the Known-Unknown Matrix. Some questions are known knowns, routine and data-rich, the kind any trained model handles in seconds. Others depend on tacit knowledge, the unwritten rules and lived experience that never sit cleanly in any database. The most dangerous are the unknown unknowns, where you do not yet know what you do not know. The model collapses these distinctions. Most leaders never notice. This is what we mean by double competence: domain expertise to know which territory you are in, and AI literacy to know what the model is and is not. The mind grows by wrestling, and the model offers to do the wrestling for you. The thinker, before the prompt, refuses the favor.

2. Talk before you Trust. Before you trust what the model tells you, talk to a person who has done this kind of work. A colleague at lunch. An old mentor on the phone. The retired partner who answered on the second ring because no one calls her anymore. Twenty minutes of someone else's voice. Ask in two directions: down into the specific case they still remember, up into the principle they have lived by. This is what knowledge engineers call laddering, and it is how you reach the contextual knowledge no model can hold. Then run a Wizard of Oz test: read them your draft prompt and listen for what they would do that the model would not. That difference is the model's blind spot, and only you can carry it in. The model can summarize what has been written. Only a person can tell you what they have learned by doing. The knowledge that lives between people is denser than the knowledge that lives in any database.

3. Argue before you Accept. Before you take the model's answer as your own, argue against it. One paragraph, written by hand if you can manage it. Ask where the answer came from, and require citations the way a good editor would. The model is built to be plausible before it is built to be true. You will feel the pull to accept it anyway. Psychologists call this automation bias, the human tendency to trust what an algorithm says because it sounds final, because it is formatted cleanly, because your brain has learned to defer to machines. A radiologist misses a tumor because the AI said there was none. A loan officer denies credit because the algorithm scored it that way. The confidence of the machine silences your own judgment. Notice when you are doing it. Then write the paragraph that the model has not given you, the one only you can write because you have lived this kind of question before. AI evaluation frameworks call this the authorship stage: the moment an averaged output becomes a user-validated artifact. The model produces the averaged answer. The argument is what makes the answer yours.

Probe before you Prompt. Talk before you Trust. Argue before you Accept. The Thinker's Code. Say them in the morning. Write them on the wall. Share them with the team. The easier path is the one being chosen for us. The harder path is the one we have to choose ourselves.

We Are Being Sorted

We are living through a sorting event most of us do not yet recognize as one.

The institutions that protect the inner work of thinking will, a hundred years from now, still be themselves. The ones that do not will sound like everyone else, and at some point will lose the right to be listened to at all.

We have done this before. Twice. Both times, the people who came through were neither the ones who refused the new medium nor the ones who surrendered to it. They were the ones who knew which part of being human the new medium was trying to take, and who built the small, patient practices that protected it.

That is our task. The chisel is in our hands. David is still in the marble. And the work has already begun.

References

Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Viking, 1963.

Arendt, Hannah. The Life of the Mind. New York: Harcourt, 1978 (published posthumously, unfinished).

Augustine. Confessions. Translated by Henry Chadwick. Oxford: Oxford University Press, 2008. See especially Book X on memory.

Berry, Wendell. "Why I Am Not Going to Buy a Computer." In What Are People For? Berkeley: Counterpoint, 1990.

Carruthers, Mary. The Book of Memory: A Study of Memory in Medieval Culture. 2nd ed. Cambridge: Cambridge University Press, 2008.

Eisenstein, Elizabeth L. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe. Cambridge: Cambridge University Press, 1979.

Ong, Walter J. Orality and Literacy: The Technologizing of the Word. 30th Anniversary Edition. London: Routledge, 2012.

Plato. Phaedrus. Translated by Alexander Nehamas and Paul Woodruff. Indianapolis: Hackett Publishing, 1995. See especially 274c–275b on Theuth and Thamus.

Postman, Neil. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books, 1992.

Wolf, Maryanne. Proust and the Squid: The Story and Science of the Reading Brain. New York: Harper, 2007.

Wolf, Maryanne. Reader, Come Home: The Reading Brain in a Digital World. New York: Harper, 2018.