In the decade since the neural-network version of Google Translate and the invention of the transformer architecture, we have experienced the most rapid technological breakthroughs since at least World War II, and possibly ever. Artificial intelligence has proven to be a satisfactory substitute for the labor of translators and illustrators and looks soon to be an adequate substitute for call-center workers, computer programmers, paralegals, reference librarians, and radiologists. The best within all these professions can still exceed artificial intelligence, and indeed may find it a complement rather than a substitute for their labor, but the typical member of such occupations should expect artificial intelligence to behave as competition. As capital continues to invest enormous sums in training artificial intelligence, it will provide a usually adequate and often superior substitute for the human capital of more and more workers.

That improving AI is a substitute for labor is widely appreciated; indeed, megalomaniacal mad-scientist visions of mass white-collar unemployment are part of the pitch decks shown to investors. What is less widely appreciated is that even if the technology stopped improving tomorrow, it would still be an increasingly good substitute for human capital. This is because it is already capable of giving human beings, and especially young people, the choice to idle in stupidity and ignorance.

People use consumer-facing artificial intelligence for all sorts of things. Every few days one learns about some horrific application like communing with the dead, cultivating psychosis, or substituting sycophantic waifubots and simpchats for the frustrations of romance with human beings. But the real killer app for LLMs is cheating on homework.

A graph of OpenRouter data that went viral this August shows that tokens processed by OpenAI dropped precipitously with the end of the school year. I checked the current data on the OpenRouter dashboard, and tokens skyrocketed again with the beginning of this school year. OpenAI’s own analysis does not break out queries by date, but it shows that 40 percent of all queries are for what the researchers call “doing” tasks. These tasks are what would traditionally be considered work, mostly writing or editing prose and summarizing or translating texts.

Like a Great Oxygenation Event for the intellect, the first application of a technology that mirrors human intelligence is to undermine the cultivation of the real thing. John Henry beat the steam-powered drill, albeit at the expense of a heart attack, but the machine would have won by forfeit if years earlier it had done John Henry’s labor and exercise for him and so he never developed the strapping physique necessary to be a steel-driving man in the first place. And the temptation is ever present. Google Docs, Microsoft Outlook, Microsoft Word, and many other apps or browser windows into which one regularly types more than a hundred characters of text have user interface features that let AI do the writing. At every turn, the baby god actively solicits opportunities to do your work for you, no matter how important it is for developing your capacities or maintaining your integrity. The user interface asks what dost thou want? Wouldst thou like thy term paper done? A bit of homework? Wouldst thou like to live LLMiciously?

In my conversations with deans responsible for academic discipline, they have told me they enjoyed only a brief return to normal caseloads after clearing the backlog of misconduct cases from Covid-era remote instruction before reaching a new normal of nonstop misconduct cases related to LLM usage. The problem is difficult because whereas cloud services like TurnItIn make it relatively trivial to detect and conclusively establish traditional ctrl-C/ctrl-V plagiarism, one can usually only strongly suspect, not conclusively prove, unauthorized LLM usage. TurnItIn has created an AI detector, but many universities decline to use it because it has a low but nontrivial false positive rate: If the true rate of misconduct is 5 percent and the AI detector has a false positive rate of 5 percent and a false negative rate of 5 percent, then half of all cases flagged by the detector will be innocent.
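For readers who want the base-rate arithmetic spelled out, here is a minimal sketch in Python; the 5 percent figures are the hypothetical rates from the example above, not measured properties of TurnItIn’s detector or any real tool.

```python
# Base-rate arithmetic for an AI detector, using the hypothetical
# rates from the example above (not measured rates for any real tool).
prevalence = 0.05            # true rate of misconduct
false_positive_rate = 0.05   # honest work wrongly flagged
false_negative_rate = 0.05   # LLM-written work that slips through

# A submission is flagged if it is guilty and caught,
# or innocent and falsely flagged.
flagged_guilty = prevalence * (1 - false_negative_rate)      # 0.0475
flagged_innocent = (1 - prevalence) * false_positive_rate    # 0.0475

share_innocent = flagged_innocent / (flagged_guilty + flagged_innocent)
print(f"Share of flagged cases that are innocent: {share_innocent:.0%}")  # 50%
```

The punchline is a base-rate effect: when misconduct is rare, even a detector that errs only 5 percent of the time flags nearly as many innocent students as cheaters.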

The obvious solution is to base grades on in-class assignments, but this has both intellectual and practical problems. The intellectual problem is that we sometimes want sustained engagement with a project, as with a term paper. The practical problem is that the marginal demand for education is in online education, where it is impossible to proctor to the same level that one can with blue books in a lecture hall.

One sometimes hears that instead of waging the impossible fight of getting kids not to use AI, we should teach them how to use it. There is a logic to this. When a technology becomes more available, wages go up for those whose human capital complements the technology. But this raises the question of what sort of human capital complements, rather than substitutes for, artificial intelligence, and the corollary question of whether such human capital is best cultivated through use of artificial intelligence or abstention from it. The usual assumption is that the most valuable skill one can acquire is prompt engineering. This is indeed an important skill to have, but I am skeptical that one learns to interact well with an AI by off-loading reading and writing tasks to it during one’s education.

My experience when I have caught university students making unauthorized use of AI is that the cheaters are too ignorant and lazy to know what good output would look like. Sometimes the errors are very obvious, as when two of my students turned in memos that summarized not the assigned reading but a different text with a similar title. Knowing what good output looks like requires skills and knowledge that can only be acquired the old-fashioned way, by doing one’s own work. And I am talking about students at a selective university a few years into the AI boom. How much worse must it be at a junior high chosen at random? And how much worse will it be when students who used AI for their entire time in junior high and high school age first into college and then into the labor force?

Personally, I find that judicious and careful use of AI helps me at work, but that is because I completed my education decades ago and have been actively studying ever since. Thirty years in higher education gives me the skills to complement an LLM rather than have it substitute for them. Decades of working without an LLM as a substitute for my labor mean I know how to write, how to read, and how to code, so I can have an LLM aid me in all three. Most important of all, my accumulated knowledge gives me inspiration for new research questions and techniques. I can then ask the AI very focused questions about whether anyone has previously approached things in a similar way and have it provide citations that I can then read for myself. As the economy adapts to AI, those of us who can take a complementary approach to LLMs will be more productive than those who know nothing but how to ask “@grok, is this true?” The problem is that developing the skills needed to interact with AI in a complementary fashion depends on not relying on it for one’s education.

As a friend who works in AI told me, AI heightens the contradictions. It is a boon to those with the motivation and background to cultivate knowledge, but it spells total destruction for the system of universal education and credentialing. My worry is that we may run out of people with the motivation and background to learn, know, and do. In the future, Gen X and millennial knowledge workers will be the human capital equivalent of pre-war steel. Just as particle detectors need steel forged before atmospheric nuclear testing gave all newly forged steel unacceptable levels of background radiation, we will discover that even if your job mostly consists of interacting with LLMs, doing so well requires people who remember what it was like to read and interpret a document, or to contrast two ideas, without asking an LLM to do it for them.

As AI might ask: Would you like me to expand on the theme of what happens to social stability when the relationship between social classes changes rapidly and the young find their labor superfluous to the needs of capital?