The Last Safe Place
Mathematics, the “Human-Authored” coping mechanism, and the Ghost in the Proof.
The Ritual of Purity
There is a new ritual I’ve noticed in AI-adjacent research, a small genuflection performed at the start of papers to ward off evil spirits. You see it in footnotes and acknowledgments:
“This work is human-authored.”
It feels like the “Non-GMO” sticker on a grocery store apple. A signal of purity. An assurance that the insights contained within were forged in the wet, biological firing of neurons, not synthesized in the cold silicon of a GPU cluster.
I gave up on that distinction years ago. Not because I think humans are obsolete, but because “authorship” has always been a bit of a performance. And this month, a paper in algebraic geometry dropped that makes it impossible to ignore just how shaky that performance has become.
The paper is arXiv:2601.07222. The authors are prestigious human mathematicians, including Ravi Vakil, the current president of the American Mathematical Society. The subject is the motivic class of maps to flag varieties. Know exactly what that means? No? Good. Neither did I. It’s not important. Genuinely.
But the insight? The specific, structural reframing that turned a messy, fragmented problem into a clean, linear-algebraic solution?
That came from Google’s Gemini.
The Wrench in the Drawer
Let’s zoom in on the math for a second, because the specifics matter to the story, even if they don’t matter on their own.
Vakil and his collaborators were studying something called a flag variety. Roughly speaking, it’s a geometric object that encodes all the ways you can nest spaces inside one another: a line inside a plane inside a three-dimensional space, and so on. They wanted to understand the behavior of maps winding through this object, summarized by an abstract invariant called a “motivic class.”
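For readers who want the nesting picture made concrete, here is the standard textbook definition (generic notation, not the paper’s):

```latex
% A complete flag in $\mathbb{C}^3$: a nested chain of subspaces,
% one of each dimension.
\[
  \{0\} \subset V_1 \subset V_2 \subset \mathbb{C}^3,
  \qquad \dim V_i = i.
\]
% The flag variety parametrizes all such chains at once. For
% $\mathbb{C}^n$ it is the quotient $GL_n(\mathbb{C})/B$, where $B$ is
% the subgroup of upper-triangular matrices stabilizing the standard
% flag spanned by the coordinate axes.
\[
  \mathrm{Fl}(\mathbb{C}^n) \;\cong\; GL_n(\mathbb{C}) / B.
\]
```

Every point of this variety is one complete way of nesting a line inside a plane inside a bigger space, which is why maps into it carry so much geometric information.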
Usually, this kind of problem is a nightmare. It’s a landscape of jagged edges and strata, full of corner cases where the geometry changes character. You can compute pieces of it locally, but getting a global picture is like trying to map a coastline by looking through a straw.
Then, asked to work through the problem, Gemini didn’t just crunch equations. It suggested a perspective shift. A structural reframing that amounted to this:
“Stop looking at the map as a geometric curve. Look at it as a linear algebra problem that evolves over time, and pin it down at a single point.”
Once you make that shift, the complexity collapses. The jagged edges disappear. The space turns out to be something disarmingly simple: a general linear group times an affine space, GL_n × 𝔸^a. It’s the mathematical equivalent of discovering that a tangled ball of yarn is really just a single, straight strand.
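To see why that identification pays off, here is the standard bookkeeping. These are textbook identities in the Grothendieck ring of varieties, not the paper’s actual computation: once a space is recognized as GL_n × 𝔸^a, its motivic class follows from known formulas.

```latex
% In the Grothendieck ring of varieties, write $\mathbb{L} = [\mathbb{A}^1]$
% for the Lefschetz class. Classes multiply over products, and the class
% of $GL_n$ is a known polynomial in $\mathbb{L}$:
\[
  [GL_n] \;=\; \prod_{i=0}^{n-1} \bigl(\mathbb{L}^n - \mathbb{L}^i\bigr),
  \qquad
  [GL_n \times \mathbb{A}^a] \;=\; \mathbb{L}^a \, [GL_n].
\]
% So identifying a complicated moduli space with $GL_n \times \mathbb{A}^a$
% turns its motivic class from an open question into a one-line formula.
```

That is the whole force of the reframing: the invariant that looked like a nightmare to assemble locally becomes a product of elementary classes.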
This is what mathematicians mean, whether they say it or not, when they talk about conceptual compression. It’s the part of mathematics we told ourselves was uniquely human. We conceded that computers could calculate, check proofs, and search for counterexamples. But the reframing, the moment where you realize you’ve been asking the wrong question all along, that was supposed to be ours.
Vakil’s reaction says otherwise:
“Gemini’s argument was no mere repackaging of existing proofs; it was the kind of insight I would have been proud to produce myself.”
The Black Box Teaching the Black Box
There is a recursive loop of ignorance forming here.
We are using a tool we don’t fully understand to solve mathematical problems we didn’t fully understand, to produce a proof we can now verify and teach, but only because the machine told us where to look.
This quietly destroys the “human-authored” vanity project. If the critical step in your proof, the moment where chaotic structure resolves into a clean signal, came from a GPU cluster, are you the author? Or are you the curator of a silicon-generated epiphany? And if you can now teach it to an undergraduate, does that distinction actually matter?
We are entering an era where understanding is becoming distinct from derivation. The authors of the paper understand the mathematics perfectly now. But they did not derive the path to that understanding alone. As Vakil himself has noted, a human might eventually have found the same reframing, but it is no longer a certainty.
We are no longer solo explorers. We are editors of a faster mind.
The Drowning of the Goalposts
If this paper had appeared in 2020, if Google or OpenAI had announced that a model had suggested a genuinely novel reparameterization that cracked a long-standing geometric problem, it would have been front-page news. It would have been treated as evidence of a singularity.
Today?
It’s a Tuesday.
It’s a footnote. It’s an interesting thread on Mathstodon.
This is how the future arrives: not with a bang, but with a slow, rising tide that washes away the goalposts. We have normalized the miraculous. We have decided that because AI sometimes hallucinates facts about pizza glue, it can’t possibly be doing “real” reasoning.
Meanwhile, it is quietly handing even the best mathematicians the keys to locked doors.
The result itself is accessible. A strong senior undergraduate could understand it. In a few years, this proof will likely be taught in dual-level algebra courses, not because it is the most important mathematics ever discovered, but because it is a near-perfect example of AI-aided insight.
It will be presented as a tool. “Look,” the professor will say, “the model suggested we treat the map as a quotient.” And the students will nod, because to them, using an AI to find a new geometric perspective will feel as ordinary as using a calculator to take a square root.
Closing: The End of the Monopoly
The “human-authored” sticker is a defense mechanism. A way to claim ownership over a process we are no longer solely driving. We are steering, yes. We are navigating. But the engine, the thing generating momentum and direction, is something else.
The arrival of machines that can suggest genuinely new reframings didn’t break mathematics. It removed the last plausible place we could hide from the fact that we are no longer the only things on this planet capable of generating insight.
We understand the math.
We are starting to realize we might be the junior partners in the lab.