Why Turing welcomed chance in discovery
- Anna Mae Yu Lamentillo

- Sep 24
Alan Turing is remembered as the man who made thinking mechanical—universal machines, halting proofs, crisp bounds on what can and can’t be computed. But read him closely and a different figure appears: a logician who kept leaving the back door ajar for luck. He didn’t worship randomness as magic. He treated it like a tool in the kit—useful whenever tidy procedures ran out of road.
That attitude began where David Hilbert’s grand dream ended. Kurt Gödel showed that no consistent formal system rich enough for arithmetic can capture all mathematical truths; Turing showed exactly why, by pinning down what a “procedure” is and proving there can be no general one to settle every question. The point wasn’t despair. It was realism. If no fixed method can carry us the whole way, progress will come from hunches, gambles, detours—what Turing called “intuition and ingenuity.” In his writings on machine intelligence, he didn’t propose a scholastic catechism for thinking; he proposed “child machines,” systems that learn by training, feedback, and, yes, chance. Let them make small random moves, see what works, keep it, repeat. It was evolution rendered domestic: not a proof of intelligence but a practice for getting it.
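That “small random moves, keep what works” recipe is, in modern terms, stochastic hill climbing. A toy sketch in Python (the bit-string task, the fitness function, and every parameter here are invented for illustration, not drawn from Turing’s papers):

```python
import random

def child_machine(fitness, genome, steps=2000, rng=None):
    """Stochastic hill climbing: flip a random bit, keep the change
    only if the score does not drop."""
    rng = rng or random.Random(0)
    best = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(len(genome))
        candidate = genome[:i] + [1 - genome[i]] + genome[i + 1:]
        score = fitness(candidate)
        if score >= best:              # keep what works, discard the rest
            genome, best = candidate, score
    return genome, best

# Invented toy task: rediscover a hidden 32-bit pattern by feedback alone.
target = [random.Random(42).randrange(2) for _ in range(32)]
solved, score = child_machine(lambda g: sum(a == b for a, b in zip(g, target)),
                              [0] * 32)
```

The point is the shape of the loop: chance proposes, selection disposes, and no step requires knowing the answer in advance.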
Randomness, in Turing’s world, wasn’t noise to be banished so the signal could shine. It was the spark that got things moving when symmetry and stalemate held everything in place. Anyone who’s ever stared at a proof for hours and then tried something silly—flip the base case, relabel the graph, perturb a parameter—and suddenly found a path forward knows the feeling. Determinism is great when you know the hill you’re climbing. When you don’t, a little roll of the dice beats a noble march into a dead end.
His biology paper on morphogenesis carries the same spirit. The equations are clean, but the patterns—stripes, spots, spirals—depend on tiny nudges: a smidge more of a chemical here, a boundary slightly off there. Those “imperfections” don’t ruin the result; they choose it. This is an honest view of how order arises in the world: lawful dynamics plus small seeds of randomness, amplified into the forms we see. Turing didn’t pretend noise was beneath science. He showed it is often the partner that gets science off the couch.
Decades later, computer science gave us a precise language for why Turing’s hunch about chance was so sane. Algorithmic randomness says a string is random if you can’t compress it—if no program shorter than the string itself spits it out; that minimal description length is its Kolmogorov complexity. An infinite sequence is random if it passes every effectively checkable statistical test we can dream up, the idea behind Martin-Löf randomness. The punchline is deliciously Turingesque: determining perfect randomness is, in general, uncomputable. You can’t build a machine that infallibly separates “true novelty” from “pattern you haven’t spotted yet.” You can test a lot; you can often be right; you cannot be right about everything.
This turns what looks like an aesthetic preference—Turing’s comfort with chance—into something sturdier. When our methods are boxed in by undecidability, randomness isn’t a shrug; it’s a strategy. If you can’t guarantee the right leap by rule, inject variety and see where reality bites. That is as true in machine learning as in mathematics. Random restarts save you from the local maxima your tidy plan can’t escape. Dropout and noise keep models from memorizing the quirks of your dataset. Even in day-to-day research, the “unserious” move—a randomized construction, a scrambled basis, a deliberately perverse example—often reveals a structure your sober plan politely ignored.
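Random restarts are easy to see in miniature. A sketch with an invented bumpy objective (the function, step size, and restart count are all illustrative, not from any particular method):

```python
import math
import random

def hill_climb(f, x, step=0.01, iters=500):
    """Greedy ascent: accept a neighboring point only if f improves."""
    for _ in range(iters):
        for nxt in (x - step, x + step):
            if f(nxt) > f(x):
                x = nxt
    return x

def with_restarts(f, restarts, rng):
    """Run the same greedy climber from many random starting points
    and keep whichever finish is highest."""
    return max((hill_climb(f, rng.uniform(0.0, 10.0)) for _ in range(restarts)),
               key=f)

# Invented objective: several local maxima, one clearly best region near x = 7.
bumpy = lambda x: math.sin(3 * x) - 0.1 * (x - 7) ** 2

single = hill_climb(bumpy, 0.5)                     # stuck on the nearest bump
best = with_restarts(bumpy, restarts=20, rng=random.Random(1))
```

The deterministic climber does everything right and still loses; a handful of random starting points beats it without any extra cleverness.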
There’s another side to the story, though, and Turing would insist on it. Welcoming chance isn’t a license to go fuzzy. His career is a sermon on rigor. The right posture is not “randomness over proof,” but “randomness until proof.” Use stochastic exploration to generate candidates; when something promising appears—an empirical pattern, a conjecture, a learning rule that seems to generalize—slam it with the hardest tests you have. If you can prove it, prove it. If you can’t, be explicit about how you tried to break it and what probability you assign to being fooled. The adult version of fallibilism isn’t “we can’t know”; it’s “we know how and where we might be wrong.”
That stance has civic implications beyond math and AI. We’re building systems that affect credit, policing, medicine, education. The old fantasy—that a sufficiently clever algorithm can deliver certain, neutral truth—was always a fantasy. Turing warned us, before we had smartphones, that there are hard limits on mechanical certainty. The responsible response is not paralysis but transparency: report the randomness you used, the tests you ran, the ways your result changes with a different seed or sample. Make your luck legible.
If Turing were writing today, I suspect he’d roll his eyes at handwringing over whether a chatbot “really understands” and instead propose a new imitation game full of adversarial trials, tool use, and randomized probes—anything to get past metaphysics and toward measurable performance. He’d cheer on learning systems that embrace noise, but he’d also reach for the sharp knives of algorithmic information theory whenever people claimed too much based on glossy demos.
In the end, his lesson is disarmingly practical. The world does not hand us a recipe for truth. Our smartest moves are often the ones that let surprise in. Randomness is not the opposite of reason; it’s the rough edge that lets reason catch on unfamiliar surfaces. Use it to explore. Use proof to secure what it finds. And never forget the boundary Turing drew: some certainties are out of reach, but progress belongs to those who keep moving anyway.
This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.




