Raise a child, not a god: Why AI should grow up like we do
- Anna Mae Yu Lamentillo

- Sep 26
When Alan Turing imagined artificial intelligence, he didn’t picture a finished adult mind striding out of a lab. He proposed “child machines”—modest systems that learn through experience, guidance, and a bit of luck. It’s a humane idea from the century’s sharpest logician: don’t build a genius; raise one.
We’ve tried the opposite. We mint “adult” AIs—pretrained on oceans of text, festooned with benchmark medals—and act shocked when they hallucinate, bluff, or break under pressure. The child-machine lens explains the failure and points to a fix. Children don’t absorb a static library; they learn by trying, erring, and being steered. They practice before they theorize. They grow inside communities with norms and boundaries. Our systems need the same arc.
Start with curriculum. Giant pretraining plus a sprinkle of reinforcement is cramming, not education. If we want dependable systems, we need staged syllabi: tasks that progress in difficulty, explicit competence checkpoints, and deliberate practice on failure modes. Make models show their work, state uncertainty, and treat calibration and honesty as core subjects, not electives.
Environment matters, too. Kids don’t just read; they act. Put models in tool-rich settings—calculators, search, code, simulated labs—so we judge them by what they can do, not just say. Turing’s instinct was operational: evaluate intelligence by performance, not metaphysics. A model trapped in a chat box is a student who never leaves the library.
Teaching must be real teaching, not crowdsourced thumbs. Experts need interfaces to shape habits: targeted lessons, counterexamples, commentary. Think studio critiques, not Yelp stars. The best data might be a small sequence of well-chosen problems with notes from people who actually understand the domain.
Parenting is governance. Children test boundaries; so will models. They need rules they can’t negotiate away, audits they can’t dodge, and consequences they can’t ignore: model cards, red-teams, incident reports, kill switches, rate limits, staged deployment. This isn’t about strangling innovation; it’s about making mistakes survivable.
The metaphor also resets expectations. Call AIs “children,” not “gods” or “oracles,” and we’ll stop outsourcing judgment to an authority we don’t understand. We’ll also treat error as part of growth—while keeping systems in “safe to fail” roles until they pass real exams.
Randomness belongs, but as a means. Play, exploration, and noisy search help avoid brittle habits. The adult move is to turn luck into knowledge: lock in good behavior with proofs where possible and with rigorous tests where not.
Policy follows: fund long-horizon curricula for reasoning and safety; reward failure analyses and lesson plans, not just demos; require staged licenses so capability expands only after independent exams—as we do for pilots and physicians. Maintain humility about limits; uncertainty is not panic fuel, it's a design constraint.
The god-machine myth flatters technology but diminishes us. It turns citizens into spectators. Turing offered a better path. Intelligence isn’t unveiled; it’s educated. If we want AI worthy of trust, we should stop pretending to conjure adults—and start raising them.
This opinion piece is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.