Why states must give young people the capacity to research AI


If artificial intelligence now shapes how we learn, diagnose, farm, build, and govern, then the capacity to understand and improve it cannot be confined to a handful of well-funded laboratories or private platforms. It must be a public capability, taught and practiced by students across the higher education system—and, increasingly, in advanced secondary programs. The state’s role is not merely to regulate the outputs of AI but to ensure that the next generation can study, test, and remake the technology itself. That requires a simple but radical commitment: give students real access to the tools of AI research—compute, data, mentorship, and open evaluation environments—under public rules that protect rights and widen participation.


Three arguments make this obligation essential rather than optional. The first is democratic competence. AI systems are no longer curiosities on the edge of public life; they now mediate hiring, credit, welfare, education, and security. A polity can only govern what a critical mass of its citizens can interrogate. UNESCO’s first global guidance on generative AI in education framed the task plainly: education systems must build human capacity to use, critique, and co-create AI, not just consume it. Turning that principle into practice means enabling student researchers—not only faculty or industry—to probe model behavior, measure bias, and study failure modes on meaningful problems with meaningful resources.


The second is economic and scientific dynamism. States that treat compute and data as shared research infrastructure are already lowering the cost of curiosity. In 2024, the United States launched the National AI Research Resource (NAIRR) pilot to broaden access to the compute, datasets, models, and training that non-industry researchers and educators—students included—need to do serious work. The stated motivation was blunt: many researchers and educators lack the critical AI tools required to investigate fundamental questions and to train the next generation. When access is widened, ideas move from classroom to prototype, from prototype to publication, and from publication to start-up.


The third is strategic resilience. Concentration of advanced compute and training pipelines in a few firms and geographies creates dependencies that are unhealthy for science and sovereignty alike. Europe’s response has been to make supercomputing a shared public utility through the EuroHPC Joint Undertaking, which runs open calls so academics, public agencies, and companies can compete for time on some of the world’s fastest machines. This is less about prestige hardware than about cultivating a research commons where students learn by doing—on the same class of systems that power state-of-the-art results—while subject to public accountability.


Skeptics sometimes argue that students can learn enough with small models on laptops, and that the frontier should remain the responsibility of large labs. This view mistakes an introduction for an education. The Stanford AI Index has documented the rapid escalation in the cost and computational scale of training state-of-the-art systems; if students never touch modern toolchains or evaluate contemporary models at realistic scales, their learning will lag the science they are supposed to steward. A healthy pipeline mixes both: frugal methods and theory on modest hardware, and capstone opportunities that expose students—under supervision—to industrial-grade frameworks, datasets, and evaluation standards.


It is instructive that countries seeking to accelerate their AI ecosystems are designing programs with students in mind. In March 2024, India approved its IndiaAI Mission, which explicitly finances a public AI compute infrastructure of “10,000 or more GPUs” via public-private partnership, alongside datasets and capacity building that reach universities beyond the elite tier. The policy logic is straightforward: without affordable access, talent concentrates where resources already are; with access, talent and ideas surface where they are needed. Singapore’s National AI Strategy 2.0 takes a similar view, pairing investments in compute and data with talent pathways that bring learners into research and deployment early. These are not rhetorical gestures; they are fiscal and institutional bets that widen the circle of those who can build and critique AI.


None of this diminishes the need for guardrails; it heightens it. When states underwrite student access to models and compute, they must also require privacy-by-design practices, auditable logs, and assessment literacy. Here again, public guidance already exists. UNESCO urges countries to pair access with professional learning for educators and with clear policies on data protection and academic integrity. The lesson is to couple capacity with conscience: students should be trained to document data provenance, to publish model cards and evaluation reports, and to treat safety analysis as a first-class research output rather than an afterthought.


What, concretely, should governments do? They should stand up a shared national compute layer that allocates time to student teams through competitive, mentored calls; negotiate cloud credits and model licenses that universities can pool; curate sectoral data commons with clear licensing so students can work on real public problems; and fund open evaluations so that replication work counts as real scholarship. Crucially, access must reach institutions outside major capitals and research flagships. The purpose is not to chase prestige by training the largest models, but to democratize the capacity to ask and answer the right questions—about transparency, fairness, efficiency, reliability, and local relevance.


The stakes are larger than “jobs of the future.” They concern the terms on which societies will know themselves. If AI remains a black box operated elsewhere, students will learn to accept or fear it. If, instead, the state helps them open the box—ethically, rigorously, and at scale—they will learn to improve it. That is the difference between a generation that imitates and a generation that invents. It is also the difference between governing AI and being governed by it.


This opinion column is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and redistribute this content, provided appropriate credit is given to the author and original source.
