Who Controls the AI Future?
The race between AI sovereignty and digital feudalism will determine which countries remain masters of their own destiny
In January 2010, a group of senior American military officers gathered at the U.S. Army War College in Carlisle Barracks, Pennsylvania, for a workshop titled “War is War: Cyberspace Operations and the War Fighter.” The collaboration between the US Army War College’s Center for Strategic Leadership and the SecDev Group explored the influence of information technology on military planning capabilities.
As part of the exercise, the teams were asked to recreate the logistics for D-Day, the greatest amphibious invasion in history, using the same staffing levels available in 2010 but without any digital technology. No computers, no spreadsheets, no automated systems, just the tools available to Dwight Eisenhower and his staff in 1944.
The result was sobering. They couldn’t do it. Despite their sophisticated training and decades of experience, modern military planners lacked the institutional knowledge, manual calculation skills, and analog coordination abilities that enabled their 1944 counterparts to orchestrate the liberation of Europe with slide rules and paper maps. Entire skill sets had atrophied, replaced by dependence on digital automation.
Modern military planners could not recreate D-Day logistics without computers as entire skill sets had atrophied
This exercise, conducted in 2010 as artificial intelligence was still in its deep infancy, now reads like prophecy. As AI becomes the defining technology of the 21st century, racing toward ubiquity at breakneck speed, the lesson from that war college classroom becomes increasingly urgent: when critical capabilities migrate to machines, losing access to those machines can mean losing the capabilities entirely.
The numbers tell the story of AI’s meteoric rise. Global adoption has surged to 378m users in 2025, a 64m jump from the previous year and the largest increase ever recorded. This represents more than triple the 116m users from just five years ago. Enterprise adoption has been even more dramatic, accelerating from 20% of companies in 2017 to 78% in 2024, with the market forecast to grow at 35.9% annually through 2030, potentially reaching $1.81 trillion by decade’s end.
This growth has unleashed what might be called the “great rewiring” of the global economy. Nowhere is this more evident than in software development, the nervous system of modern commerce. AI now generates 41% of all code, with 256 billion lines written in 2024 alone. Some 82% of developers now rely on AI tools to help write code, while 68% turn to these systems when problem-solving. In essence, the machines are increasingly writing the instructions for other machines.
The implications stretch far beyond Silicon Valley. Software development underpins virtually every modern industry, from banking to healthcare to logistics. When Microsoft’s chief executive notes that AI writes “up to 30% of code” in certain divisions, or when companies like Chegg and Duolingo explicitly cite AI competition as a reason for layoffs, it signals a broader transformation. The IT sector’s unemployment rate jumped from 3.9% to 5.7% in a single month in early 2025, suggesting that the disruption is already underway.
From one in five companies to three in four: AI adoption in seven years
Yet this is merely the beginning. As one startup founder recently discovered, working with AI coding assistants allows a single experienced engineer to accomplish what previously required a three-person team. Scale this productivity leap across entire industries, and the structural implications become staggering. Consulting firms, long bastions of human expertise, find themselves vulnerable as clients realize they can operationalize knowledge directly rather than pay premium rates for analysis that AI can increasingly provide.
The new digital feudalism
This transformation might be manageable if AI capabilities were widely distributed. Instead, they are concentrating in the hands of remarkably few actors. AI development is increasingly controlled by what researchers describe as a US-China duopoly, relegating most other nations to middle-power status in the AI ecosystem.
American companies benefit from the global reach of English and vast datasets scraped from the internet. Chinese firms draw on a domestic population of 1.4 billion users generating unprecedented volumes of Chinese-language data. These structural advantages create what economists call “network effects”: the more data and users a company has, the better its AI becomes, attracting even more users and data in a virtuous cycle that’s nearly impossible for competitors to break.
The concentration extends beyond data to infrastructure. The AI supply chain has become more concentrated than nuclear materials production, with many critical steps controlled by just one to three companies worldwide, compared to 6-59 manufacturers for dual-use goods under international nuclear oversight. When a handful of firms control the chips, algorithms, and computing power that drive AI, they essentially control the digital infrastructure upon which modern economies depend.
A handful of firms now control the cognitive infrastructure of civilization
This is not merely market dominance in the traditional sense. Unlike previous technological revolutions, AI systems increasingly make decisions autonomously, setting prices, approving loans, diagnosing diseases, and even writing the news. When such power concentrates in a few hands (whether corporate boardrooms in Silicon Valley or government offices in Beijing), it creates what might be termed “digital feudalism,” where most of humanity becomes dependent on algorithmic overlords.
The $10 trillion prize
This concentration creates an irresistible target for those who would exploit it. Global cybercrime already costs an estimated $10.5 trillion annually, representing one of the world’s largest economic sectors. As organizations become more dependent on AI systems for core functions, these systems naturally become high-value targets for nation-state actors seeking strategic advantage, cybercriminal organizations pursuing financial gain, and adversarial actors aiming to influence or control AI-dependent organizations.
The attack surface is vast and growing. Unlike traditional cyber threats that target specific systems, AI manipulation can be subtle and persistent. A compromised AI system might gradually bias its outputs, steering decisions in ways that benefit attackers while remaining undetected for months or years. When a single AI system influences millions of decisions daily (from stock trades to medical diagnoses to news recommendations), the potential for manipulation becomes almost limitless.
Consider a hypothetical scenario: a sophisticated adversary compromises the AI systems used by a major cloud provider. Rather than destroying data or demanding ransom, they subtly alter the algorithms to bias financial markets, skew medical diagnoses, or influence political sentiment. The effects would ripple through thousands of dependent organizations, potentially affecting millions of decisions before detection. This is not science fiction; it is the logical evolution of current trends in both AI adoption and cyber warfare.
The alternative path: small and specialized
Yet the future need not be entirely dominated by a few massive AI systems. A counter-revolution is quietly underway, led not by tech giants but by pragmatic enterprises seeking alternatives to "digital feudalism." Small language models (SLMs), compact AI systems with fewer than 10 billion parameters that are trained for specific domains, are proving that bigger is not always better.
Recent research reveals a striking pattern: specialized, domain-specific AI models often significantly outperform general frontier models in specific applications, achieving 95% accuracy compared to 70-80% for their larger cousins. A compact model trained exclusively on medical data, for instance, may outperform a general-purpose giant on healthcare tasks while consuming a fraction of the energy and computational resources.
Microsoft’s Phi-3 Mini, with just 3.8 billion parameters, rivals models ten times larger on reasoning tasks. IBM’s Granite models cost between three and 23 times less than large frontier models while matching or outperforming similarly sized competitors. These aren’t academic curiosities: global sports institutions use specialized models tuned with their own data to enhance fan experiences, while IBM deploys them internally to power human resources platforms.
The economics are compelling. SLMs can reduce energy usage by 90% compared to frontier models while delivering faster response times and requiring far less specialized hardware. Perhaps more importantly, they can be fine-tuned and deployed locally, reducing dependence on external cloud providers and enabling organizations to maintain control over their AI capabilities.
A fleeting window of opportunity
Yet the current accessibility of small language model development represents what may be a fleeting historical moment. Today’s ability to build capable, specialized AI systems stems from a confluence of factors that may not persist indefinitely. Research shows that companies release AI technologies as open source for diverse strategic reasons: building developer ecosystems, attracting top talent, establishing technical standards, and fostering innovation networks that ultimately benefit their broader business objectives.
Meta’s release of LLaMA models, despite licensing restrictions, democratized access to high-quality foundation technologies. Google’s TensorFlow and Meta’s PyTorch frameworks became industry standards precisely because their open availability created vast communities of trained developers. Microsoft’s various AI tool releases reflect similar ecosystem-building strategies. These corporate decisions, driven by competitive dynamics rather than altruism, have inadvertently created unprecedented access to the building blocks of AI development.
With the exception of computational capital, the ability to build capable domain-specific small language models now lies within reach of anyone with the intellectual wherewithal, innovative ideas, and creativity to pursue them. A university research team, a government agency, or a well-funded startup can access foundational technologies that would have been impossible to replicate just years ago.
This represents an extraordinary opportunity, but likely a temporary one. As the AI industry matures and competitive advantages become clearer, the current openness may contract. Future regulatory frameworks could impose restrictions that inadvertently favor large players, or companies may retreat from open strategies as they better understand their intellectual property value. The question facing potential AI developers today is whether they will seize this moment of relative accessibility while it lasts.
Reclaiming sovereignty
This technological shift opens a path toward what might be called “AI sovereignty”: not the nationalist fantasy of building a domestic ChatGPT competitor, but the pragmatic goal of maintaining critical capabilities even when access to foreign-controlled systems becomes constrained.
The choice is not between nationalism and dependency, but between sovereignty and vassalage
For most middle powers (countries that possess technological sophistication but lack the resources of the US or China), this represents both opportunity and necessity. Rather than attempting to compete across all AI domains, these nations can focus on specialized capabilities in sectors of strategic importance. A country might develop world-class AI for agriculture, financial services, or manufacturing while relying on foreign systems for less critical applications.
The approach requires what researchers call “heterogeneous architectures”: systems that use small, local models for routine tasks while reserving large, cloud-based models for exceptional cases requiring broad expertise. Think of it as AI autarky for critical functions, with global integration for convenience features.
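The routing logic behind such a heterogeneous architecture can be sketched in a few lines. The Python below is an illustrative toy, not any vendor's API: the model functions, self-reported confidence scores, and escalation threshold are all hypothetical stand-ins for whatever a real deployment would use.

```python
# Toy sketch of a heterogeneous architecture: route routine queries to a
# local small model, escalate exceptional cases to a cloud frontier model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    confidence: float  # 0.0-1.0, self-reported by the model (hypothetical)
    source: str

def route(query: str,
          local_model: Callable[[str], Answer],
          cloud_model: Callable[[str], Answer],
          threshold: float = 0.8) -> Answer:
    """Prefer the sovereign local model; escalate only when it is unsure."""
    local = local_model(query)
    if local.confidence >= threshold:
        return local            # routine case: stays on-premises
    return cloud_model(query)   # exceptional case: broad expertise needed

# Stand-in models for demonstration only
def tiny_domain_model(q: str) -> Answer:
    in_domain = "crop" in q.lower()  # pretend this SLM knows agriculture
    return Answer("local answer", 0.95 if in_domain else 0.3, "local-slm")

def frontier_model(q: str) -> Answer:
    return Answer("cloud answer", 0.9, "cloud-llm")

print(route("Best crop rotation for clay soil?",
            tiny_domain_model, frontier_model).source)   # local-slm
print(route("Summarize global shipping trends",
            tiny_domain_model, frontier_model).source)   # cloud-llm
```

The design choice worth noting is that sovereignty is the default path: the external dependency is only exercised when the local model signals it is out of its depth, which keeps critical functions operable even if cloud access is cut off.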
Singapore offers an instructive example. Rather than trying to match American or Chinese AI giants, it has positioned itself as a neutral hub where both ecosystems operate, creating a unique coordination layer where frontier technologies meet real-world applications without triggering geopolitical tensions. While not every country can replicate Singapore’s specific advantages, the broader lesson is clear: middle powers can secure roles of consequence by focusing on governance, regulation, applied innovation, and specialized research rather than raw computational power.
The choice ahead
The current trajectory toward AI hyper-concentration is not inevitable, but it is accelerating. As AI systems become more capable and pervasive, the window for developing alternative approaches is narrowing. The choice facing governments, enterprises, and societies is not between nationalism and globalization, but between proactive development of sovereign capabilities and passive acceptance of technological dependence.
This third wave of the information revolution (based on knowledge processing rather than mere data transmission) confronts humanity with questions that have no easy answers. The speculative risks of artificial general intelligence, while worthy of attention, pale beside the immediate challenge of preventing a few companies from controlling the cognitive infrastructure of civilization.
In an age when algorithms determine opportunity, who controls AI controls the future
The path forward requires neither technophobic resistance nor uncritical embrace, but rather the kind of strategic thinking that enabled previous generations to harness electricity, nuclear power, and the internet for human benefit while managing their risks. The stakes could not be higher: in an age when algorithms increasingly determine economic opportunity, social mobility, and even democratic outcomes, the question of who controls AI is fundamentally a question of who controls the future.
The D-Day planners of 1944 succeeded because they possessed both the tools they needed and the knowledge to use them effectively. Their modern counterparts failed because they had become dependent on systems they neither controlled nor fully understood. As AI reshapes the global economy at unprecedented speed, the lesson for nations, companies, and individuals is clear: maintain the capability to think and act independently, or risk losing the ability to do either.
Rafal Rohozinski is the founder and CEO of the SecDev Group, a senior fellow at the Centre for International Governance Innovation (CIGI), and co-chair of the Canadian AI Sovereignty and Innovation Cluster.
The Risk Ahead: A New Intelligence Series
SecDev’s geopolitical risk practice builds on three decades of fieldwork across 120+ countries, industrialized into on-demand strategic advantage. The era of treating geopolitical risk as an externality is over. Supply chains now span hostile borders, critical technologies depend on adversarial states, and market access hinges on diplomatic whims. What was once the domain of foreign ministries has become every CEO’s problem. SecDev’s Intelligence as a Service delivers tiered, contract-free engagement: from real-time assessments that move faster than markets to deep-dive analysis that uncovers the networks and contacts buried in geopolitical complexity. The question isn’t whether geopolitical shocks will hit your business; it’s whether you’ll see them coming.