Too Big to Secure
Why China’s smaller, sovereign AI may outlast America’s hyperscale dreams
Two very different philosophies are shaping the future of artificial intelligence—and the chasm is widening fast.
Recent assessments from CFR and allied strategic research centres illuminate a deepening divergence: America's hyperscaler paradigm versus China's sovereign localisation approach. An analysis by Vinh Nguyen, CFR's senior fellow for artificial intelligence and former NSA chief responsible AI officer, deserves particular attention given his unique vantage point: he spent decades building the NSA's mission to counter Chinese cyber threats and was the youngest employee in the agency's history to be promoted to its senior executive ranks. Whilst American technology giants pursue centralised cloud infrastructure at scale, Beijing is systematically architecting jurisdiction-bounded AI systems, shaped by the CSL, DSL, and PIPL (China's Cybersecurity Law, Data Security Law, and Personal Information Protection Law) and the 2025 network-data security regulations.
The empirical evidence is sobering:
Veracode (2025): approximately 45% of AI-generated code introduces OWASP-class vulnerabilities
Microsoft threat intelligence (July 2025): over 200 AI-generated influence content incidents—more than double the July 2024 baseline
IBM: the global mean cost of a data breach stands at $4.88M (2024 report) or $4.4M (2025 report)
Anthropic (October 2025): model backdoors can be triggered with roughly 250 poisoned documents during pre-training—irrespective of parameter scale (hundreds of millions to multiple billions)
The strategic implications are profound, and Nguyen's warnings carry particular weight given his role overseeing election-security analysis and cyber campaigns across four presidential administrations. Each hyperscaler vulnerability becomes a systemic risk when AI infrastructure operates as a tightly coupled, centralised system. China's "AI in a box" paradigm (sovereign, bounded, locally controlled) appears less technological isolationism than strategic blast-radius containment: a lesson drawn from decades of US cyber dominance that Beijing now seeks to circumvent.
The fundamental question: in a fragmented digital order where trust boundaries increasingly map to political boundaries, does scale become a liability rather than an asset?
America’s model optimises for global reach and efficiency. China’s optimises for jurisdictional control and failure-domain containment. Both entail trade-offs, yet only one is explicitly designed for a world in which digital sovereignty supersedes seamless connectivity.
This represents more than a technical debate. It concerns fundamentally divergent visions of how AI power should be distributed across a fracturing geopolitical landscape.
Victory may belong not to the swiftest model, but to the most trusted architecture.
We will return to the battle royale brewing over competing models of AI dominance in a forthcoming SecDev Flashnote.
Vinh Nguyen’s full CFR analysis, “Securing Intelligence: Why AI Security Will Define Future Trust,” is essential reading for anyone seeking to understand the strategic implications of this architectural divergence: https://www.cfr.org/article/securing-intelligence-why-ai-security-will-define-future-trust
Rafal Rohozinski is the founder and CEO of SecDev Group, a senior fellow at the Centre for International Governance Innovation (CIGI), and co-chair of the Canadian AI Sovereignty and Innovation Cluster.