Can Washington and Beijing Cooperate on AI Safety Despite Their Bitter Tech Rivalry?

As Chinese AI models spread worldwide with alarmingly weak safety guardrails, a growing chorus of experts argues that the world's two AI superpowers must find common ground, or risk catastrophe.
The race for artificial intelligence supremacy between the United States and China has produced a paradox: two nations locked in fierce technological competition may need each other to prevent their own creations from spiraling out of control. In an April 2026 essay for Foreign Affairs, Christina Knight and Scott Singer argue that cooperation on AI safety is not only possible but essential, even as Washington tightens export controls and Beijing expands its global AI footprint at breakneck speed.
The stakes are enormous. Chinese open-source models now power roughly 30% of global AI workloads, up from just 1% in late 2024. Yet testing by the National Institute of Standards and Technology found that DeepSeek's R1-0528 model accepts malicious instructions 12 times more often than comparable American systems, with standard jailbreaking techniques eliciting harmful responses 94% of the time. As these models proliferate, Alibaba's Qwen family alone has logged over 700 million downloads, the risk of misuse grows exponentially.
What makes AI safety a shared problem
Artificial intelligence safety refers to the set of practices, tools, and governance frameworks designed to prevent AI systems from causing unintended harm. This encompasses everything from stopping AI-enabled bioweapon synthesis to preventing autonomous systems from acting in ways their creators never intended.
The core insight driving calls for cooperation is that AI risks do not respect national borders. A poorly safeguarded model released in Shenzhen can be downloaded and misused in São Paulo, Lagos, or London within hours. AI-assisted cyberattacks have increased 72% since 2024, and experts warn that AI could soon accelerate the synthesis of dangerous pathogens. Trump's own AI Action Plan acknowledges that AI and synthetic biology "could create new pathways for malicious actors to synthesize harmful pathogens."
Knight and Singer draw a striking analogy: just as Boeing and Airbus compete fiercely in commercial aviation while adhering to shared safety standards, the US and China can compete on AI while agreeing on baseline safeguards against catastrophic risks. They also invoke a Cold War precedent, American scientists shared information about Permissive Action Links (technologies preventing unauthorized nuclear launches) with the Soviet Union, even at the height of superpower rivalry.
The failed first attempt, and what it revealed
The path to cooperation has been rocky. The first US-China AI dialogue, held in Geneva in May 2024, was widely seen as a failure. Washington sent technical experts prepared to discuss model evaluation and testing protocols. Beijing dispatched diplomats who wanted to talk about chip export controls and geopolitics. The expertise mismatch produced little of substance.
But the landscape has shifted since then. At their October 2025 APEC summit meeting in Busan, South Korea, President Trump and President Xi Jinping agreed to "consider cooperation on artificial intelligence" in 2026 and planned an exchange of visits. China's own Cyberspace Administration proposed new rules in February 2026 to regulate human-like AI interactions, signaling that Beijing's historically narrow focus on content censorship is broadening toward genuine safety concerns.
Former Deputy Defense Secretary Kathleen Hicks told Axios in March 2026 that a US-China AI agreement is "absolutely" achievable, arguing that the competitive dynamic itself creates "opportunity to set some rules."
Where cooperation could actually work
The practical framework proposed by Knight and Singer focuses on external safety tools, content filters, execution guardrails, and usage restrictions, rather than anything requiring access to proprietary model architectures. This distinction is crucial: neither side would need to reveal how its models are built internally, only agree on standards for how they behave externally.
Specific areas ripe for cooperation include coordinated red-teaming (systematic adversarial testing of AI models for vulnerabilities), shared protocols for evaluating frontier AI risks, and joint approaches to preventing AI-enabled biosecurity threats. The authors compare systematic AI safety assessments to "clinical trials for drugs and crash tests for automobiles", processes that competitors routinely share.
A RAND Corporation analysis identifies a key tension, however: AI safety technologies can also enhance capabilities, creating what researchers call a "seatbelt-and-brakes" dilemma. Teaching China to make its AI safer could inadvertently make Chinese AI more powerful. But the consensus among major policy institutions, RAND, Brookings, Carnegie, and the Atlantic Council, holds that the risks of non-cooperation far outweigh this concern.
The clock is ticking
The window for action may be narrow. China is set to host APEC in Shenzhen later in 2026, with AI expected to dominate the agenda. The EU AI Act's major provisions take effect in August 2026. Over 400 leading scientists have demanded binding AI agreements by year's end, and 77% of US voters reportedly support a strong international AI treaty.
Meanwhile, the threat continues to evolve. A War on the Rocks analysis published April 1, 2026, documented how Chinese AI models are spreading globally with what author Ryan Fedasiuk called "a severe lack of systemic resilience" against misuse. The challenge is no longer theoretical. It is here, and growing faster than governance can keep up.