Export Controls Are AI Safety
But know what they aren't and how to talk about them
This post is in response to Zilan Qian’s “Why We Shouldn’t Call Export Controls ‘AI Safety.’” I disagree. Export controls are AI safety; people just don’t know how to talk about them. If you work on AI policy, and especially if you work on export controls or compute governance, this post is intended for you.
The “Not Safety” Argument
Zilan's central claim is that export controls are not AI safety, and that Americans' conflation of the two hinders the cause of AI safety in China and abroad. By conflating them, we Americans commit many faux pas when communicating with Chinese counterparts and shoot ourselves in the foot when trying to engage on other safety issues.
I think Zilan correctly identifies some of the adverse symptoms, but she misdiagnoses the root of the problem. Export controls are AI safety; the symptoms she describes result from researchers misunderstanding export controls and from the mixed demographics of their supporters.
The Folly of AI Safety
In her post, Zilan shares anecdotes of American researchers presenting their work on export controls to Chinese nationals, or simply assuming that Chinese nationals support the controls. I think this is folly on the part of the American researchers, likely predicated on a misunderstanding of the nature of export controls.
The folly of the American researcher stems from a flawed chain of logic:
AI safety aims at reducing existential risks, and everyone, regardless of nationality, is interested in that.
Existential risk increases from the proliferation of AI, and export controls reduce that proliferation.
Thus, export controls reduce existential risk. Thus, everyone should support export controls.
The mistake researchers make here is failing to account for who gets to hold the keys once proliferation is reduced. Should it be just the U.S.? The U.S. and China, or only Western democracies? Differing answers to these questions are the source of tension between researchers aiming at the same goal of "AI safety."
American researchers, especially those focused solely on AI without regard for geopolitics, can be pardoned for this mental mistake, because export controls are the one overtly political instrument of AI safety. When the U.S. imposes export controls, even though they do promote global AI safety, we create a world of winners and losers: those with chips and those without. People can agree that it is good for fewer hands to control the levers of powerful AI, but they cannot agree on whose hands those should be.
American researchers are often unaware of their own biases and assume that the U.S., or the U.S. plus other democratic nations, should be the controllers. This is far from obvious to others, including Chinese researchers. We often don't realize, as Zilan puts it, that "your butt decides your head." None of this detracts from the safety value of the controls, but it does mean we should not assume that researchers abroad will be on board with our agenda.
The further slide from seeing export controls as "AI safety, but political" to "purely political and national security" is easy to make given the diverse motives of export control supporters. Besides safety-focused AI researchers, China hawks and protectionists support the policy for their own reasons. The former care solely about issues like existential risk. The latter care because China is an enemy or an unethical state, and the U.S. should protect its business and security interests against it. I, among many others, find myself in the middle: export controls do reduce existential risk, and they hopefully also hinder the use of AI for human rights abuses, mass surveillance, and the protection of authoritarian regimes.
So when Zilan asks, "Why would people so naturally think a person from China will help with export controls-related research," my response is that we absolutely should not expect such help. She is right. But that doesn't mean export controls fail to reduce existential risk. I also believe the definition of "AI safety" should encompass more than technology-originated risks like existential risk; it should include user-originated risks like the human rights abuses mentioned above.
We Can Control and Engage… sort of
The argument against calling export controls AI safety also depends heavily on the idea that imposing them thwarts other methods of AI safety. By containing China and withholding our chips, the U.S. creates antagonism, and an antagonized China will be less willing to adopt the AI safety practices extolled by the U.S. or the West. After all, why would you take health advice from the person who just stabbed you in the chest?
I think this argument is only partly true. Although containment hampers the goals of engagement in some ways, the two are not mutually exclusive. When I work on export controls, I sincerely hope for the success of my colleague working on engagement. It helps, first, that the two efforts usually have different faces: while institutions like the U.S. government or RAND work on export controls, others like METR or Concordia lead the engagement. Even if Chinese counterparts don't believe I "mean well," bonds of engagement can still grow with organizations that have nothing to do with export controls.
Even while export controls have been in effect, some engagement efforts on issues like safety frameworks have succeeded. And while engagement from Western institutions on best practices or new research is helpful, it is also condescending to think that Chinese companies cannot take measures toward AI safety on their own. With companies like DeepSeek run by "AGI-pilled" CEOs, government institutions concerned with AI safety, and all of humanity's general aversion to existential catastrophe, the idea that China would adopt no safety measures of its own without Western help is ridiculous.
Ultimately, I find the combination of containment with continued (though somewhat hindered) engagement to be the best bet for AI safety. This is better than containment alone, which in turn is better than engagement alone.
This conviction may be easier to accept by analogy to nuclear proliferation. A world where we prevent other countries from obtaining nuclear weapons while also engaging on safety issues like nuclear power and waste disposal is optimal. Prevention alone is worse than that, but still better than imposing no concrete restrictions on proliferation and opting only for soft engagement and diplomacy.
This does not mean the current state of export controls is perfect. I agree with Zilan that export controls aimed at preventing Chinese development of frontier models unnecessarily hamper engagement. The safety case for limiting Chinese frontier models is not immediately clear to me, given that current frontier models seem well below the threshold for "dangerous."
To promote healthy competition among international models, the U.S. could restrict all sales of chips while allowing Chinese companies broad access to cloud compute. Some restrictions would remain in place to monitor for dangerous workloads, and such a policy would likely be taken as condescending by the Chinese state or Chinese companies; but it would still be better than the status quo. It could allow greater Chinese technological development while still limiting who controls the levers of compute.
Advice for Researchers
Zilan ends her piece with advice for AI researchers seeking to engage with China, all of which is fantastic. I’ll add some more here, particularly for those who work on export controls:
Export controls are political. Don't assume that everyone, especially non-Americans, will share your view of them. These policies actively create winners and losers in the international system, even when they serve global AI safety.
You can be an expert on compute specifications and chips without being an expert on China. If you want to craft chip export controls against China, try to know both. Go live in China if you can, for as long as you can, speaking with people both inside and outside the AI space.
We all have biases, and we are typically awful at recognizing them. Take on others' perspectives as best you can, and when speaking with Chinese nationals, prefer to listen rather than talk. The media we consume and the circles we run in reinforce our Western bias so thoroughly that we often don't realize it's there. The "unlearning" that comes from engagement and listening is one of the best foundations for clear-eyed policy.
Export controls are not the bane of AI safety but an important component of it. Their nature as a political tool that creates tension among states does not detract from their safety value; it means that practitioners of AI safety must understand and explain them better. We cannot expect others to welcome them as they might other safety work, and expecting them to do so will only set back the progress we hope to make.