In an age where technology intertwines with every facet of human life, a new form of conflict has emerged, one that targets not cities or soldiers but the very essence of human thought. Cognitive warfare, a term gaining traction in military and academic circles, represents a sophisticated, non-kinetic strategy to manipulate perceptions, beliefs, and decision-making processes. Unlike traditional warfare, which relies on physical force, cognitive warfare exploits the vulnerabilities of the human mind, leveraging advanced technologies and psychological tactics to achieve strategic objectives. As nations like China and Russia refine these methods, the dangers of cognitive warfare—ranging from societal polarisation to the erosion of democratic institutions—pose an unprecedented threat to global stability. I outline what cognitive warfare is, the key threats it poses, and why it demands urgent attention.
Cognitive warfare is the deliberate manipulation of human cognition—how individuals and groups perceive, think, and act—to gain a strategic advantage over an adversary. NATO's Allied Command Transformation defines it as "activities conducted in synchronisation with other Instruments of Power, to affect attitudes and behaviours, by influencing, protecting, or disrupting individual, group, or population level cognition." Unlike traditional psychological operations or propaganda, cognitive warfare integrates cyber capabilities, artificial intelligence (AI), social media, and insights from cognitive psychology and neuroscience to target the subconscious and emotional drivers of decision-making.
The concept is not entirely new; manipulation of public opinion has historical roots, from Sun Tzu's strategies to Cold War-era disinformation campaigns. However, today's digital landscape amplifies its reach and precision. Social media platforms, big data analytics, and AI-driven tools enable actors to micro-target individuals with tailored narratives, exploiting cognitive biases like confirmation bias or fear-based reasoning. For example, China's "three warfares" strategy—combining public opinion, psychological, and legal influence—uses cognitive warfare to shape perceptions without firing a shot. Russia's disinformation campaigns during the Ukraine conflict similarly aimed to sow distrust and confusion, manipulating global perceptions of the war's legitimacy.
At its core, cognitive warfare seeks to "change not only what people think, but how they think and act," potentially fracturing societies by undermining trust in institutions and collective resilience. It operates in the "grey zone", below the threshold of armed conflict, making it covert, hard to detect, and challenging to counter.
The rise of cognitive warfare introduces multifaceted threats that exploit the interconnectedness of modern societies and the vulnerabilities of human cognition. Here are the primary dangers it poses:
Cognitive warfare's most insidious threat is its ability to erode trust in institutions, media, and even reality itself. By flooding digital platforms with disinformation, deepfakes, and synthetic media, adversaries can create an "information vacuum" where distinguishing fact from fiction becomes nearly impossible. Russia's 2008 South Ossetia campaign, for instance, used media to paint Georgia as the aggressor, shaping international perceptions despite evidence to the contrary. Such tactics amplify polarisation, as seen in China's efforts to exacerbate divisions among anti-China groups in Taiwan.
This erosion can destabilise societies, fostering cynicism, apathy, or paranoia. NATO warns that, in its extreme form, cognitive warfare can "fracture and fragment an entire society," weakening its collective will to resist external threats. In democracies, where public opinion shapes policy, this loss of trust can paralyse governance and embolden authoritarian narratives.
Cognitive warfare poses a direct threat to democratic integrity by targeting electoral processes and public discourse. Adversaries use disinformation, bot-driven amplification, and astroturfing to manipulate voters, as seen in China's campaigns against Taiwan's Covid-19 response. These operations exploit cognitive biases, reinforcing existing beliefs or sowing confusion to suppress voter engagement.
The use of emerging technologies like AI-generated deepfakes or synthetic media heightens this risk, creating hyper-realistic content that can mislead even discerning audiences. Such tactics can discredit candidates, inflame tensions, or undermine faith in electoral outcomes, threatening the foundation of democratic governance.
The convergence of AI, neuroscience, and digital platforms has transformed cognitive warfare into a precision weapon. AI enables adversaries to analyse vast datasets, tailoring propaganda to individual preferences based on their digital footprints. China's development of "Intelligent Psychological Monitoring Systems" for soldiers, which track emotional and psychological states, illustrates how technology can manipulate cognition at scale.
More alarming is the potential of "NeuroStrike" technologies, which could disrupt cognitive functions like perception or motor skills through covert, non-kinetic means. These tools, still emerging, could target military leaders or civilians, impairing decision-making without detectable traces. The covert nature of such technologies complicates attribution and defence, leaving societies vulnerable to strategic disruption.
Cognitive warfare exploits subconscious and emotional vulnerabilities, bypassing rational deliberation. By inducing fear, confusion, or anger, adversaries can manipulate behaviour, as seen in China's military intimidation tactics against Taiwan, which foster self-censorship through stress. Research shows that confusion or frustration can reduce scrutiny of new information, making individuals more susceptible to disinformation.
This manipulation can have long-term effects, such as "mental damage, confusion, and hallucinations," potentially forcing adversaries to capitulate without physical conflict. In civilian contexts, it can marginalise groups, exacerbate social divisions, and trigger civil unrest, as NATO warns in its Cognitive Warfare Concept.
Cognitive warfare provides adversaries with a low-cost, high-impact tool to achieve strategic goals without kinetic escalation. China's focus on "intelligentisation" and Russia's "political warfare" demonstrate how cognitive operations can weaken adversaries' resolve, disrupt alliances, or shape global narratives. For example, China's accusation of U.S. cognitive warfare over a South China Sea photo illustrates how even symbolic acts can be weaponised to shift perceptions.
By operating below the threshold of war, cognitive warfare evades traditional deterrence mechanisms, complicating responses from liberal democracies bound by legal and ethical constraints. This asymmetry allows authoritarian regimes to gain leverage, as seen in Russia's campaigns to undermine NATO's unity.
The dangers of cognitive warfare extend beyond immediate tactics to long-term societal and geopolitical consequences. First, its covert nature makes detection and attribution difficult, delaying countermeasures and allowing adversaries to operate with impunity. Second, the reliance on emerging technologies like AI and neurotech outpaces current governance frameworks, leaving societies exposed to unregulated threats. Third, the targeting of civilians and non-combatants blurs the line between war and peace, raising ethical and legal questions about what constitutes an "act of war" in the cognitive domain.
The most profound danger is the potential to undermine human autonomy. By manipulating subconscious processes, cognitive warfare could erode free will, turning individuals into unwitting tools of adversarial agendas. Cognitive warfare "exploits advances in digital technology" to control human will, a sentiment echoed in fears of AI-driven manipulation. If unchecked, this could lead to a dystopian reality where trust, truth, and democratic values are casualties of a silent war.
Addressing cognitive warfare requires a multifaceted approach. Policymakers must regulate emerging technologies, balancing innovation with cognitive security, as suggested by NATO's call for governance frameworks that define cognitive aggression. Public education is critical to foster media literacy and critical thinking, enabling citizens to resist manipulation. Militaries, like the U.S., are developing training programs to enhance cognitive resilience, using simulations to counter biases and disinformation. NATO's Cognitive Warfare Concept emphasises collaboration, real-time monitoring, and AI-driven detection systems to track campaigns as they unfold.
Ethically, democracies must navigate the use of offensive cognitive measures carefully, ensuring compliance with principles of necessity and proportionality, especially in kinetic conflicts. Projects like France's Gecko initiative, which simulates cognitive warfare scenarios, underscore the need to prepare decision-makers for this new battlefield. Above all, fostering societal resilience through transparent governance and robust civil communications is essential to withstand cognitive attacks, as NATO's founding treaty emphasises.
Cognitive warfare is not a distant threat but a present reality, reshaping conflict in ways that challenge traditional defences. By targeting the human mind, it exploits our deepest vulnerabilities—our biases, emotions, and trust—using technology to amplify its reach. Its dangers, from societal fragmentation to the subversion of democracy, demand urgent action. As adversaries like China and Russia refine these tactics, the global community must unite to protect cognitive autonomy, strengthen resilience, and redefine security in an era where the mind is the ultimate battlefield. The stakes could not be higher: in cognitive warfare, the loss of trust and truth threatens the very fabric of free societies.
Emeritus Professor of Cognitive Sciences at Bordeaux Institute of Technology
Key takeaways
- Cognitive warfare denotes the potential for manipulation by hostile actors using cognitive science, building on techniques such as propaganda and disinformation.
- It encompasses operations aimed at corrupting the adversary's thought processes and altering their decision-making capacity using a scientific approach.
- It affects the cognitive capacities of individuals through the use of technologies, which can influence attention and reactions in the short term, and cognitive structure in the long term.
- To deal with this, we need to physically protect people in strategic situations and promote the sensible use of digital technology, despite the challenges.
- The Gecko project aims to develop systems for exploring cognitive warfare in the context of fictitious crises, in order to prepare those involved in national security operations.
"Cognitive warfare," an expression that appeared in 2017 in the public speeches of American generals and was quickly taken up by scientists and political scientists, is as worrying as it is fascinating. What does it mean exactly? We take a look at this new concept with Bernard Claverie, professor of cognitive science at the Bordeaux Polytechnic Institute and founder of the École nationale supérieure de cognitique.
The concept of cognitive warfare is now very much in vogue in the world of defence. How did it originate?
Bernard Claverie: The concept is dual – civil and military – and is also known as "cognitive dominance" or "cognitive superiority". It came to the fore around fifteen years ago in the United States. Initially, it drew attention to the possibilities for manipulation opened up by the considerable advances in cognitive science, and voiced the suspicion that hostile states or organisations might put those advances into practice. Until recently, psy-ops (psychological operations), including propaganda and disinformation, as well as offensive marketing in the civilian sector, were based on fairly sketchy concepts of cognitive processes, which were still poorly understood. These operations therefore attempted to control what they could, namely the information disseminated to enemies, competitors or consumers, in the hope of influencing their decisions and behaviour.
But the development of the so-called "hard" cognitive sciences – i.e. non-interpretative, verifiable and quantifiable – has changed all that. These disciplines study thought as a material object, from the converging points of view of various fields of knowledge: neuroscience, linguistics, psychology, analytical philosophy and the digital sciences, including AI. Their results show that it is possible to precisely target the cognitive processes themselves, and thus directly modify the opponent's thought processes.
How can we define cognitive warfare today?
We are faced with a new threat, the boundaries and capabilities of which we are still trying to understand. If we must define it, we can say that cognitive warfare is at the very least a field of research – and probably a way of contributing to the preparation and conduct of war or hostile action – implemented by state or non-state actors. It covers operations aimed at distorting, preventing or annihilating the adversary's thought processes, situational awareness and decision-making capacity, using a scientific approach and technological, and in particular digital, means.
Could you give us some examples of actions that could be covered by this concept?
Cognitive warfare uses technology as a weapon. It can use invasive technologies to alter the medium of thought, the brain, and more broadly the nervous system that underpins its functioning. In autumn 2016, for example, some forty employees of the US embassy in Cuba suddenly developed strange incapacitating symptoms, which have since been dubbed "Havana syndrome". It was suspected that a deliberate manoeuvre by an enemy power had exposed these people to neurobiological alteration through directed radiation.
Cognitive warfare can above all take advantage of digital technologies to disrupt specific cognitive functions (memory, attention, communication, emotions, etc.) in targeted individuals. Examples include sending members of parliament, in the middle of a voting session, personalised text messages about their relatives, or sending photos of dead children to military decision-makers involved in an operation. The aim is to disrupt short-term thinking by influencing attention, decision-making and reaction.
However, and this is the most worrying aspect, there is a suspicion that these operations are taking place quietly over a long period of time. Using cognitive biases, they modify the thinking habits of the victims and have lasting, even irreversible effects on the cognitive personality, i.e. the way in which an individual processes information. For example, a pilot may be conditioned to react in the wrong way in a specific situation, a technician in charge of maintaining a machine may have their motivation gradually subverted by "digito-social" influences, or individuals may be radicalised within identity-based groups via social platforms, in order to convince them, apparently of their own free will, of the moral rightness of lethal operations. The actions are widespread, involving both the digital and real worlds. Proof of a deliberate attack can then be much harder to establish, especially as a cognitive effect is often detected too late, and the targeted person naturally tends to minimise it, or even to conceal the fact that they have been targeted.
As you pointed out earlier, digital resources seem to be omnipresent in cognitive warfare…
We can no longer live without digital technology: it shapes our way of thinking from a very early age, so it has a powerful influence on our intelligence and emotions, our minds and our pleasure, our ways of thinking and planning.
What's more, the hegemony of predatory companies in the organisation of the cyber world, combined with the fragility of the legal systems overseeing new practices, has very quickly attracted the interest of leaders and ideologues, who have taken advantage of this to find the means to carry out their projects. Attackers rely on the skills and resources of these private companies or on the proxies of unscrupulous states, often with the help of ideological accomplices, i.e. people subjected to distorted thinking who become relays for altering the thinking of others.
The tools of digital hyperconnectivity are thus turning the cyber world into a gigantic theatre of operations, unfortunately with the complacency, even dependence, of users who, for the most part, prefer risk to reason.
How can we protect ourselves from these attacks?
We need to try to act proactively. Beyond the physical protection of individuals in strategic situations, part of the solution would be to free ourselves from our addiction to digital technology, or at least to learn to use it sensibly and objectively. However, this goal seems unattainable today… Developing critical thinking, verifying information, treating content shared on the Internet with suspicion, and disconnecting as often as possible offer another layer of protection, fallible but already useful. But can such habits be imposed?
For military personnel, political figures and strategic industrial players, who are the first targets of short-term cognitive actions, it is possible to resort to specific, adapted awareness-raising campaigns. The Gecko project aims to develop systems for exploring cognitive warfare in fictitious crisis situations, to prepare civilian and military decision-makers and operational staff involved in national security operations in France and overseas for the risks involved. In some cases, the use of digital decision-support or decision-monitoring tools could also prove effective. We are still in the early stages of identifying the weapons of this new form of warfare, and therefore of combating it.
We need to discuss the ethical dimensions of this type of cognitive action. A democracy is vulnerable to this kind of attack… but can it simply carry one out itself?