The intersection of crises and artificial intelligence (AI) has introduced new risks, particularly in warfare, cybersecurity, and nuclear strategy. AI has amplified the speed at which conflicts escalate, the complexity of decision-making, and the potential for misinformation to shape crisis narratives. From the Pulwama-Balakot episode in 2019 to the 2022 accidental BrahMos missile launch, and from Ukraine to Gaza, AI-driven technologies have played a crucial role in shaping military strategies and crisis management. In South Asia, the adoption of AI in military doctrines by both India and Pakistan raises significant concerns about miscalculations, deterrence stability, and the erosion of traditional crisis management mechanisms.
AI has fundamentally altered electronic warfare and cybersecurity. In 2020, the Pakistan Army introduced a course on Cognitive Electronic Warfare, emphasizing AI's role in information dominance, cyber operations, and countering adversarial propaganda. In 2022, Pakistan launched the Army Centre of Emerging Technologies (ACET) to advance AI research in cybersecurity and electronic warfare. India, meanwhile, has made substantial progress in integrating AI into its military strategy. The Joint Doctrine of the Indian Armed Forces (2017) set the stage for leveraging disruptive technologies, followed by the Land Warfare Doctrine (2018), which emphasized AI-driven military integration, hybrid warfare, and a multi-front operational environment. These developments illustrate how AI is reshaping military postures and the balance of power in South Asia.
The supposedly accidental launch of an Indian BrahMos supersonic cruise missile into Pakistan on March 9, 2022, underscored the risks of unintended escalation in the AI age. The missile, fired from Sirsa, India, traveled 124 kilometers into Pakistani airspace before crashing in Mian Channu. While India later termed it a technical malfunction, the incident demonstrated the fragility of crisis stability between nuclear-armed rivals. Had Pakistan's early warning and air defense systems been AI-driven with autonomous threat response mechanisms, the risk of misinterpretation and retaliation could have been catastrophic. In a future scenario where AI influences Launch on Warning (LOW) or Launch Under Attack (LUA) doctrines, such incidents could lead to rapid escalation, bypassing traditional human decision-making filters. Even if the launch was accidental, as India claims, a Pakistani retaliatory response would have carried consequences far beyond what either side intended.
The 2019 Pulwama attack and the subsequent Balakot airstrikes demonstrated how AI-powered disinformation campaigns can intensify crises. In the wake of the attack, AI-driven social media manipulation fueled nationalist sentiments in both India and Pakistan, making de-escalation more challenging. Algorithms amplified false claims, including exaggerated battle damage assessments and unverified reports of military engagements. This phenomenon raises concerns about deepfake technology and AI-generated propaganda influencing crisis narratives in future conflicts. Imagine a scenario where AI-powered disinformation falsely depicts a military leader issuing a declaration of war—such manipulations could trigger a chain reaction of retaliatory actions before verification mechanisms kick in.
A similar risk was evident in the 2008 Mumbai attacks, when a hoax call, allegedly made in the name of Indian External Affairs Minister Pranab Mukherjee, threatened military retaliation against Pakistan. The call, received by then-Pakistani President Asif Ali Zardari, nearly pushed both nations to the brink of war before it was identified as fake. While traditional verification mechanisms eventually de-escalated the crisis, the incident highlighted how false information can dangerously shape national security decisions. In today's AI-driven information landscape, such incidents could become far more sophisticated. Deepfake technology, AI-generated audio, and synthetic media could fabricate high-level diplomatic conversations or military directives, making it even harder to distinguish reality from deception. AI-powered disinformation campaigns, if not adequately countered, could accelerate crisis escalation, mislead policymakers, and erode deterrence stability. The danger is not merely hypothetical: recent advances in AI-driven cyber intrusions, voice cloning, and algorithmic manipulation of social media make it possible for adversarial actors to fabricate intelligence reports or alter crisis communications in real time. As AI increasingly shapes decision-making, miscalculations based on false intelligence, hacked diplomatic channels, or AI-manipulated cyber threats could trigger military responses before human verification mechanisms intervene, making crisis management in the AI age far more precarious than ever before.
India's nuclear doctrine has evolved significantly since its nuclear tests in 1998. Initially committed to a No First Use (NFU) policy, India appears to be shifting its stance as technology advances. The 2014 BJP election manifesto suggested revisiting India's nuclear doctrine, raising concerns in Pakistan, and in 2019 Indian Defence Minister Rajnath Singh stated that India's NFU commitment was "subject to circumstances." With AI-enhanced early warning systems, India may move toward Launch on Warning (LOW), increasing crisis instability. Pakistan has long been skeptical of India's NFU policy, viewing it as a doctrinally flexible stance rather than a firm commitment. AI-driven advancements, particularly in autonomous threat assessment and nuclear decision-making, could further blur the line between preemption and retaliation, lowering nuclear thresholds in future conflicts.
The Russia-Ukraine war has demonstrated how AI and autonomous systems are reshaping warfare. AI-powered drones and loitering munitions have altered battlefield dynamics; AI-driven cyber warfare has disrupted communications, power grids, and command systems; and AI-driven battlefield analytics have provided real-time intelligence for targeting. In Gaza, AI has been used controversially for military targeting, raising concerns about algorithmic bias in distinguishing combatants from civilians. Increased reliance on AI in targeting decisions without robust human oversight raises moral and ethical dilemmas, making misidentifications and excessive collateral damage more likely. These cases illustrate that AI does not inherently make conflicts more controlled or precise; without proper oversight, it can amplify miscalculations and unintended escalation.
AI is also transforming cyber warfare, making cyberattacks more automated, intelligent, and harder to detect. Key risks include AI-enhanced cyberattacks on military command-and-control systems, AI-generated deepfake audio and video to manipulate leadership communications, and the hacking of early warning systems to create false alerts of incoming strikes. For nuclear-armed states like India and Pakistan, a cyberattack on AI-driven missile command systems could create false alarms, triggering an unintended nuclear exchange. The absence of AI-specific crisis management protocols makes such scenarios even more dangerous. To mitigate these risks and prevent the reckless use of disruptive technologies, both Pakistan and India must develop regulatory frameworks and confidence-building measures (CBMs). These should include robust communication channels, bilateral discussions on disruptive technologies, particularly in command-and-control systems, and agreements on responsible AI use in military operations. Given the fragile security environment in South Asia, the integration of AI into military systems requires immediate attention to prevent unintended escalation and ensure strategic stability in the region.
Saima Sial is an independent analyst and a former South Asia Fellow at the Henry L. Stimson Center, Washington D.C.