While the world races to leverage artificial intelligence, militant groups are also exploring the technology, even if their exact objectives remain unclear.
US national security experts and intelligence agencies warn that extremist organizations could use AI to recruit members, produce realistic deepfake content, and enhance cyberattacks.
A user on a pro-Islamic State website last month encouraged supporters to incorporate AI into their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English.
“Some intelligence agencies worry that AI will contribute (to) recruiting,” the user continued. “So make their nightmares into reality.”
Though IS no longer controls territory in Iraq and Syria, the group operates as a decentralized network sharing a violent ideology. Experts say its early recognition of social media’s power for recruitment and disinformation makes its interest in AI unsurprising.
For loosely organized, under-resourced extremist groups—or even a single individual with internet access—AI can mass-produce propaganda or deepfakes, amplifying their influence.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”
How extremists are using AI
Since programs like ChatGPT became widely available, militant groups have experimented with AI to generate realistic photos and videos. Combined with social media algorithms, such content can attract new recruits, intimidate opponents, and spread propaganda on an unprecedented scale.
Two years ago, extremist groups circulated fabricated images of the Israel-Hamas war showing bloodied, abandoned children in destroyed buildings. The images fueled outrage and polarization while obscuring the actual horrors of the conflict. Similar tactics were used by violent groups in the Middle East and antisemitic organizations abroad.
Following a concert attack in Russia last year that killed nearly 140 people, AI-generated propaganda videos were widely shared online to recruit supporters.
IS has also created deepfake audio of its leaders reciting scripture and used AI to rapidly translate messages into multiple languages, according to SITE Intelligence Group, which monitors extremist activity.
‘Aspirational’ for now
Experts say these groups still lag behind state actors like China, Russia, or Iran and consider advanced uses of AI “aspirational.”
But Marcus Fowler, former CIA agent and CEO of Darktrace Federal, warned that the risks are growing as accessible AI tools expand. Hackers already use synthetic audio and video for phishing, impersonating officials to access sensitive networks. AI can also automate cyberattacks and generate malicious code.
A greater concern, highlighted in the Department of Homeland Security’s recent Homeland Threat Assessment, is that extremists could use AI to compensate for technical gaps in developing biological or chemical weapons.
“ISIS got on Twitter early and found ways to use social media to their advantage,” Fowler said. “They are always looking for the next thing to add to their arsenal.”
Efforts to counter the threat
Lawmakers are pushing measures to address these dangers.
Sen. Mark Warner of Virginia, top Democrat on the Senate Intelligence Committee, said AI developers should be able to share information about malicious uses by extremists, hackers, or foreign spies.
“It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors,” Warner said.
House lawmakers recently learned that IS and al-Qaida have held AI training workshops for their supporters.
Legislation passed by the U.S. House last month would require homeland security officials to assess AI threats from extremist groups annually.
Guarding against AI misuse, said Rep. August Pfluger, R-Texas, is similar to preparing for conventional attacks.
“Our policies and capabilities must keep pace with the threats of tomorrow,” he said.