Faith-Based Daily Awareness Post 9 December 2025

Faith-Based Security Headlines

These updates are shared to help raise the situational awareness of Faith-Based organizations to best defend against and mitigate the impacts from all-hazards threats including physical security, cybersecurity, and natural disasters.


Open-Source AI Models Easily Manipulated to Generate Antisemitic and Dangerous Content, New ADL Research Shows


The Anti-Defamation League (ADL) found that many widely used open-source large language models (LLMs) can be manipulated to produce hateful, antisemitic, or otherwise dangerous content. When researchers tested 17 of these models, they discovered serious flaws: none reliably refused harmful or antisemitic prompts. In tests requesting information about real-world locations, around 44% of responses were dangerously enabling, giving detailed information while ignoring the potential for harm. The ADL called this vulnerability a “critical flaw” in the AI ecosystem: because open-source models can be downloaded and modified by anyone, they are more prone to being used by malicious actors to spread hate, misinformation, or support violence.


Analyst Comments: While the ADL focused on antisemitism, the core vulnerability they expose applies far more broadly: these open models can be weaponized to produce hateful, discriminatory, or conspiratorial messaging against any religious, ethnic, or social group. Because many open-source models are freely downloadable, modifiable, and run without oversight, bad actors could easily fine-tune or prompt them to push anti-religious or other narratives.


Moreover, the danger isn’t limited to hateful text. As noted by several independent AI-risk researchers and security analysts, open-source models pose what’s known as a “dual-use” risk: they can be repurposed to generate instructions for illicit or dangerous behavior. This makes them a potential tool for actors seeking to plan violence, engage in terrorism, or facilitate hate crimes.


The core issue is that open-source AI is widely accessible but lacks reliable safety controls. Left unchecked, these models become enablers for both digital hate campaigns and real-world threats. Without robust governance, safety protocols, and accountability frameworks, the misuse potential of AI spans much wider than the antisemitic cases highlighted, touching virtually any form of prejudice or illicit behavior that could affect faith communities.


OH: Thieves break into about 30 cars at Cleveland churches during Sunday services


Thieves broke into nearly 30 cars parked at two churches in Cleveland during Sunday services while congregants were inside worshipping. About 20 vehicles were targeted at St. Stephen’s Church, and roughly two hours later, nine more were hit at Open Missionary Baptist Church. Windows were smashed, personal items were rummaged through, and several vehicles had belongings taken. Police are investigating, while members of the churches say the incidents have shaken their sense of safety.


Analyst Comments: This incident highlights a broader issue: public-gathering spaces like Houses of Worship (HOWs) often overlook parking-lot security. Yet those lots can be prime targets when many cars sit unattended during services. HOWs should consider treating the parking lot as part of overall premises security.


A few effective precautions include:

  1. Keeping lots well-lit
  2. Encouraging attendees to lock their cars and hide valuables
  3. Installing visible deterrents such as security cameras or patrols


More Faith-Based Stories

More Security-Focused Content

The FB-ISAO’s sponsor Gate 15 publishes a daily newsletter called the SUN. Curated from their open-source intelligence collection process, the SUN informs leaders and analysts with the critical news of the day and provides a holistic look at the current global, all-hazards threat environment. Ahead of the daily news cycle, the SUN provides situational awareness of the topics that will impact your organization.