

How Law Enforcement Agencies Can Use AI for Early Intervention and Officer Support
As artificial intelligence (AI) becomes increasingly integrated into every aspect of our lives,
the public safety sector is no exception. In a recent panel hosted by the FBI National
Academy Associates (FBINAA) and Coreforce, law enforcement and technology experts
gathered to explore how AI can empower departments through early intervention and
officer wellness tools—while maintaining privacy, security, and public trust.
The Burden of Review Fatigue
One of the most significant challenges law enforcement faces today is managing and
analyzing the vast amount of data generated—especially from body-worn cameras. John
Boyd, Law Enforcement Relations Manager at Coreforce, opened the session by acknowledging
the sheer volume of tasks officers must juggle and how current review systems are often
unsustainable.
“Only 7 to 10% of collected footage is ever reviewed,” said Simon Araya, CTO of Coreforce. “AI
allows agencies to analyze 100% of incidents—not just the ones flagged by someone.”
How AI Can Help Now
The panelists highlighted real-world AI tools that are already lightening the load:
• Audio transcription with keyword detection: This allows supervisors to identify
high-risk interactions without manually reviewing every second of footage.
• Sentiment analysis: AI can determine whether officer-citizen interactions were
positive, neutral, or negative—helping flag behavioral trends.
• Object and event detection: Automatically surfacing footage containing critical
moments such as use-of-force events or weapon displays.
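To make the first of these concrete, the sketch below shows the general idea behind keyword detection over a body-camera transcript. It is illustrative only: the keyword list, timestamps, and function names are hypothetical, and production systems pair speech-to-text output with far more sophisticated language models rather than simple substring matching.

```python
# Illustrative sketch: flag transcript segments containing high-risk phrases.
# The keyword list and data shapes here are hypothetical examples, not a real product API.

HIGH_RISK_KEYWORDS = {"weapon", "gun", "shots fired", "stop resisting"}

def flag_segments(transcript_segments):
    """Return (timestamp, text) pairs whose text contains a high-risk keyword."""
    flagged = []
    for timestamp, text in transcript_segments:
        lowered = text.lower()
        if any(keyword in lowered for keyword in HIGH_RISK_KEYWORDS):
            flagged.append((timestamp, text))
    return flagged

segments = [
    ("00:01:12", "Routine traffic stop, license and registration please."),
    ("00:03:45", "He has a gun, shots fired, requesting backup."),
]
print(flag_segments(segments))
```

The payoff is the one described on the panel: a supervisor reviews only the flagged segments rather than every second of footage.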
Crucially, these tools are now accessible to departments without dedicated data teams.
“The tools are intuitive enough for supervisors and peer officers to use directly,” said Jamie
Roush, CEO of CRH Analysis Consulting.
Start Small: You’re Already Using AI
For agencies that feel overwhelmed by the term “AI,” Roush offered a reassuring insight:
“Most departments are already using AI—they just may not realize it. Technologies like
license plate recognition, facial recognition, and even body-worn cameras often have
machine learning built in.”
A recommended first step? Take inventory of your current systems and identify what AI
capabilities already exist. From there, departments can begin applying those tools more
effectively.
Guardrails Are Non-Negotiable
Perhaps the most urgent discussion centered around data privacy and the responsible use
of AI. Gerard Gallant, AWS CIS Program Lead, warned against using public-facing
generative AI tools (like ChatGPT or Google Gemini) for sensitive data such as body camera
footage or criminal histories.
“The moment you input data into a public model, it’s gone. There is no delete button,” he
emphasized.
Instead, law enforcement must:
• Use private, secure systems with encryption.
• Work only with trusted vendors who understand Criminal Justice Information
Services (CJIS) compliance.
• Ensure data is never used to train public models or exposed to third parties.
What a Supportive Early Intervention System Looks Like
Jamie Roush described a model where AI integrates officer performance data, video
footage, and complaint records to flag potential issues early—before they escalate.
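The model Roush describes can be sketched in miniature. Everything below is a hypothetical illustration of the pattern, combining several data streams into one supportive flag: the field names, weights, and threshold are placeholders, not a validated scoring model, and any real system would be built and tuned with far more care.

```python
# Hedged sketch of an early-intervention flag combining hypothetical inputs:
# recent complaints, the rate of negatively scored interactions, and use-of-force events.
# Weights and threshold are illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class OfficerRecord:
    complaints_90d: int               # complaints in the last 90 days
    negative_interaction_rate: float  # fraction of reviewed interactions scored negative
    use_of_force_90d: int             # use-of-force events in the last 90 days

def needs_supportive_review(record: OfficerRecord) -> bool:
    """Flag an officer for coaching or wellness outreach when a simple
    weighted score crosses a placeholder threshold."""
    score = (
        2.0 * record.complaints_90d
        + 5.0 * record.negative_interaction_rate
        + 3.0 * record.use_of_force_90d
    )
    return score >= 6.0

print(needs_supportive_review(OfficerRecord(1, 0.1, 1)))
```

The design choice matters as much as the math: as Roush notes below, the output triggers coaching and wellness conversations, not discipline.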
“This isn’t about punishment—it’s about support,” she said. “It enables coaching,
mentoring, and wellness interventions that promote long-term officer health and
professionalism.”
The result? Departments that better align with the 21st Century Policing pillars of Policy &
Oversight and Officer Wellness & Safety.
Transparency with the Community
With growing community awareness around AI, panelists agreed on the importance of
proactive communication.
“Agencies can’t operate in a vacuum,” said Roush. “Explain to your community how AI
helps—not replaces—officers, and how it protects both them and the public.”
She emphasized that many citizens are already familiar with AI in their own lives. Engaging
them early builds trust and reduces suspicion.
Final Takeaway: Policy Before Perfection
Above all, panelists urged agencies not to delay creating AI policies. Even a basic policy
that distinguishes between public and private AI tools can protect your agency from
accidental breaches and reputational harm.
“Don’t wait for perfect clarity to write your policy,” Roush advised. “AI is moving too fast.
Your policy should be a living document, reviewed often and updated as technology
evolves.”
In Summary:
• AI can dramatically improve review efficiency and officer support.
• Tools are available and accessible—even for small departments.
• Public-facing models pose a major security risk—avoid them.
• Transparent communication and proactive policy are critical.
As AI evolves, law enforcement agencies that adopt it responsibly will be better equipped
to enhance accountability, streamline operations, and safeguard both officers and the
communities they serve.