Opinion

Ethical adoption of generative AI informed by human reasoning delivers significant benefits to intelligence analysis

Faced with overwhelming growth in the type and volume of digital evidence, intelligence agencies and law enforcement teams can use generative AI to identify anomalies, track patterns and reach insights far quicker than human analysts alone. While these advanced tools can shorten investigations and speed up prosecutions, Jamie Caffrey, Portfolio Leader at i2 Group, explains why analysts must ensure their AI capabilities remain auditable, transparent and aligned with investigative standards to maintain public trust.

In the intelligence and law enforcement worlds there are some hard truths to confront. Agencies’ budgets have tightened, and there are too few trained analysts to respond to the growing mountain of digital evidence flowing from sources as diverse as smartphones, credit card transactions, social media and video footage.

Advances in generative AI in recent years, however, present agencies with an opportunity to expand the depth and reach of their intelligence gathering. Rather than replacing analysts, as some fear, these advanced analytical tools can in fact enhance the analyst’s work and deliver better-informed decisions. Insights that once took months of analysis to achieve can now take days.

Applied to intelligence workflows, generative AI’s natural language processing, machine learning, and image and audio analysis can take on tasks quickly and accurately, capturing and assembling critical insights at scale across myriad data streams, whether unstructured text, video, images or audio.

As important as this development is, the new capabilities that generative AI brings also raise questions about how these tools are used, especially where they can affect people’s rights and public trust.

This is why senior leaders must steer the adoption of emerging AI tools in a way that doesn’t expose their intelligence agency or law enforcement organisation to new legal, ethical or operational risks.

For any senior leader who greenlights the deployment of advanced AI tools, the overriding goal is that those tools improve the analytical process. That means being mindful that AI-generated inferences can mask hidden biases or errors that could ultimately undermine an investigation’s integrity.

When a prosecution is brought, the courts must have confidence that the investigators who built the case can explain how their conclusion was reached and clearly demonstrate evidentiary rigour. Anything short of this risks the case collapsing and public trust in the process being eroded.

In the United States, there are already legislative moves requiring intelligence agencies to disclose the use of AI tools in the reports they compile.

Above all, human oversight must be paramount, not least to ensure that any intelligence insights gained from using the technology remain honest, transparent and unbiased. At the same time, the human analyst must continue to make the judgement calls that are so critical to bringing investigations to a successful conclusion.

In this respect, generative AI can enhance the analytical process by allowing investigators to ask complex questions in plain language. This is important because it minimises the need for specialised query skills while also helping to ensure that analysts can explain how they achieved their results from start to finish.
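
As an illustration only, the minimal Python sketch below shows what such a workflow might look like: a plain-language question is translated into a structured query, and every step is logged so the chain of reasoning can be audited afterwards. All names here (translate_to_query, AuditLog and so on) are hypothetical, not part of any i2 Group product.

```python
# Hypothetical sketch: plain-language questioning with a built-in audit trail.
# None of these names refer to a real product API; they only illustrate the pattern.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    """Records every step from question to answer so the reasoning can be reviewed."""
    entries: list = field(default_factory=list)

    def record(self, step: str, detail: str) -> None:
        self.entries.append((datetime.now(timezone.utc).isoformat(), step, detail))

def translate_to_query(question: str) -> str:
    """Stand-in for a generative model turning plain language into a structured query."""
    # A real system would call a language model here; this placeholder keeps the sketch runnable.
    return f"SELECT * FROM transactions WHERE description MATCHES '{question}'"

def answer(question: str, log: AuditLog) -> str:
    log.record("question", question)
    query = translate_to_query(question)
    log.record("generated_query", query)  # the analyst can inspect exactly what was run
    result = "3 accounts linked by shared card numbers"  # placeholder result
    log.record("result", result)
    return result

log = AuditLog()
print(answer("Which accounts share payment cards with the suspect?", log))
for timestamp, step, detail in log.entries:
    print(timestamp, step, detail)
```

The point of the design is that the generated query and the result are captured alongside the original question, so the "start to finish" explanation the courts expect is produced as a by-product of the analysis rather than reconstructed afterwards.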

This is particularly relevant for investigations where agencies handle sensitive yet crucial information stored across a multitude of disconnected data systems overseen by different departments or external agencies.

As analysts know only too well, the controls that restrict data sharing between departments provide an important security safeguard on the one hand, yet on the other they can frustrate efforts to build a complete picture of any nefarious activity. At the same time, senior leaders may have political motivations for withholding certain intelligence.

Once again, generative AI has the potential to resolve these issues and enhance intelligence analysis by providing the means to share securely without undermining the classification rules in place, so long as senior leaders maintain oversight and ensure the intelligence gathered is grounded in human-led reasoning. Only by directing collaboration at a strategic level can the technology deliver the game-changing capabilities it promises.
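
To make the idea concrete, the sketch below shows one way classification rules might be enforced at the point of sharing: records above the recipient’s clearance are filtered out automatically, and a named human approver must sign off before anything is released. The clearance levels and field names are illustrative assumptions, not a real agency schema.

```python
# Hypothetical sketch of classification-aware sharing; the levels and field names
# are illustrative assumptions, not a real agency schema or product API.
CLEARANCE_ORDER = ["OFFICIAL", "SECRET", "TOP_SECRET"]

def releasable(record: dict, recipient_clearance: str) -> bool:
    """A record is releasable only if its classification does not exceed the recipient's clearance."""
    return CLEARANCE_ORDER.index(record["classification"]) <= CLEARANCE_ORDER.index(recipient_clearance)

def share(records: list[dict], recipient_clearance: str, approved_by: str) -> list[dict]:
    """Filter records to the recipient's clearance; a named human approver is required."""
    if not approved_by:
        raise ValueError("human sign-off is required before sharing")
    return [r for r in records if releasable(r, recipient_clearance)]

records = [
    {"id": 1, "classification": "OFFICIAL", "summary": "vehicle sighting"},
    {"id": 2, "classification": "TOP_SECRET", "summary": "source identity"},
]
print(share(records, "SECRET", approved_by="duty analyst"))  # only record 1 is released
```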

For the sceptics out there, the success that UK law enforcers have had in cracking down on drug traffickers operating across regional borders is proof that collaboration across different jurisdictions can deliver resounding results. Of course, when agencies can see the fruits of sharing data, they will feel more encouraged to collaborate.

As digital crime mounts, generative AI presents an opportunity for intelligence agencies and law enforcement organisations to remain one step ahead.

Encouragingly, we are also seeing an increasing number of digital natives entering the intelligence and law enforcement sectors who expect to work with advanced AI tools and can adapt as the technology evolves.

The real winners will be those organisations that successfully balance investment in the analytical talent of their people with governance frameworks that manage the risks the advanced technology poses, so that AI tools are deployed ethically.

Senior leaders who do all of this and foster cross-agency cooperation will be best placed to respond to the challenges that digital crime poses now and in the future.