Rather like quantum computing, the rise of all tiers of AI has given us pause for thought when it comes to cybersecurity. As we gain more power to compute and automate, we gain an equivalent level of risk and vulnerability driven by malicious entities, spanning from hell-bent script kiddies to nation-state actors. Can we now use AI-for-good to keep us safe and create a new level of autonomous crime investigation services?
For years, the default focus of enterprise security has been defensive and digital in the most binary terms. Good cybersecurity practices would always feature a usual suspects beauty parade of endpoint detection tools, Security Information and Event Management (SIEM) services, firewalls, network monitoring layers, data loss prevention, identity management and perhaps a good dose of Security Orchestration, Automation and Response (SOAR).
All these technologies are designed to protect the perimeter and detect anomalies before they become breaches. It is a mindset shaped by a world where the primary threat was always a hacker probing for a gap in the system.
Today, in a world of AI, threats go beyond that point.
Thomas Drohan, chief strategy officer at Clue Software, says that what has changed is not just the volume of threats, but where they operate: inside routine workflows, trusted relationships, and everyday decisions.
Europol’s EU-SOCTA 2025
“This evolution is taking place alongside a geopolitical shift with far‑reaching consequences for every large organisation operating in Europe,” said Drohan, speaking to Techzine Global this month. “Europol’s EU-SOCTA 2025 warns that serious and organised crime is evolving at an unprecedented pace, exploiting technological developments and geopolitical instability to expand its reach and deepen its impact.”
In this new era, online fraud schemes alone have become unprecedented in size, variety, sophistication and reach; they are expected to outpace all other forms of serious and organised crime. Meanwhile, nearly 10,000 victims of human trafficking were registered across the EU in 2024, facilitated by criminal networks that Europol describes as operating with the structure and reach of global enterprises.
“Enterprises are now on the front line of this, with organised crime increasingly sitting within corporate systems, exploiting high‑volume, routine processes where abnormal activity is hard to distinguish from business as usual,” explained Drohan. “Whether it’s industrial-scale fraud, human exploitation networks, or nation-state proxies, criminal groups are professionalising and automating, often using AI to scale deception and evade detection.”
Drawing on his more than two decades in the intelligence and investigations business, Drohan reminds us that criminals rapidly gravitate to where an organisation is vulnerable. Systems and IT infrastructure are one part, but increasingly it is people who are most exposed – either coerced or colluding, granting access to what is of value.
In this environment, the goal is not simply to block and detect. It is to build intelligence, coordinate disruption, and secure justice outcomes.
Investigation platforms enter the core stack
Work inside Clue Software sees investigation platform technologies now positioned as an integral element of the core IT stack. Why? Because the technology stack required to tackle serious and organised crime does not need to be complicated.
- At the top sits data-driven anomaly detection.
- Below that should sit a system that transforms these anomalies into a coherent intelligence picture, enabling teams to coordinate disruption, construct robust evidence cases and produce strategic assessments that can genuinely reshape organisational priorities.
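The handoff between those two layers can be sketched in a few lines of code. Everything below is illustrative only: the `Alert` and `Case` types, the field names and the severity threshold are assumptions made for the example, not Clue Software's actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Alert:
    """Layer 1 output: a raw anomaly from a detection tool."""
    source: str        # e.g. "SIEM", "EDR"
    description: str
    severity: int      # 1 (low) .. 5 (critical)

@dataclass
class Case:
    """Layer 2 object: alerts promoted into a coherent intelligence picture."""
    title: str
    alerts: list = field(default_factory=list)
    opened_at: str = ""

def open_case(title: str, alerts: list, min_severity: int = 3) -> Case:
    """Promote only alerts at or above a severity threshold into an investigable case."""
    kept = [a for a in alerts if a.severity >= min_severity]
    return Case(title=title,
                alerts=kept,
                opened_at=datetime.now(timezone.utc).isoformat())

alerts = [Alert("SIEM", "Unusual payment batch approved out of hours", 4),
          Alert("EDR", "Benign config drift", 1)]
case = open_case("Possible invoice fraud", alerts)
```

The point of the sketch is the shape of the flow, not the thresholding rule: detection produces isolated signals, and the second layer is where they are gathered into something a team can actually investigate.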
What we may be seeing, though, is that this second layer remains chronically underserved.
According to Drohan and team, “Enterprises invest heavily in detection, but very little in what comes next. Alerts accumulate. Suspicions are logged. But the investigative work that determines whether harm is prevented often falls back onto fragmented teams using shared folders, spreadsheets, and email chains; tools incapable of meeting the evidential, legislative, or operational requirements of serious crime investigations. The point of investigation is not simply resolution, but learning; feeding insight back into control processes and decisions to reduce future harm.”
Critically, investigative environments are legally consequential. We might suggest then that compliance cannot be an afterthought. Evidence must be handled in ways that preserve integrity, cases must meet courtroom standards, intelligence must carry clear provenance, decisions must be auditable, and workflows must reflect legislation, not work around it.
More than a software challenge?
Many agree that this is not simply a software challenge. It requires expertise in investigative technique, evidential integrity, multi-agency coordination and legislation-aware workflows that govern how cases are built.
Equally important is access to codified best practice. Organised crime groups are sophisticated, entrepreneurial, and increasingly tech-savvy, sharing tactics, adapting quickly, and learning from each other. Organisations will only get ahead if they treat investigation capabilities in the same way, encoding best practice into their workflows, sharing intelligence across teams and sectors, and continuously updating their approach as new threats emerge.
“The moment organisations strengthen this investigative layer, a new challenge emerges: the sheer volume and complexity of information that teams must process. Interviews, emails, documents, financial records, OSINT, images, cross-border intelligence, internal reporting; all of it arriving at speed and in unstructured formats. At this stage, AI becomes genuinely transformative,” explained Drohan.
The foundational requirement is turning unstructured inputs into clean, consistent, provenance-rich records that can withstand scrutiny. Meaningful AI outcomes depend entirely on this. Without structured, trustworthy data, AI delivers noise, not insight.
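A minimal sketch of that foundational step might look like the following: each raw input is wrapped in a record carrying its source, capture time and a content hash, so that later tampering is detectable. The record shape and function names are assumptions made for illustration, not a description of any specific product.

```python
import hashlib
from datetime import datetime, timezone

def make_record(raw_text: str, source: str) -> dict:
    """Wrap an unstructured input in a provenance-rich, integrity-checkable record."""
    content = raw_text.strip()
    return {
        "content": content,
        "source": source,  # where the material came from, e.g. "interview", "OSINT"
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # SHA-256 of the stored content; recompute later to detect tampering
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }

def verify_record(record: dict) -> bool:
    """Integrity check: does the stored hash still match the content?"""
    digest = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return digest == record["sha256"]

rec = make_record("  Interview transcript, 14:02: subject denies access.  ", "interview")
```

However it is implemented, the design choice is the same: provenance and integrity metadata are attached at ingestion time, not reconstructed afterwards, which is what allows downstream AI outputs to be traced back to trustworthy inputs.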
This is why Drohan and team think that AI cannot sit on the margins of investigation. It must be embedded inside trusted, legislation-aware workflows, assisting with triage, surfacing relevant entities, detecting signals, categorising content, and making connections across datasets.
When integrated in this way, tasks that once took hours can be completed in seconds, allowing teams to process far more information without increasing headcount.
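The triage step described above can be illustrated with a toy scorer that ranks documents by how many known entities of interest they mention. The entity list and the substring-matching rule are invented for the example; a real system would use proper entity extraction rather than keyword counting.

```python
def triage(documents: list, entities: list) -> list:
    """Rank documents by how many watched entities they mention (case-insensitive)."""
    scored = []
    for doc in documents:
        lowered = doc.lower()
        score = sum(1 for e in entities if e.lower() in lowered)
        scored.append((doc, score))
    # Highest-scoring documents surface first for human review
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

docs = ["Invoice approved by J. Smith via Acme Ltd",
        "Routine payroll run completed"]
ranked = triage(docs, entities=["Acme Ltd", "J. Smith"])
```

Note that even this toy version only reorders the queue; the human reviewer still decides what each document means, which matches the augmented-decision-making stance Drohan describes below.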
Auditable, defensible, transparent processes
“Speed alone is not enough. Investigative work demands that AI operate within auditable, defensible, transparent processes, where humans remain in accountable roles. What organisations need is augmented human decision-making, technology that accelerates the work while ensuring people retain authority over outcomes,” asserted Drohan.
The path forward is moving from isolated experiments to operational capability. That means applying AI to the specific friction points that slow investigations: triaging information, linking evidence and preparing material for human review.
When AI is embedded into these repeatable workflows, it becomes a multiplier, reducing noise, sharpening focus, and enabling investigators to work with greater speed and confidence, without compromising evidential standards. Used well, AI helps teams focus effort where risk, harm and exposure are highest – enabling targeted disruption rather than blanket response.
A new era of enterprise security?
Drohan says that organised crime has moved beyond the perimeter. It now exploits supply chains, operational processes and, most often, people. As criminal groups professionalise and adopt AI, enterprises must re‑engineer their investigative capability to match them.
The next era of enterprise security will not be defined by more alerts or more detection tools. It will be defined by how effectively organisations can investigate: how quickly they can triage information, how reliably they can build intelligence pictures, how confidently they can coordinate disruption, and how defensibly they can operate within legislative boundaries.
“AI also offers modern investigation teams the opportunity to get further upstream of threats, so that their investigations can be targeted at getting to the harm earlier and either preventing it from happening or disrupting it earlier,” he said. “When deployed responsibly, safely and inside compliant workflows, AI will be the force multiplier that enables cross-functional teams and decision makers to keep pace with the speed and scale of modern organised crime.”
Close the loop, don’t be a nincompoop
Drohan concludes by saying that investigation is not the end state. Its value lies in learning: using investigative insight to drive concrete recommendations back into the wider business, strengthening controls, adjusting processes, and preventing future harm. This closes the loop between sensing, making sense and acting.
Clue Software’s central proposition is that the organisations that succeed here will be those that treat investigation capability as a strategic asset: embedding best practice, institutionalising knowledge, integrating AI where it accelerates expertise and ensuring every automated step strengthens evidential integrity rather than undermining it.