Last summer we introduced Automated Leads, a transformative approach to threat detection designed to surface the subtle signs of an attack before it turns into a full-blown breach. It’s powered by CrowdStrike® Signal (distinct from SGNL) and delivered via the CrowdStrike Falcon® platform.
Since that launch, the goal has remained the same: to move beyond the limitations of traditional alerting and give analysts a head start on detecting the most sophisticated adversaries.
Today, we’re peeling back the curtain on the new family of self-learning AI models that generates Automated Leads, and announcing a powerful new capability: instantly isolating unusual processes and anomalous remote monitoring and management (RMM) tool usage that would otherwise be lost in the noise.
The Challenge: Why More Alerts Isn’t the Answer
Improving detection is a core driver for the CrowdStrike Advanced Research team, which is behind the development of the AI models powering Automated Leads. For years, the industry has followed a predictable cycle:
- Create a rule for a known malicious feature.
- Deploy it.
- Triage the resulting alerts.
- Tune out the high-volume noise.
The consequence? “Noisy” rules, which might actually trigger on real malicious activity, are suppressed because they produce too many alerts for human triage. Malicious activity can slip through the cracks.
On the Falcon platform, we see millions of indicators: events that don’t quite reach the threshold of a traditional detection. In a complex environment, we might see 10,000 such indicators in a single hour. They are far too numerous for a human to review, but with the right algorithmic approach, they are the key to finding the needle in the haystack.
How Automated Leads Works: Scoring and Correlation
The AI engine powering Automated Leads solves this by shifting the focus from individual alerts to entity-based scoring. Instead of treating every event as a binary “good” or “bad” alert, the engine assigns a score to every indicator and detection event. These scores are essentially an initial prioritization. The engine then links these events by entity (such as an endpoint).
When multiple positively scoring events occur on the same host, their scores are summed. This is best explained by visualizing how the engine views indicator occurrences over time:



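The scoring-and-summing logic described above can be sketched in a few lines. This is an illustrative simplification, not CrowdStrike’s actual model: the indicator names, scores, and the `LEAD_THRESHOLD` cutoff are all hypothetical, and the real engine assigns scores with AI models rather than fixed values.

```python
from collections import defaultdict

# Hypothetical indicator events: (entity, indicator_name, score).
# In practice, scores would come from the AI models' initial prioritization.
events = [
    ("host-01", "unusual_process", 0.4),
    ("host-01", "rmm_tool_usage", 0.5),
    ("host-02", "unusual_process", 0.3),
    ("host-01", "suspicious_network", 0.3),
]

LEAD_THRESHOLD = 1.0  # assumed cutoff for surfacing an Automated Lead

def score_entities(events):
    """Link events by entity (e.g., an endpoint) and sum their scores."""
    totals = defaultdict(float)
    for entity, _name, score in events:
        if score > 0:  # only positively scoring events accumulate
            totals[entity] += score
    return dict(totals)

def leads(events, threshold=LEAD_THRESHOLD):
    """Entities whose summed score crosses the threshold become leads."""
    return [entity for entity, total in score_entities(events).items()
            if total >= threshold]

print(leads(events))  # host-01 accumulates 1.2 and surfaces; host-02 does not
```

Note the key shift this models: no single event on `host-01` would have crossed the threshold on its own, but correlating them by entity lets their combined score surface the host for investigation.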