SANS
[Figure: Respondent demographics. Organization size bands range from Small/Medium (1,001–5,000) through Medium (5,001–15,000) and Medium/Large (15,001–50,000) to Large (more than 50,000); top industries include Government, Banking and finance, and Cybersecurity. Each symbol represents 10 respondents. Operations (Ops) respondents far outnumber headquarters (HQ) and SOC manager or director respondents.]
What is your estimated annual budget for new hardware, software licensing and support, human capital, and any additional costs?

[Figure: Estimated annual SOC budget. The most common answer was "Unknown" (38.2%); the remaining responses spread across bands from "Less than $100,000" to "Greater than $48 million," each at 10.4% or below.]
…update the answer options in 2024, because "anomalous activity" doesn't get to the heart of the question we asked: "How does the SOC know there's a problem?"

[Figure 5. Response Triggers. Top responses include "Searching our SIEM" (248) and "Alerts and reports from our third-party intelligence providers" (216).]
We hope you've been reading the SOC Survey since it was first created in 2017. Since you might not remember the charts from last year, let's look at a few things that changed from previous years.

The first one we'll explore is a big one: "cloud based" now exceeds "single central" SOC as the most common architecture. The trend of moving to the cloud has been observed in IT for years and is now embedded in SOC architecture. This represents an increase from 2023, when the percentage was 29% of 600 answers; this year it's 38% of 403 answers for the same question. We didn't ask the question prior to 2023.

[Figure: What is the primary approach you use to decide what data to ingest into your SOC? "Everything goes in SIEM/syslog" (152); "Only high-priority systems (selection based on system type)" (40); "Highly selective use case/detection engineering" (38); "Unknown/Unsure" (30); "Data adjustment applied prior to ingesting data to apply selection criteria" (25); "Other" (8).]
Single Central Architecture, Greater Ratio

There are a few ways to build out your SOC.

[Figure: Single, central SOC (242); Multiple, hierarchical SOCs (59).]
From 2023 to 2024, the percentages of full or partial production or in the midst of implementing didn't change much. But look at the drop in planned implementations: from 2023, when 21% said it was planned, to 2024, when 11% said it is planned. From this picture it looks like the people who were going to do it have already done it, and the rest have decided to pass.

[Figure: Analysis: AI or machine learning, 2024 vs. 2023.]
In 2024, 34% indicated "We're not using any TLS interception to see inside HTTPS or other encrypted communications," whereas in 2023 only 25% indicated the same. In 2023, 38% indicated "We have TLS intercept implemented; some categories of websites are excluded from intercept due to company policy and/or user privacy considerations." In 2024 that percentage dropped to 34%.

SOCs are losing visibility into the traffic leaving the network, which likely means more reliance on endpoint protection tools.
Average Tenure Increasing

Staffing is always a concern for the SOC. It takes skilled analysts to perform well, and turnover drives recurring recruiting and retraining. See Figure 10 depicting this inflection. We'll keep an eye on it for 2025.

[Figure 10. Employment Duration: "What is the average employment duration for an employee in your SOC environment (how quickly does staff turn over)?"]

Retention

What has been compelling people to stay? The survey asks how to retain employees. We don't cover macro-economic conditions, but those could also play a factor. See Figure 11 to see that meaningful work took the top spot this year, but the reported differences have narrowed.

[Figure 11. "What is the most effective method you have found to retain employees?" Trends for 2022–2024; series: Money, Career progression, Meaningful work.]
…technology based on the top of each category. Let's take email filtering: it is so necessary that email would be unusable without it. Plus, it would likely be criminally negligent to run an email server with no filtering in place. Or maybe criminally profitable, but offering bulletproof hosting and no-trace mail servers is the other side of the cyber industry.

The Production (partial systems) top technology is Analysis: Threat hunting, with 61 responses. This is aligned with the aforementioned increase in threat hunting being driven by third-party provided hunting tools. This is easy to deploy into production, but a challenge to accomplish full coverage because of visibility issues. These issues may stem from inadequate authorization or mandate. But it may also simply be a challenge to provide effective hunting across all systems. It's trivial to say, "Go look for a hash on a computer." Doing so across tens of thousands of globally deployed systems on commodity internet with varying bandwidth becomes a substantial challenge.

[Figure: Technology grades (GPA): Host: User behavior and entity monitoring (2.60); Net: Network Detection and Response (NDR) (2.48); Analysis: Threat intelligence (open source, vendor provided) (2.47); Net: SSL/TLS traffic inspection (2.45); Analysis: External threat intelligence (for online precursors) (2.44); Net: Web proxy (2.42); Host: Data loss prevention (2.41); Analysis: Threat hunting (2.41); Host: Application whitelisting (2.39); Analysis: Digital asset risk analysis and assessment (2.35); Analysis: E-discovery (support legal requests for specific information collection) (2.33); Analysis: Threat intelligence platform (TIP) (2.32); Analysis: SOAR (Security Orchestration, Automation, Response) (2.23); Net: Packet analysis (other than full PCAP) (2.21); Net: NetFlow analysis (2.17); Analysis: Frequency analysis for network connections (2.15); Net: Full packet capture (2.13); Net: Malware detonation device (inline malware destruction) (2.11); Net: Deception technologies such as honey potting (2.10); Analysis: AI or machine learning (1.99); Analysis: AI or machine learning - Generative (GPT) (1.79).]
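The scale problem described above starts from a trivially simple core. A minimal hunting sketch that checks files against an IOC hash list might look like the following; the IOC set, paths, and function names are illustrative, not from the survey:

```python
import hashlib
from pathlib import Path

# Hypothetical IOC list: SHA-256 hashes of known-bad files.
# (The single entry below is the well-known hash of an empty file.)
IOC_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in fixed-size chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def hunt(root: Path) -> list[Path]:
    """Return files under `root` whose hash matches an IOC."""
    hits = []
    for p in root.rglob("*"):
        if p.is_file():
            try:
                if sha256_of(p) in IOC_HASHES:
                    hits.append(p)
            except OSError:
                continue  # unreadable file; a real hunt would log this
    return hits
```

Scaling this loop from one machine to tens of thousands of globally deployed systems is exactly where the bandwidth, authorization, and visibility issues noted above come in; deployed hunting tools effectively run it everywhere and report back.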
…most accurately represents your method of correlating assets to responsible system owner or user for servers and user endpoints in your environment." Figure 17 shows that the most common method is mostly automated, augmented by manual efforts. A surprising number have a manual effort each time. A surprising number have integration with physical badging systems and into the SIEM!

[Figure 17. Asset correlation methods (counts per asset class): "Through full integration between our physical badging system, authentication system, and our SIEM/workflow tool" (39, 25, 17); "Fully automated through our user authentication system (such as Active Directory, IPAM), which is fully integrated into our SIEM/monitoring workflow tool" (56, 52, 53); "Mostly automated, but must fall back to manual log inspection and correlation sometimes" (78, 68, 66); "Manual effort each time (manually looking up IP addresses, comparing against directories, privileged user access logs, etc.)" (64, 39, 49).]
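As a sketch of the difference between the "fully automated" and "fall back to manual" answers in Figure 17, consider a toy correlation lookup; the records, names, and sentinel value are hypothetical:

```python
# Hypothetical exports from an IPAM system and a user directory.
ipam = {"10.0.4.17": "WKS-0042", "10.0.4.18": "SRV-DB01"}   # IP -> hostname
directory = {"WKS-0042": "jdoe", "SRV-DB01": "dba-team"}    # hostname -> owner

def owner_of(ip: str) -> str:
    """Resolve an IP seen in an alert to a responsible owner.

    Returns the sentinel 'MANUAL-LOOKUP' when automation falls short,
    mirroring the manual fall-back answers in Figure 17.
    """
    host = ipam.get(ip)
    if host is None:
        return "MANUAL-LOOKUP"
    return directory.get(host, "MANUAL-LOOKUP")
```

In practice the gaps (DHCP churn, stale directory entries, unmanaged assets) are what push SOCs from the "fully automated" answer down to the "mostly automated" one.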
…and the lowest was 378 total (purple teaming). Basically, everyone answering performs all the capabilities in some way. For the lowest-count capability, only 25 of the people, or about 6%, don't perform it. To illustrate this, look at Figure 18.

[Figure 18. SOC capabilities performed: Digital forensics (390); Security road map and planning (390); Threat research (390); Threat hunting (389); Compliance support (388); Security tool configuration, integration, and deployment (388); Data protection (385); Red-teaming (380); Purple-teaming (378).]
What model(s) are you using to determine what capabilities your SOC needs? Select all that apply.

[Figure 21. Capability Basis: NIST-CSF (273), MITRE ATT&CK (274), SOC-CMM (98), SOC-Class (27), Other (48).]

What activities are part of your SOC operations? What activities have you outsourced, either totally or in part, to outside services through a managed security service provider (MSSP) or as a result of hosting in the cloud?

[Figure: Outsourced or both: Pen-testing (276); Red-teaming (237); Purple-teaming (214); Digital forensics (199); Threat research (184); Alerting (triage and escalation) (181); Incident response (171); Security monitoring and detection (170); SOC maturity self-assessment (161); Threat hunting (155); Vulnerability assessments (150); SOC architecture and engineering (specific to the systems running your SOC) (129); Security tool configuration, integration, and deployment (127); Remediation (116); Compliance support (110); Data protection (94); Security architecture and engineering (of systems in your environment) (78); Security road map and planning (68); Security administration (62).]
Most SOCs are 24x7. Only 20% of 402 answered "No" to Q3.24, "Does your SOC operate 24/7?" Of the 314 operating 24x7, 36% are in-house only, 16% are outsourced only, and 26% are mixed internal and outsourced. Of these 314, 49% indicated there's a "follow the sun" model in place.
• 76% of 403 responses to Q3.26 indicated SOC staff can work remotely.
• Regarding the IT/OT split, 68% of the 397 answering Q3.30 acknowledged there was some OT component. Of these, 10% said separate monitoring systems were used with the same staff, 29% said monitoring is done separately, and 30% said it is done together with IT resources.
SOC Staff

Section Summary: Staff with analytical skills on EDR and vulnerability remediation are in demand; workload calculation per analyst is typically based on historical ticketing or SIEM data.

We mentioned earlier that the most popular SOC staff size is a consistent 2-10. So, let's dig into some other details on staff. The overall top three most important technologies for new hires to be familiar with are SIEM for analysis, host-based EXDR, and vulnerability remediation. See Table 1.

Table 1. Top SOC Skills
  Analysis: SIEM (security information and event manager) (138)
  Host: Endpoint or extended detection and response (98)
  Host: Vulnerability remediation (73)
Most SOCs are trying to figure out what the right workload is per analyst. So, we asked the hard question: "How do you calculate per-analyst workload?" Figure 22 shows that most people use the ticket data for start and stop time on a ticket. While this can have some error if ticket opening and closure isn't done consistently between analysts, it's a good approximation of level of effort.

[Figure 22. SOC Workload. "Select the best description of how you calculate per-analyst workload": "We base it on the ticketing data when an analyst starts and closes a ticket" (40.1%); "We use SIEM data to calculate how many alerts are present and indicate how much time an analyst has to work each ticket" (29.8%); "Our service level agreements dictate how quickly we must review content, and we allocate that much time per analyst per shift to make a decision" (14.6%); Other (15.4%).]

Presumably there are further calculations to gauge busy time, optimize for expensive work, and look for per-analyst discrepancies to address skills, knowledge, and training deficiencies as well as varying performance levels. Or, probably, not all of that. "Other" answers aren't worth a full word cloud. They are primarily "we don't do this" type answers, "outsourced, it's the MSP's problem" type answers, and a variety of SIEM and other variations or nuanced tuning on the offered answers.
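The ticket-based approach most respondents reported can be sketched as a simple aggregation over ticket open/close timestamps; the ticket rows and function name below are illustrative:

```python
from datetime import datetime

# Hypothetical ticket export: (analyst, opened, closed) timestamps.
tickets = [
    ("alice", "2024-05-01T09:00", "2024-05-01T09:45"),
    ("alice", "2024-05-01T10:00", "2024-05-01T10:30"),
    ("bob",   "2024-05-01T09:15", "2024-05-01T11:15"),
]

def hours_per_analyst(rows):
    """Sum open-to-close durations per analyst, in hours.

    This inherits the error the text mentions: it trusts that analysts
    open and close tickets consistently.
    """
    totals: dict[str, float] = {}
    for analyst, opened, closed in rows:
        start = datetime.fromisoformat(opened)
        end = datetime.fromisoformat(closed)
        totals[analyst] = totals.get(analyst, 0.0) + (end - start).total_seconds() / 3600
    return totals
```

With the sample rows above, `hours_per_analyst(tickets)` yields 1.25 hours for alice and 2.0 for bob; the per-analyst discrepancies the text speculates about would be spotted by comparing such totals across shifts.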
Threat intelligence is supposed to be used to gain tactical and strategic advantage over the threats to our environment. In Q3.21 we asked a "check all that apply" type question on how threat intelligence is being used; the top response, with 194 affirmations, was "Incident Response," followed closely by "Threat Hunting" with 191 responses out of 276 respondents to this question.
We also wondered about the analysis work in threat intelligence, since there are no clear parameters of accuracy and the data pieces can fit together in multiple seemingly meaningful ways, like a mosaic. The top "used frequently" method for CTI analysis was "Intuitive or experience-based judgement," with 152 responses out of 263 answers. Threat modeling was the top "used occasionally" method, with 123 of 163 answers.
Time-based metrics are great when paired with quality metrics. We have the list ranked based on total in Figure 23.

[Figure 24. Metrics to Constituents, ranked by total: Avoidability of incident (could the incident have been avoided with common security practices in place?) (87, 59, 54); Threat actor attribution (using threat intelligence) (85, 60, 53); Number of incidents closed in one shift (84, 54, 55); Threat actor attribution (using threat intelligence) (73, 35, 33, 40); Losses accrued vs. losses prevented (60, 38, 25, 39); Monetary cost per incident (58, 34, 28, 38).]

Conclusion

…common than it has been in the past.

Changes from past years: a single, central SOC is more common;
vendor-tool-based threat hunting is more common; fewer SOCs report planning to deploy AI/ML; respondents graded AI/ML lower than last year; TLS inspection is decreasing; employee tenure is increasing; and career progression is more important for retention.
Similar to past years, the internal SOC is mandatory to use and the NOC/SOC are not
integrated but coordinate.
The SOC's budget isn't known to most survey respondents. Metrics are provided by 67% of respondents, and the most common metric is number of incidents handled.
Capabilities of the SOC are very consistent across almost all respondents. Frequently
outsourced items are pen-testing, forensics, threat-intel, and alert triage.
The most commonly reported SOC size is 2–10 people. The highest-cited barrier is lack of automation. EDR/XDR is the most common initial indication of a problem. Most SOCs are 24x7, about half follow the sun, and most allow work from home. 68% have some OT component to monitor, with about equal portions monitoring IT/OT separately and converged. Threat intel is used for incident response and hunting, which is commonly based on intuition.
Forty-seven listed technologies were graded; EXDR is still the top-GPA technology, and AI/ML is the lowest. Respondents are most satisfied with endpoint-based incident response capability, but visibility and asset correlation continue to be a challenge.