
CTEM Explained

Updated 2026-04-20
TL;DR

CTEM (Continuous Threat Exposure Management) is a framework for finding, prioritising, and reducing the security exposures that actually matter to your business. It came from Gartner in 2022 and has become the de facto vocabulary for modern exposure management. The key shift it asks for is moving from "what vulnerabilities do we have" to "what could an attacker actually do, given everything we have."

Where CTEM came from

Gartner published the CTEM framework in 2022 as a response to a problem most security teams already knew about. Vulnerability management programmes were producing more findings than anyone could realistically fix, and the prioritisation criteria most teams used (CVSS scores) had little correlation with actual attacker behaviour.

The framing Gartner chose deliberately moved away from the language of vulnerability management. Instead of "vulnerabilities", they used "exposures". Instead of "scanning", they used "discovery". Instead of "patching", they used "mobilisation". The point was not to invent new words for the sake of it. It was to break the assumption that a vulnerability scanner output was the right place to start thinking about security exposure.

In the years since, CTEM has gone from a Gartner concept to a category that vendors compete in, an analyst conversation that buyers expect to have, and a programme structure that mature security teams use to organise their exposure reduction work. By 2025 it was a standard fixture in CISO conversations.

The five stages

CTEM is a continuous cycle with five stages. Each one is a different question.

Scoping (what do we care about?)

Before you can manage exposure, you need to define what you are managing. Scoping answers "which assets, business processes, and threats are in scope for this cycle?"

This is not the same as just listing every server you own. Scope is about prioritisation. The question is: which parts of the business, if compromised, would cause meaningful harm? An internal documentation wiki that contains no PII probably matters less than the customer-facing payment system. A test environment that has no production data matters less than the production environment.

Most organisations get this wrong by trying to scope everything at once. Better practice is to scope around a specific business process or asset class for each cycle, then iterate.
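A cycle scope can be written down as a small, explicit record rather than an implied "everything". The sketch below is illustrative only; the field names (`business_process`, `asset_tags`, `threats`) are assumptions, not part of any CTEM standard.

```python
from dataclasses import dataclass, field

@dataclass
class CycleScope:
    """One CTEM cycle's scope: a single business process, not the whole estate."""
    business_process: str
    asset_tags: list[str]                       # how in-scope assets are identified
    threats: list[str]                          # threat scenarios this cycle considers
    out_of_scope: list[str] = field(default_factory=list)

scope = CycleScope(
    business_process="customer payments",
    asset_tags=["env:prod", "service:payments"],
    threats=["internet-facing RCE", "credential theft"],
    out_of_scope=["test environments"],
)
print(scope.business_process)  # customer payments
```

Writing the scope down like this also makes the out-of-scope list explicit, which keeps the next cycle's expansion deliberate rather than accidental.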

Discovery (what do we have, and what does it look like?)

Once scope is defined, discovery finds everything in that scope. This includes:

  • Internal infrastructure the security team already knows about
  • Shadow IT that someone in a business unit set up without telling anyone
  • Cloud accounts and SaaS services that may have escaped formal asset management
  • External attack surface (the topic of its own shelf elsewhere in this knowledge base)
  • Software supply chain dependencies, including the ones in your own builds

Discovery is harder than it sounds. Most enterprises have no single source of truth for what they own. CTEM accepts this and asks for ongoing discovery, not a one-off audit.
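Because there is no single source of truth, discovery in practice means merging records from several partial inventories (CMDB, cloud APIs, external scans) and keeping track of which source saw what. A minimal sketch, assuming each source yields dicts with a `hostname` field; the source names and fields are hypothetical:

```python
def merge_inventories(*sources: list[dict]) -> dict[str, dict]:
    """Union asset records from multiple discovery sources, keyed by hostname.
    Later sources enrich earlier records rather than overwriting them."""
    assets: dict[str, dict] = {}
    for source in sources:
        for record in source:
            key = record["hostname"].lower()
            merged = assets.setdefault(key, {"hostname": key, "seen_in": []})
            merged["seen_in"].append(record.get("source", "unknown"))
            for fieldname, value in record.items():
                merged.setdefault(fieldname, value)   # keep the first value seen
    return assets

cmdb  = [{"hostname": "pay-01", "owner": "payments", "source": "cmdb"}]
cloud = [{"hostname": "PAY-01", "region": "eu-west-1", "source": "cloud-api"},
         {"hostname": "shadow-db", "source": "cloud-api"}]      # shadow IT surfaces here
merged = merge_inventories(cmdb, cloud)
print(sorted(merged))                   # ['pay-01', 'shadow-db']
print(merged["pay-01"]["seen_in"])      # ['cmdb', 'cloud-api']
```

An asset that appears in the cloud API but not the CMDB (like `shadow-db` above) is exactly the kind of gap ongoing discovery is meant to surface.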

Prioritisation (what actually matters?)

This is where CTEM differs most sharply from traditional vulnerability management.

A traditional VM programme prioritises by CVSS score. CVSS measures the theoretical severity of a vulnerability in isolation. It does not know whether the affected asset is internet-facing, whether the vulnerability is being actively exploited in the wild, whether your specific configuration is exposed, or whether an attacker who exploited it could actually reach anything important.

CTEM prioritises differently. The right question is: "If an attacker exploited this, what would happen?" That depends on:

  • Exploitability. Is there a working exploit? Is it being used? (CISA's KEV catalogue, EPSS scores, and threat intel feeds answer this.)
  • Exposure. Is the asset reachable? From where? By whom?
  • Asset criticality. Does this asset matter to the business?
  • Compensating controls. Is there a WAF, segmentation, or other control already mitigating it?
  • Attack path implications. Could this exposure chain to something more serious?

The honest version of this is: most CVEs do not need to be fixed urgently, and a small number of low-CVSS findings in the right place are critical. CTEM gives you a structure to act on that.
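The factors above can be combined into a single ranking score. The sketch below is one illustrative way to do it; the weights and field names are assumptions for demonstration, not a published scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    epss: float             # probability of exploitation in the wild (0..1)
    in_kev: bool            # listed in CISA's KEV catalogue
    internet_facing: bool
    asset_criticality: int  # 1 (low) .. 5 (business-critical)
    mitigated: bool         # a compensating control (WAF, segmentation) applies

def exposure_score(f: Finding) -> float:
    """Blend exploitability, exposure, and business context into one score.
    The weights are illustrative, not a standard."""
    score = f.epss * 10                   # exploit likelihood dominates
    if f.in_kev:
        score += 5                        # confirmed in-the-wild exploitation
    if f.internet_facing:
        score *= 1.5                      # reachable by anyone
    score *= f.asset_criticality / 3      # scale by business impact
    if f.mitigated:
        score *= 0.3                      # compensating control reduces urgency
    return round(score, 2)

findings = [
    Finding("CVE-2024-0001", epss=0.02, in_kev=False, internet_facing=False,
            asset_criticality=2, mitigated=False),
    Finding("CVE-2024-0002", epss=0.90, in_kev=True, internet_facing=True,
            asset_criticality=5, mitigated=False),
]
ranked = sorted(findings, key=exposure_score, reverse=True)
print([f.cve_id for f in ranked])  # the KEV-listed, internet-facing finding ranks first
```

Note that CVSS does not appear in the score at all; a model like this can still use CVSS as a tiebreaker, but exploitability and context drive the ordering.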

Validation (would the prioritisation hold up against a real attacker?)

Validation is the stage that distinguishes CTEM from earlier approaches.

The idea is that prioritisation on paper does not equal exposure in practice. A finding that looks bad might be mitigated by a control you did not know about. A finding that looks low-priority might chain into a serious attack path you missed.

Validation tests this empirically. Common methods include:

  • Breach and attack simulation (BAS). Automated platforms that run real attacker techniques against your environment to see what works.
  • Red team exercises. Adversary-style engagements that test the full kill chain.
  • Continuous automated red teaming (CART). A newer category that combines BAS-style continuous testing with red-team-style attack chaining.
  • Threat-led penetration testing (TLPT). Regulator-mandated in some sectors. Frameworks like TIBER-EU, iCAST, and CBEST fall into this category.
  • Purple teaming. Defenders and attackers working together to validate detection and response.

Whatever the method, the goal is the same: confirm that your prioritisation reflects what an attacker would actually do, and find the gaps that paper-based scoring missed.
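The comparison between paper prioritisation and validation results can itself be automated. A minimal sketch, assuming findings carry a paper `priority` and a boolean `validated_exploitable` result from a BAS run or red team exercise (both field names are hypothetical):

```python
def divergences(findings: list[dict]) -> list[dict]:
    """Flag findings where validation contradicted the paper prioritisation:
    high-priority findings the test could not exploit, and low-priority
    findings it could."""
    flagged = []
    for f in findings:
        paper_high = f["priority"] in ("critical", "high")
        if paper_high != f["validated_exploitable"]:
            flagged.append(f)
    return flagged

results = [
    {"id": "F-1", "priority": "critical", "validated_exploitable": False},  # a control blocked it
    {"id": "F-2", "priority": "low",      "validated_exploitable": True},   # chained into an attack path
    {"id": "F-3", "priority": "high",     "validated_exploitable": True},   # prioritisation held up
]
print([f["id"] for f in divergences(results)])  # ['F-1', 'F-2']
```

Both kinds of divergence are useful: the first frees up remediation effort, the second catches the low-CVSS-but-critical findings that paper scoring misses.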

Mobilisation (now actually fix things)

The last stage is the one CTEM gets credit for naming honestly. Most security frameworks treat the actual fixing as someone else's problem. CTEM treats it as a first-class part of the cycle.

Mobilisation includes:

  • Remediation tracking with SLAs. Specific findings get assigned to specific owners with specific timelines.
  • Compensating controls. Where a fix is slow or impossible, what else can reduce the risk?
  • Communication. Translating technical findings into business-relevant language for executives and asset owners.
  • Closure verification. Re-testing to confirm a fix actually fixed the thing.

The reason this stage exists is that traditional VM programmes routinely produce reports nobody acts on. Mobilisation is the part where the security work meets the rest of the organisation.
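SLA tracking is the most mechanical part of mobilisation and is easy to sketch. The SLA windows below are illustrative policy values, not a standard, and the field names are assumptions:

```python
from datetime import date, timedelta

# Illustrative SLA windows per priority; real values are a policy decision.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def overdue(findings: list[dict], today: date) -> list[dict]:
    """Return findings whose remediation deadline has passed without
    closure verification (a re-test confirming the fix)."""
    late = []
    for f in findings:
        deadline = f["assigned"] + timedelta(days=SLA_DAYS[f["priority"]])
        if today > deadline and not f.get("closure_verified", False):
            late.append({**f, "deadline": deadline})
    return late

findings = [
    {"id": "F-1", "priority": "critical", "owner": "payments-infra",
     "assigned": date(2026, 4, 1)},
    {"id": "F-2", "priority": "medium", "owner": "it-ops",
     "assigned": date(2026, 4, 1), "closure_verified": True},
]
print([f["id"] for f in overdue(findings, today=date(2026, 4, 20))])  # ['F-1']
```

The `closure_verified` flag is the important part: a finding marked "fixed" by its owner stays open until a re-test confirms it, which is what separates mobilisation from report-and-forget.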

How CTEM differs from what came before

It is worth being concrete about what changes when an organisation moves from traditional VM to CTEM.

| Traditional VM | CTEM |
| --- | --- |
| Vulnerability-centric (CVE list) | Exposure-centric (attack paths) |
| Quarterly or monthly scan cadence | Continuous |
| CVSS-based prioritisation | Exploitability and business context |
| Output is a vulnerability report | Output is a remediation plan with owners |
| Validation via annual pen test | Continuous validation (BAS, CART, purple teaming) |
| Scope is "all known assets" | Scope is "this business process and these threats" |
| Fix-everything mindset (impossible) | Fix-what-matters mindset (achievable) |

This is a simplification, of course. Mature VM programmes do many of the right-side things already. The CTEM framework gives them a vocabulary and structure that the rest of the organisation can engage with.

Where it tends to break down

CTEM is not magic. The places it tends to fail in real organisations are:

  • Scoping at the wrong level. Trying to scope the entire estate at once turns into the same firehose VM produced. Scope smaller.
  • Prioritisation that still defaults to CVSS. Without active threat intelligence and asset context, prioritisation reverts to CVSS by default. Investing in context matters.
  • Validation that is run-once. A red team exercise once a year does not provide continuous validation. The intent of CTEM is ongoing, not annual.
  • Mobilisation that gets blocked by other teams. Security can identify and prioritise, but actually fixing things requires cooperation from infrastructure, development, and business owners. CTEM works only if those relationships do.
  • Tooling-first thinking. Buying a tool and calling it CTEM does not implement CTEM. The framework is about how you work, not which products you use.

Where to start

If you are a security team being asked to "implement CTEM", the realistic starting point is:

  1. Pick one business process or asset class. Not the whole estate. Maybe customer-facing payment systems. Maybe the executive team's communications. Maybe a specific cloud account.
  2. Run the cycle once on that scope. Discover what is in scope. Prioritise based on real exposure, not just CVSS. Validate the priorities against a real test (BAS, a focused red team exercise, or even a structured tabletop). Mobilise on the top findings.
  3. Capture lessons learned. What worked? What did not? Where did the prioritisation diverge from the validation?
  4. Expand scope on the next cycle. Either go deeper on the same area or pick the next priority.

This is incremental and slower than buying a "CTEM platform" and declaring victory. It is also the only approach that consistently produces results.
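The four steps above amount to one pass through the cycle on a narrow scope. A sketch of that orchestration, where every stage function is a hypothetical stand-in for real tooling or a manual process:

```python
# Hypothetical stage functions; each would wrap real tooling in practice.
def run_ctem_cycle(scope, discover, prioritise, validate, mobilise):
    """One pass through the five stages for a single, narrow scope."""
    assets = discover(scope)              # discovery: everything in scope
    ranked = prioritise(assets)           # prioritisation: exposure, not CVSS alone
    confirmed = validate(ranked[:10])     # validation: test only the top findings
    mobilise(confirmed)                   # mobilisation: owners, SLAs, re-test
    return {"assets": len(assets), "validated": len(confirmed)}

summary = run_ctem_cycle(
    scope={"process": "customer payments"},
    discover=lambda s: ["pay-01", "pay-02", "pay-db"],
    prioritise=lambda a: a,
    validate=lambda top: [f for f in top if f != "pay-02"],   # one finding was mitigated
    mobilise=lambda c: None,
)
print(summary)  # {'assets': 3, 'validated': 2}
```

The structure matters more than the code: each stage consumes the previous stage's output, and the cycle returns a small summary you can compare across iterations when you capture lessons learned.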

A note on the term itself

You will hear "CTEM" used in three different ways in conversation:

  • The framework (Gartner's five-stage cycle described above)
  • A category of tools (vendors who position around exposure management)
  • A programme (an organisation's actual implementation)

These are related but not the same. A vendor selling "CTEM" sells a tool. An organisation doing CTEM is running a programme. The framework is what binds them together.

If you are evaluating tools, ask which of the five stages they actually support and which require something else. Most tools cover discovery and prioritisation well. Validation and mobilisation are where the gaps tend to be.

ScruteX is a CTEM platform built around the five stages. See how it works in practice.
