Security Operations

Risk Scoring and Vulnerability Prioritisation

8 min read·Updated 2026-04-26
TL;DR

CVSS scores a vulnerability in isolation, which means it tells you almost nothing about whether you should fix it today, next week, or never. Real prioritisation combines exploitability signals (EPSS, CISA KEV), asset context (where the thing lives, what it touches), and exposure (is it actually reachable). The teams that get this right fix the small set of issues that attackers are actually using and stop chasing the long tail of theoretical 9.8s.

What it is

Vulnerability prioritisation is the process of deciding which findings get fixed first, which get fixed eventually, and which get accepted as residual risk. It sounds straightforward. It is not, because the inputs to that decision are messy and most organisations rely on a single number (the CVSS score) that was never designed to answer the question.

A modern prioritisation approach combines multiple signals: how severe is the vulnerability in theory, how likely is it to be exploited in the real world, is it being exploited right now, where does the affected asset sit, and what would actually happen if it were compromised. The composite is what produces a useful priority. Any single signal in isolation produces noise.

Why it matters

Most enterprise vulnerability scanners produce tens of thousands of findings in any given month. A CVSS-only prioritisation typically marks several thousand of those as Critical or High. No security team has the capacity to fix several thousand things, so what gets fixed becomes whichever findings happen to land in front of the right person at the right time. That is not a programme. That is theatre.

The cost of bad prioritisation is concrete:

  • Real exposures stay open. A CVSS 6.5 on the customer database server can be more dangerous than a CVSS 9.8 on an isolated test box, but generic prioritisation puts the 9.8 first.
  • Patching teams burn out. Asking a team to fix a thousand Highs every month produces missed SLAs, gaming of the metrics, and eventually an adversarial relationship between security and infrastructure.
  • Audit conversations get harder. "We fix all Criticals in 30 days" is easy to claim and impossible to do, so the answer in the audit is either a lie or a backlog of overdue items.
  • The wrong incidents happen. Real intrusions almost always trace back to a known vulnerability that was deprioritised because the score did not look bad enough in isolation.

The point is not that CVSS is useless. CVSS is a reasonable measure of theoretical severity. The point is that severity in isolation is not enough to make a fix decision.

The limits of CVSS

CVSS (currently at version 4.0 with most tools still on 3.1) scores a vulnerability on a 0 to 10 scale based on attributes of the vulnerability itself: attack vector, complexity, privileges required, user interaction, scope, and impact on confidentiality, integrity, and availability.

CVSS has structural problems for prioritisation:

  • It scores in isolation. CVSS knows nothing about whether your specific server is internet-facing, whether the affected service is even running, or whether a WAF is in front of it. A 9.8 on the NVD page might be 4.0 in your environment.
  • It ignores exploitation reality. CVSS does not know whether an exploit exists or whether attackers are using it. A theoretical 9.8 with no working exploit is less urgent than a 7.0 being weaponised in active campaigns.
  • The scale compresses badly. Roughly 60 percent of CVEs score between 7.0 and 9.0. When most things are High, the High designation stops carrying signal.
  • Temporal and Environmental metrics are rarely used. CVSS supports modifiers for exploit maturity and your specific context. Almost nobody applies them, because doing so manually for thousands of CVEs is not feasible.

CVSS 4.0 added more granular threat metrics and clearer environmental scoring. Adoption has been slow, and the underlying problem (severity is not priority) remains.

EPSS as a predictive signal

The Exploit Prediction Scoring System (EPSS) is a probability score, maintained by FIRST, that estimates the likelihood a CVE will be exploited in the next 30 days. Scores range from 0 to 1 (or 0 percent to 100 percent).

EPSS uses observed exploitation activity, exploit code availability, vendor and product information, and a machine learning model trained on historical exploitation data. The output is a daily-updated probability that this specific CVE will see exploitation in the wild in the near term.

What this changes:

  • A CVSS 9.8 with EPSS 0.001 (one in a thousand chance of exploitation) is much less urgent than a CVSS 7.5 with EPSS 0.95.
  • EPSS gives you a meaningful tail. Most CVEs have very low EPSS because most CVEs are never exploited. The high-EPSS subset is small (typically a few percent of all CVEs at any given time) and is the set worth focusing on.
  • EPSS scores update daily. A CVE that scored 0.05 last month might be 0.85 today because a new public exploit dropped. The signal is dynamic in a way CVSS is not.

EPSS is not perfect. It is a prediction, and predictions miss things. But as a filter to add on top of CVSS, it dramatically narrows the urgent list.
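To make the CVSS-plus-EPSS filter concrete, here is a minimal sketch of an urgency classifier. The thresholds (0.5 and 0.1 for EPSS, 7.0 and 9.0 for CVSS) and tier names are illustrative assumptions, not standard values; a real programme would tune them against its own capacity.

```python
def urgency(cvss: float, epss: float) -> str:
    """Classify a finding using both theoretical severity (CVSS)
    and exploitation probability (EPSS). Thresholds are illustrative."""
    if epss >= 0.5:                    # likely to be exploited soon
        return "urgent"
    if epss >= 0.1 and cvss >= 7.0:    # plausible exploitation, severe impact
        return "high"
    if cvss >= 9.0:                    # severe in theory, quiet in practice
        return "moderate"
    return "backlog"

# The comparison from the bullet above: the 7.5 being weaponised
# outranks the theoretical 9.8.
print(urgency(9.8, 0.001))  # moderate
print(urgency(7.5, 0.95))   # urgent
```

Even this crude two-signal filter reproduces the re-ranking described above: the high-probability 7.5 lands in the urgent tier while the near-zero-probability 9.8 drops to moderate.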

CISA KEV as the definitive "actively exploited" list

The CISA Known Exploited Vulnerabilities (KEV) catalogue is maintained by the US Cybersecurity and Infrastructure Security Agency. It lists CVEs that have been confirmed exploited in the wild, with evidence.

KEV is binary: a CVE either is on the list or is not. There is no score. The implicit message is that a KEV-listed CVE has been used by real attackers, the evidence has been validated, and federal agencies are required to remediate within a fixed timeline (typically 14 to 30 days from listing).

What this means in practice:

  • KEV is the most reliable single signal for "is this being exploited right now?"
  • Any KEV-listed CVE on an internet-facing asset should be treated as urgent regardless of its CVSS score.
  • KEV is conservative. It only lists CVEs with confirmed exploitation evidence, which means the list is smaller than the set of actually exploited CVEs. Things on KEV are a high-confidence subset, not the universe.
  • The full KEV catalogue (around 1,200 CVEs as of 2026) is small enough to be a manageable focus for any team.

For most organisations, the rule of thumb is simple: 100 percent of KEV-listed CVEs on internet-facing assets should be remediated in 14 days. Then everything else gets prioritised by composite risk.
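That rule of thumb is easy to automate once KEV IDs are matched against an asset inventory. The sketch below assumes a hypothetical inventory of `(asset, cve, internet_facing)` rows and sample CVE IDs; in practice the KEV IDs would come from CISA's published catalogue feed and the inventory from your CMDB.

```python
from datetime import date, timedelta

# Hypothetical inventory rows: (asset, cve, internet_facing)
inventory = [
    ("web-frontend", "CVE-2023-0001", True),
    ("internal-wiki", "CVE-2023-0001", False),
    ("test-box", "CVE-2023-0002", True),
]

# Sample IDs standing in for the KEV catalogue feed
kev_ids = {"CVE-2023-0001"}

def kev_deadlines(inventory, kev_ids, listed=date(2026, 1, 5)):
    """Flag KEV-listed CVEs. Internet-facing assets get the 14-day
    clock; everything else gets a 30-day clock (illustrative)."""
    flagged = []
    for asset, cve, exposed in inventory:
        if cve in kev_ids:
            days = 14 if exposed else 30
            flagged.append((asset, cve, listed + timedelta(days=days)))
    return flagged

for row in kev_deadlines(inventory, kev_ids):
    print(row)
```

Note that `test-box` never appears in the output: its CVE is not KEV-listed, so it is prioritised by composite risk instead.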

Asset criticality and exposure

The third leg is context. The same CVE on two different assets has different priority because the assets matter differently and have different exposure profiles.

Asset criticality is a property of the asset itself: how important is it to the business?

  • The customer database is critical because compromise affects revenue and obligations.
  • The CI/CD signing infrastructure is critical because compromise could backdoor your products.
  • A developer's laptop with VPN access to production is more critical than the laptop alone suggests, because of what it can reach.
  • An isolated dev box with synthetic data is much less critical than its CVSS scores would suggest.

Exposure is about reachability:

  • Internet-facing. Anyone in the world can attempt the exploit.
  • Internal-only behind authentication. Only authenticated users can attempt the exploit.
  • In a segmented zone. Only specific paths reach the asset, narrowing the attacker pool further.
  • Air-gapped. The vulnerability is essentially academic until something else fails.

Combining criticality and exposure changes prioritisation dramatically. A CVSS 6.5 on a critical, internet-facing system can outrank a CVSS 9.8 on a low-criticality, internally segmented one.
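One simple way to express that re-ranking is to scale the base score by criticality and exposure multipliers. The multiplier values below are assumptions for illustration; the point is the shape of the calculation, not the specific numbers.

```python
# Illustrative multipliers; real weightings are organisation-specific.
CRITICALITY = {"critical": 1.0, "high": 0.7, "low": 0.3}
EXPOSURE = {"internet": 1.0, "internal": 0.6, "segmented": 0.3, "airgap": 0.05}

def contextual_score(cvss: float, criticality: str, exposure: str) -> float:
    """Scale theoretical severity by what the asset is and who can reach it."""
    return cvss * CRITICALITY[criticality] * EXPOSURE[exposure]

# The example from the text: a 6.5 on a critical internet-facing
# system vs a 9.8 on a low-criticality segmented one.
print(contextual_score(6.5, "critical", "internet"))  # 6.5
print(contextual_score(9.8, "low", "segmented"))      # ~0.88
```

With context applied, the 6.5 outranks the 9.8 by a factor of about seven, which matches the intuition most practitioners already have and generic scanner rankings do not.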

Attack path implications

A finding that looks low priority in isolation can become critical when chained with another finding.

Examples:

  • A read-only credential exposure on a dev system that happens to share secrets with production becomes a production exposure.
  • A medium-severity SSRF that can reach the cloud metadata service becomes a path to cloud admin credentials.
  • A low-severity information disclosure that exposes internal hostnames becomes the reconnaissance step that enables a high-severity exploit on a system that should not have been discoverable.

Modern attack path analysis tools (BloodHound for Active Directory, various cloud-specific tools for AWS and Azure) can model these chains. The findings they produce often look unremarkable individually and become critical when viewed as a path. Including attack path context in prioritisation is one of the higher-impact things a mature programme can do.
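The core question attack path tools answer is reachability: can this finding be chained to a critical asset? A toy version of that check is a graph search over an assumed edge list; the node names below are hypothetical and stand in for the kind of graph a tool like BloodHound would produce.

```python
from collections import deque

# Hypothetical reachability edges discovered by an attack path tool
edges = {
    "internet": ["dev-box"],          # low-severity info disclosure
    "dev-box": ["secrets-share"],     # dev system sharing secrets with prod
    "secrets-share": ["prod-db"],     # the critical asset
}

def reaches(graph: dict, start: str, target: str) -> bool:
    """BFS: is there any path from start to target?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A finding on dev-box inherits urgency from what it can reach.
print(reaches(edges, "dev-box", "prod-db"))  # True
```

The re-ranking rule follows directly: if a finding's host can reach a critical asset, the finding takes the priority of the path, not its standalone score.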

A composite scoring approach

Putting the pieces together, a workable composite for prioritisation roughly looks like:

Priority = f(CVSS severity,
             EPSS exploitation probability,
             CISA KEV listed (binary),
             Asset criticality,
             Asset exposure,
             Attack path implications)

The exact weighting varies by organisation, but the typical pattern is:

  • CISA KEV listed and internet-facing. Top priority regardless of other factors. Fix in 14 days.
  • High EPSS and critical asset. Very high priority. Fix in 30 days.
  • High CVSS, high EPSS, low asset criticality. Moderate priority. Fix in 60 to 90 days.
  • High CVSS, low EPSS, low criticality. Backlog. Fix in normal patch cycle (90 plus days).
  • Low CVSS but on attack path to critical asset. Re-rank as moderate or higher depending on path.

This is more nuanced than "fix all Criticals in 30 days" but it is also more honest. It produces a prioritised list that a team can actually execute on.
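The tiering above can be sketched as a small decision function. The tier names, timelines, and thresholds mirror the bullets and are illustrative, not prescriptive; a real implementation would be tuned to the organisation's capacity and risk appetite.

```python
def priority(cvss: float, epss: float, kev: bool, criticality: str,
             internet_facing: bool, on_attack_path: bool = False):
    """Map composite signals onto remediation tiers.
    Thresholds (EPSS 0.5, CVSS 7.0) are illustrative."""
    if kev and internet_facing:
        return ("top", "14 days")
    if epss >= 0.5 and criticality == "critical":
        return ("very high", "30 days")
    if cvss >= 7.0 and epss >= 0.5:
        return ("moderate", "60-90 days")
    if on_attack_path:
        return ("moderate", "60-90 days")  # re-ranked upward by path context
    return ("backlog", "90+ days")

# A theoretical 9.8 with no exploitation signal lands in the backlog:
print(priority(9.8, 0.01, kev=False, criticality="low",
               internet_facing=False))  # ('backlog', '90+ days')
```

Note the last branch before the backlog: a low-CVSS finding on an attack path to a critical asset is pulled forward, exactly the case a CVSS-only ranking misses.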

Why one-size-fits-all SLAs fail

The classic VM SLA goes: Critical in 14 days, High in 30, Medium in 90, Low in 180. Universally adopted and almost universally missed. The reasons:

  • Volume. The number of Criticals exceeds capacity by roughly an order of magnitude in most enterprises.
  • Lack of context. Some Criticals on isolated systems do not need 14-day remediation. Some Mediums on critical systems need a 7-day turnaround.
  • Patching constraints. Industrial control, embedded systems, and legacy applications often have valid reasons for longer cycles.
  • Compensating controls. A WAF rule or segmentation change can reduce risk faster than a patch.

The better pattern is risk-based SLAs: KEV-listed and internet-facing in 14 days, active exploitation evidence on critical assets in 30, theoretical High on critical systems in 60 (with compensating controls if available), everything else on normal patch cycle.

Best practices

  • Stop using CVSS as the only signal. It tells you severity. It does not tell you priority.
  • Subscribe to CISA KEV updates. New entries should trigger an automatic check against your asset inventory.
  • Pull EPSS daily. Scores change and the dynamic signal is most of the value.
  • Build asset criticality into your CMDB. Without it, prioritisation reverts to CVSS by default.
  • Tag exposure on every asset. Internet-facing, internal authenticated, segmented, air-gapped. Without this you cannot factor reachability into priority.
  • Run attack path analysis at least quarterly. The tools to do this are mature, especially for Active Directory and major cloud platforms.
  • Set risk-based SLAs. Tie SLA to composite risk, not raw CVSS.
  • Track what attackers actually used. When an incident happens, look at the entry vector. If it was something your scoring deprioritised, your scoring needs to change.
  • Validate compensating controls before relying on them. A WAF rule that nobody tested is not a control. It is hope.

The teams that do this well end up with a much smaller list of urgent items and a much higher rate of fixing the things that matter. The teams that do not end up perpetually behind on a list that nobody can complete.

ScruteX scores your exposures using CVSS, EPSS, KEV, asset context, and active threat intelligence, so your team fixes what attackers actually use first.
