
Cloud Security Misconfigurations

8 min read·Updated 2026-04-26
TL;DR

Cloud misconfigurations are settings that turn a normally safe service into an exposed one, usually with a single click or one line of Terraform. They are not bugs in the cloud provider's code. They are choices made by the people building on top. Almost every major public cloud breach of the past decade traces back to one, from Capital One in 2019 to the ICE biometric data leak in 2024.

What it is

A cloud misconfiguration is a setting on a cloud resource that creates exposure or risk where none was needed. The classic shape is a storage bucket whose access policy reads "public" when it should read "internal", but the category is much larger:

  • Storage buckets readable or writable by anyone on the internet (a check for this case is sketched in code after the list)
  • IAM roles with wildcard permissions attached to compute that did not need them
  • Security groups allowing 0.0.0.0/0 on port 22, 3389, or a database port
  • Instance metadata services reachable from workloads that are themselves exposed to the internet
  • Logging disabled on accounts where it should be on, or logs delivered to a destination nobody reads
  • Encryption keys with overly permissive policies, or KMS rotation switched off
  • Serverless functions with environment variables containing secrets in plaintext
  • DNS records pointing to deprovisioned cloud resources, ready to be hijacked

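To make the first item concrete, here is a minimal Python sketch (boto3, read-only S3 permissions assumed) that checks a single bucket for the classic exposure: no public access block and an ACL grant to everyone. The bucket name is a placeholder, and a real check would also inspect the bucket policy.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def bucket_looks_public(bucket: str) -> bool:
        """Rough check: no effective public access block plus an ACL grant to AllUsers."""
        try:
            cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
            blocked = all(cfg.values())
        except ClientError as err:
            # No configuration at all means nothing stops public ACLs or policies.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                blocked = False
            else:
                raise
        acl = s3.get_bucket_acl(Bucket=bucket)
        public_grant = any(
            grant.get("Grantee", {}).get("URI", "").endswith("/global/AllUsers")
            for grant in acl["Grants"]
        )
        return public_grant and not blocked

    print(bucket_looks_public("example-data-bucket"))  # placeholder bucket name
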
These are not vulnerabilities in the AWS, Azure, or GCP code. They are configuration choices made by customers, and the cloud provider almost always offers a safer default that someone overrode.

Why cloud is different from on-prem

Misconfigurations existed long before cloud. The difference is what they cost when they happen.

On-prem, exposing a database to the public internet required someone to walk to a switch, plug in a cable, configure a firewall rule, and probably get sign-off from a network team. Friction was a security control.

In cloud, the same exposure is a checkbox in a console or a single Terraform parameter. A developer with legitimate credentials can make a database publicly reachable in under a minute, often without any review. The infrastructure is programmable, which is the whole point, and that programmability cuts both ways.

Three structural facts make cloud misconfigurations especially dangerous:

The shared responsibility model. The cloud provider secures the cloud. You secure what you put in the cloud. This is well documented and still routinely misunderstood. Customers assume more is secured by default than actually is.

Accidentally-public is one click away. Default permissions are usually safe, but the path from safe to dangerous is a single setting change, often made without review. One commit to an IaC repo can flip a thousand resources from private to public.

Asset inventory is hard. Most enterprises run dozens to hundreds of cloud accounts across multiple providers. Nobody has a complete picture, and the picture changes daily.

Why it matters

A short tour of public incidents tells the story.

Capital One, 2019. A misconfigured WAF combined with overly permissive IAM allowed an attacker to pull data on roughly 100 million customers from S3. The bank paid an $80 million regulatory fine and later settled a related class action for $190 million.

Pegasus contractor, 2024. A leak from a contractor working with NSO-adjacent tooling exposed sensitive operational data via an unprotected cloud storage account.

FactSet, 2024. A misconfigured cloud storage container exposed customer-related data to the public internet for an extended period before discovery.

ICE biometric data, 2024. Sensitive biometric records linked to immigration enforcement appeared on a publicly accessible cloud bucket, prompting investigations into how the access controls had been set.

The pattern is the same every time. The provider did its job. Someone ticked the wrong box, applied the wrong policy, or never reviewed the defaults.

How attackers exploit it

Attackers do not need to be sophisticated to find cloud misconfigurations at scale. Public scanners and search engines do most of the work.

The typical playbook:

  1. Hunt for exposed storage. Tools and search engines index publicly readable S3, Azure Blob, and GCS containers. Targeted searches by bucket name, owning account, or content keyword turn up huge volumes of data.
  2. Probe metadata services. A web app vulnerable to SSRF can be coaxed into fetching http://169.254.169.254/latest/meta-data/iam/security-credentials/ and the role path beneath it. On AWS instances still allowing IMDSv1, that plain GET returns temporary credentials for the instance's IAM role. IMDSv2 requires a session token obtained via a PUT request with a custom header, which blunts the attack, but only if the workload is configured to require v2 (both flows are sketched after this list).
  3. Pivot through IAM. A leaked or harvested credential gets fed into tools that enumerate every action it is allowed to perform. Wildcard policies and assumed-role chains often reveal a path from a low-value workload to admin.
  4. Take over dangling DNS. A subdomain still pointing at a CloudFront distribution or storage bucket that was deleted can be claimed by the attacker. They register a new resource of the same name and now control content under your domain.
  5. Exploit CloudTrail blind spots. When logging is off, scoped to the wrong region, or shipped to a forgotten S3 bucket, attackers operate without leaving the trail defenders rely on.

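To make step 2 concrete, the sketch below shows the two metadata flows side by side as plain HTTP requests from inside an instance, using the requests library. The role name is a placeholder; this illustrates the mechanics rather than an exploit.

    import requests

    IMDS = "http://169.254.169.254"

    # IMDSv1: one unauthenticated GET. Anything that can make this request on
    # the instance's behalf (including an SSRF) can read the role credentials.
    creds_v1 = requests.get(
        f"{IMDS}/latest/meta-data/iam/security-credentials/example-role",  # placeholder role
        timeout=2,
    )

    # IMDSv2: first mint a short-lived session token with a PUT and a custom
    # header, then present it on the GET. Most SSRF primitives can do neither,
    # which is why requiring v2 blunts this path.
    token = requests.put(
        f"{IMDS}/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        timeout=2,
    ).text
    creds_v2 = requests.get(
        f"{IMDS}/latest/meta-data/iam/security-credentials/example-role",
        headers={"X-aws-ec2-metadata-token": token},
        timeout=2,
    )
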
The Capital One attack followed exactly this pattern: SSRF against a web app, credentials from the metadata service, an over-permissioned IAM role, then bulk reads from S3.

Misconfigurations versus vulnerabilities

It is worth being precise about the difference, because the controls that fix each are different.

A vulnerability is a flaw in software code that an attacker exploits. Patching the software fixes the vulnerability.

A misconfiguration is a setting on a working piece of software that creates exposure. Changing the setting fixes the misconfiguration. There is nothing to patch.

Vulnerability scanners look for software versions and known CVEs. Cloud security posture management (CSPM) tools look for unsafe settings. They overlap at the edges (some tools do both), but they are different categories of control.

Most modern programmes need both. CVE patching does nothing about a public bucket. CSPM does nothing about a vulnerable web framework version.

How to detect it

Detection of cloud misconfigurations rests on a small set of techniques:

  • Cloud Security Posture Management (CSPM). A category of tools that connects to cloud accounts via read-only credentials and continuously evaluates resources against a library of rules (CIS Benchmarks, provider best practices, custom policies). The output is a list of findings ranked by severity; a toy example of one such rule is sketched after this list.
  • Infrastructure as Code (IaC) scanning. Tools like Checkov, tfsec, and KICS read Terraform, CloudFormation, and Kubernetes manifests in your repo and flag misconfigurations before they ever reach production. The earlier you catch a misconfig, the cheaper it is to fix.
  • External attack surface scanning. From the outside, looking for what an attacker would see. Public buckets, exposed management ports, dangling DNS records, and accidentally-public services all show up here even when CSPM does not have access to the account.
  • CloudTrail and equivalent log analysis. Detect changes to security-sensitive settings as they happen. Alert when a new IAM policy with wildcards is created, when a bucket goes public, or when MFA is disabled on a privileged account.
  • Configuration drift detection. Compare the current state of cloud resources against the state defined in IaC. Drift often indicates either an unauthorised change or a fix that bypassed the normal pipeline.

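As a toy example of the kind of rule a CSPM evaluates continuously, this Python sketch (boto3, read-only EC2 access assumed) flags security groups in one region that expose SSH or RDP to the whole internet. A real product runs hundreds of such rules across every account and region.

    import boto3

    MGMT_PORTS = {22, 3389}
    ec2 = boto3.client("ec2")

    # Walk every security group in the region and flag ingress rules that
    # open a management port to 0.0.0.0/0.
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg["IpPermissions"]:
                # A rule with no FromPort/ToPort (protocol -1) covers all ports.
                ports = range(rule.get("FromPort", 0), rule.get("ToPort", 65535) + 1)
                open_to_world = any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
                )
                if open_to_world and any(p in ports for p in MGMT_PORTS):
                    print(f"FINDING: {sg['GroupId']} allows 0.0.0.0/0 on a management port")
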
The output of these tools is only useful if someone owns it. A CSPM with five thousand findings and no remediation pipeline produces dashboards, not security.

How to remediate

Remediation depends on the type of finding, but the broad pattern looks like this:

  1. Confirm the exposure. A finding from a tool is a signal, not a verdict. Verify the resource is what the tool says it is, and the configuration is what was reported.
  2. Assess blast radius. What data, access, or capability does this misconfiguration grant? A public bucket containing marketing PDFs is different from a public bucket containing customer records.
  3. Fix it at the source. If the resource was provisioned via IaC, fix the IaC and let the pipeline reapply. Manual fixes in the console drift back almost immediately.
  4. Audit recent activity. If the resource was exposed for any meaningful time, review access logs to see whether anyone took advantage. Look for unusual source IPs, unexpected data egress, or actions outside normal patterns.
  5. Apply compensating controls. Where the fix takes time, restrict access via security groups, network ACLs, or service control policies in the meantime.
  6. Update the policy. A finding caused by a one-off mistake is one thing. A finding that is the third instance of the same mistake means the guardrails are not tight enough.

For the high-frequency findings, automation pays for itself quickly. Auto-remediation rules that close obvious public buckets, revoke 0.0.0.0/0 management access, or rotate exposed credentials prevent the same incident from happening again.
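
A minimal sketch of one such rule, assuming credentials allowed to change the bucket. In practice it would be triggered by a CloudTrail or EventBridge event carrying the bucket name rather than called by hand, and the name below is a placeholder.

    import boto3

    s3 = boto3.client("s3")

    def close_public_bucket(bucket: str) -> None:
        """Re-apply the full public access block to a bucket a finding reported
        as public. Disruptive for genuinely public content, so the confirmation
        step above still matters."""
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )

    close_public_bucket("example-data-bucket")  # placeholder bucket name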

Best practices

  • Default to private, then open deliberately. Every cloud resource should start with the most restrictive permissions. Opening access requires a justification and ideally a review.
  • Enforce IMDSv2 everywhere. The session-token requirement breaks the simplest SSRF-to-credentials chain, and there is rarely a good reason to allow IMDSv1 in 2026. A sketch of enforcing it follows this list.
  • Run IaC scanning in CI. Catch misconfigurations before they hit production. Every Terraform plan should pass a policy gate.
  • Use service control policies (SCPs) and Azure Policy guardrails. Some configurations should never be possible regardless of who tries. SCPs let you enforce that organisationally.
  • Monitor CloudTrail centrally. Logs that nobody reads are not logs. Centralise, alert, and review.
  • Watch DNS for dangling records. Continuous resolution of your DNS zones against current cloud resource state catches takeover risks early.
  • Limit blast radius of credentials. Short-lived, narrowly scoped IAM roles. No long-lived keys checked into anything. Access keys for humans should be rare to nonexistent in 2026.
  • Treat external scans as ground truth. What an attacker sees from outside is what matters most. Your internal asset list will always have gaps.
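
A sketch of the IMDSv2 point, assuming boto3 and permission to modify instance metadata options. Roll it out gradually: anything still depending on IMDSv1 loses metadata access the moment the setting flips.

    import boto3

    ec2 = boto3.client("ec2")

    # Require session tokens (IMDSv2) on every instance in the region that
    # still allows IMDSv1.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                options = instance.get("MetadataOptions", {})
                if options.get("HttpTokens") != "required":
                    ec2.modify_instance_metadata_options(
                        InstanceId=instance["InstanceId"],
                        HttpTokens="required",
                    )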

What does not work

A few approaches consistently underperform against cloud misconfigurations:

  • Annual cloud audits. The configuration changes thousands of times between audits. Annual snapshots are useful for compliance, useless for security.
  • Console-only management. Anything important configured by clicking around in a console will drift, because the next person who clicks around has different opinions.
  • CSPM with no remediation pipeline. Finding the misconfigurations is the easy part. Fixing them is where most programmes stall.
  • Trusting that the cloud provider secured it. The shared responsibility model is real. Read the documentation for each service you use and know which side of the line each setting falls on.

Cloud is fundamentally more secure than the on-prem equivalent in many ways. Patching is faster, defaults are safer, telemetry is richer. The catch is that the same programmability that makes those wins possible also makes one bad commit a public incident. Continuous discovery and continuous remediation are how that risk gets managed.

ScruteX continuously discovers cloud assets, identifies exposed buckets and APIs, and tracks misconfigurations across AWS, Azure, and GCP.
