In May 2025, cryptocurrency exchange Coinbase faced a Trojan Horse breach and ransomware attempt, yet earned accolades for detecting and addressing the intrusion within hours.

It started when an authorized user agreed to run a patch script that contained malware designed to exfiltrate sensitive data to attackers. Though it was exactly the kind of event security teams dread, one where authorized users with legitimate credentials might have gone undetected in the system, the breach is an unusual success story. Why?

In this case, teams adeptly connected data exfiltration alerts with associated IAM roles and identities, revoked access, and assessed the damage within hours.

That kind of correlation across systems in a globally distributed cloud ecosystem is notoriously challenging. For teams seeking to walk in the footsteps of organizations that expertly make those connections and document what happened, Cloud-Native Application Protection Platforms (CNAPPs) are increasingly key. That stands to reason: CNAPPs bridge signals across identity, workload, and network layers, so teams can see abnormal processes, their associated vulnerabilities, which identities initiated those workload actions, and whether they’re initiating connections to a suspicious external IP.

When the breach is identified and contained, how can teams keep using CNAPP to analyze what happened? Let’s talk about post-breach analysis.

What is Post-Breach Analysis?

Post-breach analysis is a structured investigation that uses forensic data to consolidate strategic learning after a security breach. It happens once the breach is contained and seeks to answer:

  • Why did this happen?
  • Why wasn’t it caught sooner?
  • What security measures failed, including our processes, tools, or oversight?
  • What should change to prevent recurrence?

Post-breach analysis doesn’t follow a universal standard, but it is typically structured around incident review frameworks built atop root cause analysis, postmortem engineering norms, and cloud security guidance such as NIST and ISO standards for security operations. After a thorough investigation of the forensics, most post-breach analyses follow a step-by-step template (a minimal report scaffold in code follows the list) that details:

  1. Executive Summary: What happened
  2. Incident Timeline: With key events tied to exact timestamps
  3. Attack Vector and TTPs: How the initial access occurred, with techniques used, mapped to MITRE ATT&CK
  4. Impact Assessment: Data exfiltrated, systems affected, and their downtimes, etc.
  5. Detection and Response Evaluation: Issues that contributed, like alert fatigue, misconfigured rules, and outdated threat models
  6. Root Cause Analysis: Not just the exploit, but also underlying architectural or process issues
  7. Lessons Learned: What would have made the attack less successful?
  8. Remediation and Recommendations: Mitigation and patching measures that are already deployed, but also future action items and owners.
  9. Policy and Process Gaps: Training gaps, unclear escalation paths, and general coordination failures
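
To make that template concrete, here’s a minimal sketch in Python that renders the nine sections above as an empty Markdown postmortem skeleton. The section names mirror this list; the prompts, function name, and output format are illustrative assumptions rather than any standard.

```python
# Hypothetical post-breach report scaffold; section names mirror the
# template above, and the prompts are placeholders to replace during
# the investigation.
from datetime import datetime, timezone

REPORT_SECTIONS = [
    ("Executive Summary", "One-paragraph account of what happened."),
    ("Incident Timeline", "Key events tied to exact timestamps (UTC)."),
    ("Attack Vector and TTPs", "Initial access and techniques, mapped to MITRE ATT&CK."),
    ("Impact Assessment", "Data exfiltrated, systems affected, downtimes."),
    ("Detection and Response Evaluation", "Alert fatigue, misconfigured rules, outdated threat models."),
    ("Root Cause Analysis", "The exploit plus underlying architectural or process issues."),
    ("Lessons Learned", "What would have made the attack less successful?"),
    ("Remediation and Recommendations", "Deployed fixes, future action items, and owners."),
    ("Policy and Process Gaps", "Training gaps, unclear escalation paths, coordination failures."),
]

def render_report(title: str) -> str:
    """Render an empty, numbered postmortem skeleton as Markdown."""
    lines = [f"# {title}", f"_Generated {datetime.now(timezone.utc).isoformat()}_", ""]
    for i, (section, prompt) in enumerate(REPORT_SECTIONS, start=1):
        lines += [f"## {i}. {section}", f"<!-- {prompt} -->", ""]
    return "\n".join(lines)

print(render_report("Post-Breach Analysis: <incident ID>"))
```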

Regardless of which sections are included, a post-breach analysis is, at its heart, an actionable, structured report. It moves beyond the anecdotal to document the breach and its containment, arming security teams, leaders, and regulators with the insight they need to thwart future incidents that threaten data security.

Runtime Clarity for Post-Breach Investigation with Upwind

Upwind’s runtime-powered container scanning captures the full context of how attacks unfold in real time. That means post-breach analysis isn’t limited to logs or assumptions; you get instant visibility into lateral movement, exploited processes, and root cause analysis that’s 10X faster than traditional methods.

What Makes Post-Breach Analysis in the Cloud So Hard?

Understanding a breach after it happens is hard enough in traditional environments. In the cloud, that challenge is compounded.

Only 12% of organizations fully recover from cloud security breaches, and over 75% of those take more than 100 days to restore normal operations.

That’s a consequence of the structure of the cloud and the resources in it: fast-moving infrastructure, complicated identity layers, and siloed signals make it challenging to piece together what happened, let alone why.

Ephemeral Infrastructure Leaves No Trace

Containers, functions, and short-lived instances are part of the cloud environment. But they can disappear before anyone knows they were breached. If teams didn’t capture runtime data in the moment, they’d be left guessing from logs that may not exist at all.

Runtime tools allow container processes to be visible along with remediation, so teams can show they’ve addressed threats, even in multi-cloud and hybrid environments.

Fragmented Data Slows Timelines

Logs are everywhere: CloudTrail, VPC Flow Logs, CWPP workload agents, and SaaS integrations. Teams often have no centralized view of cloud activity, which means stitching together a timeline of events from multiple sources. The process can take weeks.

From IAM to workload and cloud storage, all resources are represented in a CNAPP that centralizes what teams can see and how quickly they can understand their environment.
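
As one illustration of the stitching problem, here’s a hedged sketch that merges events from several exported log files into a single chronological view. The file names and field names are assumptions about pre-normalized JSON exports; real CloudTrail, VPC Flow Log, and CNAPP formats each need their own mapper.

```python
# Minimal timeline stitcher over hypothetical, pre-normalized JSON exports.
import json
from datetime import datetime

def load_events(path: str, source: str, time_key: str, summary_key: str):
    """Read a JSON array export and normalize each record."""
    with open(path) as f:
        records = json.load(f)
    return [
        {
            # Accept ISO-8601 with a trailing "Z" on older Pythons too.
            "time": datetime.fromisoformat(r[time_key].replace("Z", "+00:00")),
            "source": source,
            "summary": r[summary_key],
        }
        for r in records
    ]

# Assumed export files and field names -- adjust to your actual sources.
events = (
    load_events("cloudtrail_export.json", "cloudtrail", "eventTime", "eventName")
    + load_events("flowlogs_export.json", "vpc-flow", "endTime", "action")
    + load_events("cnapp_alerts.json", "cnapp", "detectedAt", "title")
)

# One merged, chronological view instead of three separate consoles.
for e in sorted(events, key=lambda ev: ev["time"]):
    print(f'{e["time"].isoformat()}  [{e["source"]:<10}] {e["summary"]}')
```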

Identity Obfuscation Hides the ‘Who’

Cloud breaches rarely map to named users. Attackers who abuse token-based roles or assume permissions across services can overwhelm traditional detection tools, which struggle with the complicated identity layers that define the cloud perimeter. As a result, legacy tools surface which resources were used in a breach, but not who triggered it.

Visibility into identities means that no matter where they’re used, or across which clouds, teams will know.
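
Below is a sketch of one way to recover the “who”: walking AssumeRole events in an exported CloudTrail file to map a role session back to the principal that created it. The event fields follow CloudTrail’s documented record shape, but the file name and hop limit are assumptions.

```python
# Walk chained sts:AssumeRole events back toward the original caller.
import json

def build_session_index(cloudtrail_path: str) -> dict:
    """Index AssumeRole events: issued session ARN -> the caller's ARN."""
    with open(cloudtrail_path) as f:
        trail = json.load(f)
    index = {}
    for event in trail["Records"]:
        if event.get("eventName") != "AssumeRole":
            continue
        issued = ((event.get("responseElements") or {})
                  .get("assumedRoleUser") or {}).get("arn")
        caller = (event.get("userIdentity") or {}).get("arn")
        if issued and caller:
            index[issued] = caller
    return index

def resolve_principal(session_arn: str, index: dict, max_hops: int = 10) -> str:
    """Follow role assumptions backward until no earlier hop is recorded."""
    arn = session_arn
    for _ in range(max_hops):  # guard against cycles in odd data
        if arn not in index:
            break              # no earlier hop: treat as the origin
        arn = index[arn]
    return arn

index = build_session_index("cloudtrail_export.json")
print(resolve_principal("arn:aws:sts::123456789012:assumed-role/deploy/session-1", index))
```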

Lack of Context Misses Causality

Even when teams can see the command or request that triggered a breach, it can be hard to know which vulnerability was exploited, which policy was bypassed, or what sequence of events led to that moment. 

Which access credentials were modified? Which containers are running abnormal processes? Ideally, CNAPPs assemble information in ways that help protect organizations better, and that context comes from specialized tools that can help teams get to root causes faster.
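
As a toy illustration of the context problem, the sketch below joins an alert to the workload, identity, and known weaknesses behind it. Every name and data structure here is a hypothetical stand-in for relationships a CNAPP correlates automatically.

```python
# Toy causality join: connect an alert to the workload, identity, and known
# weaknesses behind it. All values below are hypothetical examples.
alerts = [
    {"id": "A-101", "container": "payments-7f9", "behavior": "outbound to unknown IP"},
]
containers = {
    "payments-7f9": {"image": "payments:1.4.2", "identity": "role/payments-task"},
}
image_findings = {
    "payments:1.4.2": ["CVE-0000-0000 placeholder (RCE in HTTP parser)"],
}
identity_findings = {
    "role/payments-task": ["s3:* on all buckets (over-permissive)"],
}

for alert in alerts:
    c = containers[alert["container"]]
    print(f'{alert["id"]}: {alert["behavior"]} in {alert["container"]}')
    print(f'  image {c["image"]} -> {image_findings.get(c["image"], [])}')
    print(f'  identity {c["identity"]} -> {identity_findings.get(c["identity"], [])}')
```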

Tool Sprawl Blurs Ownership

Finally, the cloud has led cloud, security, and DevOps teams to adopt their own tools and dashboards, each bringing a limited point of view to the table. Everyone sees part of the breach, but no one sees, or owns, the whole picture.

Infrastructure, identity, and application behavior are all parts of a breach path. Seeing them together should be part of security operations.

How CNAPP Supports Post-Breach Investigation

CNAPPs aren’t cure-alls, but they can seriously accelerate post-breach analysis when used correctly. Here’s how they address the realities of the cloud ecosystem, what they contribute, and where their limits lie:

| CNAPP Strength | How It Helps | What It Doesn’t Do |
| --- | --- | --- |
| Runtime Forensics | Captures ephemeral activity in real time | Doesn’t recover events missed during downtime |
| Identity Context | Maps actions to roles | Can’t fix bad IAM hygiene on its own |
| Unified View | Centralizes cloud, workload, and IAM data | Depends on full integrations |
| Attack Path Mapping | Connects the exploit to impact across layers | Needs good asset tagging to work best |
| Misconfiguration and Drift Detection | Flags weak spots present at the time of breach | Can’t bridge abnormal activity signals to business narratives about the breach |
| Compliance Mapping | Aligns findings and fixes with frameworks | Doesn’t explain the incident in business terms |

CNAPPs are accelerators, but they don’t assemble all the puzzle pieces. Teams are still the core of post-breach analysis: they explain the larger picture, decide what it means, and determine what to do next. In doing so, they can use CNAPPs for broader visibility, runtime forensics, and help in assigning accountability.

Operationalizing CNAPP Post-Breach

A key point is that CNAPP doesn’t just help in the detection phase: it’s a way to reshape the entire investigation that follows a security incident. Teams move beyond looking at alerts to treating their CNAPP as a source of evidence that feeds root cause analysis, accountability, and eventually the long-term fixes that create a strong security posture and prevent future cyberattacks and data breaches.

Start during the containment phase, speeding up the time it takes to get clarity on what’s happening. While the team generally uses an incident response plan (IRP) to guide containment and eradication, CNAPPs can speed the process.

Use your CNAPP’s real-time runtime feed to:

  • Isolate impacted workloads and affected systems, identifying the container or function behavior that’s involved
  • Trace IAM roles or service accounts tied to lateral movement
  • Identify whether runtime enforcement blocked or merely logged behavior

Ask: What did the attacker do after gaining access, and which assets were involved? 
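
One way to operationalize that triage step is sketched below: scanning a hypothetical runtime-event export for indicators of compromise and emitting isolation candidates. The indicator values and file format are assumptions; real containment should go through your orchestrator or CNAPP enforcement, not a script like this.

```python
# Hedged triage sketch over a hypothetical runtime-event export.
import json

INDICATORS = {
    "suspicious_ips": {"203.0.113.66"},                # documentation-range IP
    "suspicious_processes": {"xmrig", "nc", "socat"},  # illustrative only
}

def isolation_candidates(runtime_export: str) -> list:
    """Return workloads whose runtime events match a known indicator."""
    with open(runtime_export) as f:
        events = json.load(f)
    flagged = set()
    for e in events:
        if e.get("dest_ip") in INDICATORS["suspicious_ips"]:
            flagged.add(e["workload"])
        if e.get("process") in INDICATORS["suspicious_processes"]:
            flagged.add(e["workload"])
    return sorted(flagged)

for w in isolation_candidates("runtime_events.json"):
    print(f"QUARANTINE CANDIDATE: {w}")
```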

What can you do to leverage CNAPP capabilities post-breach? There are key steps to take at each stage of the post-incident phase.

During Root-Cause Analysis

Your CNAPP’s context around misconfigurations, drift, and identity misuse can show whether cloud posture or IAM boundaries failed, flag violations that were previously considered normal, and tie runtime behavior back to infrastructure.

That gives teams the chance to reassess the security policies that allowed the behavior to succeed and update accordingly.
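
Here’s a hedged sketch of the drift question, assuming you can flatten the permissions observed at breach time and a previously reviewed baseline into simple action sets; real IAM policy documents need full statement parsing.

```python
# Flattened permission sets: observed at breach time vs. reviewed baseline.
# All action strings are hypothetical examples.
baseline = {"s3:GetObject", "s3:ListBucket", "logs:PutLogEvents"}
at_breach = {"s3:GetObject", "s3:ListBucket", "s3:PutObject",
             "iam:PassRole", "logs:PutLogEvents"}

print("Added since last review:", sorted(at_breach - baseline))
print("Missing vs. baseline:   ", sorted(baseline - at_breach))
```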

During Documentation

CNAPP can build a breach timeline with cloud-native signals, export workload-level events for regulators or postmortems, and map alerts to compliance.

That’s the data teams need to explain the breach clearly to auditors and stakeholders.
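
For documentation, even a plain CSV export of the correlated timeline helps. The sketch below assumes an already-built timeline; the column names and control references are illustrative examples, not a regulatory format.

```python
# Write a reviewed breach timeline to CSV for auditors; values are examples.
import csv

timeline = [
    {"time": "2025-05-14T09:02:11Z", "source": "cnapp",
     "event": "Anomalous exfiltration alert", "control": "ISO 27001 A.8.16 (monitoring)"},
    {"time": "2025-05-14T09:04:40Z", "source": "iam",
     "event": "Compromised access keys revoked", "control": "ISO 27001 A.5.17 (authentication info)"},
]

with open("breach_timeline.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["time", "source", "event", "control"])
    writer.writeheader()
    writer.writerows(timeline)
```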

After Recovery

CNAPP insights should feed updated IaC templates and policy-as-code enforcement. They can help revise role assumptions and enact stricter IAM controls. And finally, CNAPP data can lead to new CSPM rules that prevent the same issue from recurring. 

Armed with the correlated assets, behaviors, identities, and timeline, teams can shut down the attack path so it can’t be used by future threat actors.
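
Those new CSPM rules can be as simple as a policy check distilled from a breach learning. Here’s a minimal sketch, assuming a simplified security-group shape rather than a real cloud API response.

```python
# A CSPM-style guardrail distilled from a breach learning: flag security
# groups that leave SSH open to the internet. Resource shape is simplified.
def open_ssh_to_world(security_group: dict) -> bool:
    for rule in security_group.get("ingress", []):
        if rule["port"] == 22 and "0.0.0.0/0" in rule["cidrs"]:
            return True
    return False

groups = [
    {"id": "sg-prod-web", "ingress": [{"port": 443, "cidrs": ["0.0.0.0/0"]}]},
    {"id": "sg-debug",    "ingress": [{"port": 22,  "cidrs": ["0.0.0.0/0"]}]},
]

for g in groups:
    if open_ssh_to_world(g):
        print(f"VIOLATION: {g['id']} exposes SSH to the internet")
```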

Using CNAPP to Build a Post-Breach Report

Here’s how to use your CNAPP as a post-breach postmortem engine:

Scope the Incident

  1. Identify affected resources through runtime and container activity
  2. Map attacker actions to specific IAM roles
  3. Capture lateral movement paths across cloud layers

Reconstruct the Timeline

  1. Export workload and cloud logs from the CNAPP
  2. Correlate key events across infrastructure, identity, and data layers
  3. Match CNAPP alerts to MITRE ATT&CK techniques, if available
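
A minimal sketch of that mapping step follows. The technique IDs below are real MITRE ATT&CK entries, but the alert names and mapping choices are assumptions to adapt to your CNAPP’s alert taxonomy.

```python
# Illustrative alert-to-ATT&CK lookup; alert names are hypothetical.
ATTACK_MAP = {
    "credential_use_from_new_geo": "T1078 (Valid Accounts)",
    "container_reverse_shell":     "T1059 (Command and Scripting Interpreter)",
    "mass_s3_download":            "T1530 (Data from Cloud Storage)",
    "outbound_to_c2_ip":           "T1041 (Exfiltration Over C2 Channel)",
}

alerts = ["credential_use_from_new_geo", "mass_s3_download", "unknown_alert"]
for a in alerts:
    print(f"{a} -> {ATTACK_MAP.get(a, 'unmapped: review manually')}")
```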

Identify Failures

  1. Flag drifted configurations or policies violated at the time of the breach
  2. Note excessive permissions or IAM misuse
  3. Confirm if enforcement actions were attempted or missed

Document and Report

  1. Generate compliance-mapped summaries
  2. Attach CNAPP snapshots or exports to the postmortem
  3. Highlight recommendations that feed into IaC or CSPM guardrails

Rethinking Breach Analysis as Architecture Review

Breach analysis checklists are helpful, but too often, they serve as rote reports, taking the place of actual analysis. At their best, post-breach analyses are inputs into future enforcement logic and trust boundaries that are central to a thoughtful, customized security architecture.

What does that mean? If your breach report ends in a new patch, alert rule, or policy tweak, you’re missing the point. Breach root causes in the cloud are about structure and require some mental flexibility as the team reassesses their assumptions and rearchitects what didn’t hold under pressure. Deeper issues behind breaches might include:

  • Overly implicit trust boundaries, like assumed IAM roles
  • Broken detection assumptions, like a belief that XDR would catch container-based persistence
  • Flawed enforcement models, with confusion over who handles specific issues like drifted security groups

In these cases, breaches just serve to expose false assumptions. Maybe the most valuable function of a breach is to reveal what teams thought would hold, but didn’t.

Think of post-breach analyses as moments to surface new learnings. If reports led with “We believed runtime agents were deployed consistently. They were not,” teams may come to approach security learnings with curiosity, not shame or blame.

Upwind Illuminates What Went Wrong (And Why)

Upwind gives security teams the data they need after a breach is contained, when the real questions begin. With runtime-powered visibility, identity-aware telemetry, and policy drift detection across workloads and infrastructure, Upwind helps you explain not only that a breach happened, but how. That means teams can quickly trace the blast radius and fix the system-level causes that enabled it.

Post-breach analysis isn’t only about looking back. Upwind helps teams use it as a catalyst to move forward. To see how, schedule a demo.

FAQ

What is post-breach detection?

Post-breach detection refers to the process of identifying signs of a successful cyberattack that has already bypassed initial defenses. It focuses on spotting malicious activity in progress, often deep inside the environment. It includes:

  • Lateral movement detection
  • Behavioral anomalies
  • Data access anomalies
  • Persistence mechanisms

Post-breach detection from CNAPP or XDR tools gives organizations a second chance at detection when initial perimeter access detection fails.

What is the purpose of a post-breach analysis?

Post-breach analysis shouldn’t be an exercise in paperwork: it should provide insight into the assumptions and policies that a security team engineered to hold, but that failed, in order to offer an actionable path forward. The goal is to relay what happened and provide analysis for future improvements to make sure it won’t happen again. The post-breach analysis aims to document:

  • The breach timeline
  • The root cause
  • The detection and response gaps
  • The impact
  • The changes that need to happen to prevent future attacks

Is a post-incident review the same as a post-breach analysis?

These two terms are closely related, but they’re not the same.

A post-incident review is a broad term that can refer to any kind of incident, security-related or not. It may cover a misconfiguration, outage, near miss, or an actual breach.

A post-breach analysis focuses explicitly on breaches that have happened. It’s forensic, and usually includes compliance, threat intelligence, attacker behavior, and quantification of loss elements.

How does CNAPP compare to EDR or SIEM for forensic analysis?

Endpoint Detection & Response (EDR) and Security Information and Event Management (SIEM) cover different parts of the ecosystem and are used in different ways for forensic analysis. Real forensics comes from combining all three: CNAPP for cloud, EDR for endpoints, and SIEM for correlation.

CNAPP offers visibility into ephemeral cloud behavior like containers and IAM roles.

EDR gives deep insight into what happened on compromised machines.

SIEM correlates across systems and establishes timelines. It may include telemetry from systems that CNAPP doesn’t reach, like phishing attempts in the email activity on a specific endpoint surfaced by EDR.