On June 6, 2025, Reuters reported that OpenAI is appealing a U.S. court order requiring the company to preserve all user interactions with ChatGPT and its API, including conversations that users deleted. This legal mandate stems from an ongoing copyright lawsuit filed by The New York Times and has effectively suspended OpenAI’s standard data deletion practices for a wide set of users.

According to OpenAI, the preservation order applies to non-enterprise users, covering interactions from ChatGPT Free, Plus, Pro, Team, and standard API usage. Only customers with Enterprise, Education, or Zero Data Retention (ZDR) agreements are excluded. While OpenAI says access to retained data is restricted to legal and security personnel, the fact remains: organizations now face the possibility that sensitive data sent to GenAI services may be retained indefinitely, regardless of whether those communications are deleted on the user’s side.


For security leaders, this changes the risk model. Any sensitive data passed to GenAI APIs may persist outside organizational boundaries – whether or not that sharing was intended. In this new threat landscape, security programs must evolve from point-in-time defenses to runtime protection and active enforcement.

A New Frontier with New Challenges: Protecting Sensitive Data in the GenAI Era

As enterprises integrate GenAI into business-critical workflows, they open new channels where sensitive data like PII, PHI, intellectual property, and source code can be exposed, retained, or exfiltrated. Unlike traditional systems, GenAI workloads are dynamic, externally connected, and difficult to monitor. Below, we dive into the key challenges that organizations face when securing GenAI workloads.

Key Data Protection Challenges Emerging from GenAI

  1. Unpredictable and Adaptive Behavior: GenAI systems generate outputs based on complex prompts and real-time data. Sensitive information passed during these interactions, such as internal documents, customer records, or proprietary prompts, may be retained – even unintentionally. Without understanding how GenAI services process and store this data, security leaders risk losing control over high-value assets.
  2. Cross-Layer Exposure Risks: Data-in-motion isn’t confined to application-layer risks. Sensitive payloads can be intercepted or mishandled at the network (Layer 3) or transport (Layer 4) level. GenAI workloads often communicate with external APIs, where unencrypted or misrouted traffic could lead to unauthorized access or data leakage.
  3. Persistent Retention Beyond the Organization’s Control: As the preservation order affecting OpenAI demonstrates, sensitive data shared via APIs may be stored indefinitely – even after users delete it on their side. This creates an increasingly complex compliance burden for enterprises under regulations like GDPR, HIPAA, or CCPA, and increases the risk of future unauthorized disclosure.
  4. Undetected Use of Unvetted AI Services: Developers or workloads may unknowingly transmit sensitive data to unsupported or non-compliant GenAI services. This “shadow AI” activity creates invisible risk for security leaders, especially if the data crosses regulatory boundaries or involves third-party LLMs without data residency guarantees (a minimal detection sketch follows this list).
  5. Lack of Continuous Monitoring: Without runtime observability, organizations cannot detect when sensitive data is unexpectedly exposed, making it difficult for security teams to identify behavioral drift or unapproved access patterns that could lead to a data incident.
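
A minimal sketch helps make the “shadow AI” challenge concrete. The Python example below classifies outbound destinations from flow logs against an allowlist of vetted GenAI endpoints; the domain lists, flow-log format, and classify_egress helper are illustrative assumptions, not any vendor’s actual detection logic.

```python
# Sketch: flagging "shadow AI" egress by comparing outbound destinations
# against known GenAI endpoints and an organization-approved subset.
# All domains and the flow format below are illustrative assumptions.

KNOWN_GENAI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
    "us-central1-aiplatform.googleapis.com",
}

APPROVED_GENAI_DOMAINS = {
    "bedrock-runtime.us-east-1.amazonaws.com",  # e.g., only Bedrock is vetted
}

def classify_egress(dest_host: str) -> str:
    """Classify an outbound destination as approved, shadow AI, or other."""
    if dest_host in APPROVED_GENAI_DOMAINS:
        return "approved-genai"
    if dest_host in KNOWN_GENAI_DOMAINS:
        return "shadow-ai"  # a known GenAI service that was never vetted
    return "non-genai"

# Example flow records: (workload, destination host)
flows = [
    ("billing-service", "api.openai.com"),
    ("ml-pipeline", "bedrock-runtime.us-east-1.amazonaws.com"),
]
for workload, dest in flows:
    if classify_egress(dest) == "shadow-ai":
        print(f"ALERT: {workload} is calling unvetted GenAI service {dest}")
```

In a real deployment, this classification would run continuously against live egress telemetry rather than a static list of flows.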

How Upwind Secures Sensitive Data in GenAI Workloads

Upwind provides runtime-native security that protects sensitive data across all stages of GenAI usage. Below, we walk through how this gives organizations increased visibility into their sensitive data transmission and GenAI usage.

1. Full Visibility into GenAI API Calls Involving Sensitive Data

Upwind continuously monitors which workloads interact with GenAI services and what data they transmit, mapping and visualizing outbound AI service communications across cloud environments – including calls to external models such as OpenAI, AWS Bedrock, Azure OpenAI, and GCP Vertex AI. This gives security teams visibility into AI data flows and helps prevent unauthorized access. Whether it’s personally identifiable information, customer support data, or embedded credentials, Upwind maps all GenAI-related data movement in real time.

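To make the idea concrete, here is a hedged sketch of the kind of workload-to-service mapping described above: aggregating observed GenAI API calls into a per-workload view of which services receive which data classes. The event format and field names are assumptions for illustration.

```python
# Sketch: aggregating observed GenAI API calls into a per-workload map of
# which AI services receive which data classes. Event fields (workload,
# service host, detected data classes) are assumptions for illustration.

from collections import defaultdict

# Example runtime events captured at the point of egress.
events = [
    ("support-bot", "api.openai.com", {"PII"}),
    ("support-bot", "api.openai.com", {"credentials"}),
    ("etl-job", "us-central1-aiplatform.googleapis.com", set()),
]

genai_map: dict[str, dict[str, set]] = defaultdict(lambda: defaultdict(set))
for workload, service, data_classes in events:
    genai_map[workload][service] |= data_classes

# Render the workload -> AI service -> data-class view.
for workload, services in genai_map.items():
    for service, data_classes in services.items():
        seen = ", ".join(sorted(data_classes)) or "no sensitive data observed"
        print(f"{workload} -> {service}: {seen}")
```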

This ensures security teams can:

  • Identify which services are sending sensitive data externally
  • Flag unapproved destinations or risky data payloads
  • Enforce organizational policies about where and how GenAI can be used

2. Real-Time Detection of Data Exfiltration and Misuse

Upwind detects when sensitive data is being sent inappropriately – whether to unauthorized APIs or unknown LLMs, or in response to manipulated prompts designed to extract internal content. This includes:

  • Outbound API calls that carry sensitive or classified information
  • Prompt injection attempts that leak confidential model knowledge
  • Suspicious traffic patterns suggesting data scraping or beaconing (a simple beaconing heuristic is sketched below)
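
As one concrete illustration of the last bullet, the sketch below flags beaconing-like behavior: a long run of outbound calls at a nearly constant interval. The thresholds and event shape are illustrative assumptions, not Upwind’s detection model.

```python
# Sketch: flagging beaconing-like egress to an AI endpoint, i.e. a long
# run of outbound calls at a nearly constant interval. The thresholds
# below are illustrative assumptions.

from statistics import pstdev

def looks_like_beaconing(timestamps: list[float],
                         min_calls: int = 10,
                         max_jitter_s: float = 2.0) -> bool:
    """Return True when inter-call intervals are numerous and nearly constant."""
    if len(timestamps) < min_calls:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(intervals) <= max_jitter_s

# Example: a workload calling out every ~60 seconds trips the detector.
calls = [i * 60.0 for i in range(12)]
print(looks_like_beaconing(calls))  # True
```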

By monitoring runtime behavior across Layers 3, 4, and 7, Upwind correlates network-level traffic with sensitive data risk, providing early warning before damage occurs.

3. GenAI-Aware Posture Management to Prevent Sensitive Data Exposure

Upwind’s CSPM capabilities go beyond traditional posture best practices to detect GenAI-specific posture issues that could lead to data loss, including:

  • Publicly accessible GenAI endpoints with no access control
  • Over-permissive IAM policies that allow any workload to invoke AI APIs (see the sketch below)
  • Untracked use of open-source models with unknown data handling policies
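
The over-permissive IAM case lends itself to a simple illustration. The sketch below checks an AWS-style policy document for statements that would let any caller invoke Bedrock models against any resource; real posture engines evaluate principals, conditions, and many more services, and this helper is a simplified assumption.

```python
# Sketch: flagging an AWS-style IAM policy that lets any caller invoke
# any Bedrock model. Real posture checks cover principals, conditions,
# and many more services; this helper is a simplified assumption.

AI_INVOKE_ACTIONS = {"bedrock:InvokeModel", "bedrock:*", "*"}

def is_over_permissive(policy: dict) -> bool:
    """True if an Allow statement grants AI invocation on all resources."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a in AI_INVOKE_ACTIONS for a in actions) and "*" in resources:
            return True
    return False

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "bedrock:InvokeModel", "Resource": "*"}
    ],
}
print(is_over_permissive(policy))  # True
```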

This posture intelligence enables teams to tighten access and shrink the attack surface, ultimately reducing the risk of data exposure.

4. Monitoring Outbound API Payloads for Sensitive Data Exposure

Upwind identifies sensitive data types such as PII, PHI, and PCI, tracking how they move through GenAI communications. To prevent sensitive data from being unintentionally sent to GenAI services, Upwind inspects outbound API payloads whenever AI service communication is detected.

  • Regex-Based Detection: Upwind scans payloads for patterns like PII, API keys, and credentials to flag potential exposures (illustrated in the sketch below).
  • AI-Powered Analysis (coming soon): Upwind will enhance detection by identifying obfuscated or contextual data leakage using machine learning.
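
The regex-based approach can be illustrated in a few lines of Python. The patterns below are deliberately simplified examples (production scanners use validated pattern libraries plus checks such as the Luhn algorithm for card numbers); they are not Upwind’s actual rule set.

```python
# Sketch: regex-based scanning of outbound payloads for sensitive data.
# Patterns are simplified illustrations, not a production rule set.

import re

PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key_id":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_payload(payload: str) -> list[str]:
    """Return the sensitive-data categories found in an outbound payload."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789"
hits = scan_payload(prompt)
if hits:
    print(f"ALERT: outbound payload contains {', '.join(hits)}")
```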

This ensures sensitive data in motion is continuously monitored and protected at the point of egress.

Learn More

The modern enterprise is embracing GenAI, but doing so responsibly requires recognizing that this is not business as usual. GenAI introduces a new frontier of unpredictable workloads, persistent data risk, and evolving attack surfaces.

Upwind equips security leaders with the tools to see, control, and secure GenAI interactions in real time – enabling safe AI adoption without compromising sensitive data or compliance posture.

Ready to protect your GenAI workloads? Request a demo to see how Upwind delivers runtime-native security built for the age of AI.