
For the first time, a zero-day vulnerability in the Linux kernel has been discovered using a large language model, OpenAI’s o3. Discovered by security researcher Sean Heelan and assigned CVE-2025-37899, this vulnerability marks a milestone not just in cybersecurity but in the integration of AI into vulnerability research. It also raises serious questions about the evolving role of AI in both defense and offense.
Understanding CVE-2025-37899
CVE-2025-37899 is a use-after-free vulnerability located in the ksmbd component of the Linux kernel, which handles the SMB3 protocol for file sharing. The flaw arises in the handling of the SMB2 LOGOFF command. When a client sends a LOGOFF request, the server’s handler function (smb2_session_logoff()) frees the sess->user object associated with the session:
if (sess->user) {
	ksmbd_free_user(sess->user);
	sess->user = NULL;
}
The vulnerability manifests when multiple connections are bound to the same session. While one thread processes the LOGOFF and frees sess->user, another thread might still be handling requests that access this now-freed memory, leading to a classic use-after-free scenario. This unsynchronized access can result in memory corruption, potentially allowing attackers to execute arbitrary code with kernel privileges.

Compounding the issue is the lack of proper reference counting or locking around the sess->user pointer. For instance, functions like smb2_check_user_session() increment the session’s reference count but do not safeguard sess->user from being freed by another thread. This oversight opens a window in which one thread can dereference a freed pointer, as seen in typical code patterns:
if (user_guest(sess->user))        /* potentially dereferences freed memory */
ksmbd_compare_user(sess->user, …)
sess->user->uid
The discovery of this vulnerability underscores the importance of concurrency control in kernel modules, especially those handling network protocols. It also highlights the potential of AI tools like OpenAI’s o3 model in identifying complex, concurrency-related vulnerabilities that might be challenging to detect through traditional means.
Why o3 Made This Possible
The o3 model, released in mid-April 2025, represents OpenAI’s next step toward models that reason more deeply before responding. This is significant in security research, where catching a subtle bug often means reasoning through multiple layers of logic, timing, and code interaction.
To discover CVE-2025-37899, security researcher Sean Heelan used OpenAI’s o3 large language model as part of a benchmarking experiment designed to evaluate its capability to reason about complex, concurrent code. Heelan provided o3 with the full implementation of all SMB command handlers within the Linux kernel’s ksmbd module, roughly 12,000 lines of code, along with connection setup, teardown, and dispatch logic. Prompted specifically to look for use-after-free vulnerabilities, o3 identified a subtle concurrency issue in the smb2_session_logoff handler. It reasoned that when multiple connections bind to the same session, one thread could free the sess->user object during logoff while another thread continued accessing it, resulting in a classic use-after-free scenario.
What gave o3 an edge here wasn’t just code understanding; it was the ability to trace logic across threads, follow complex execution paths, and hypothesize unsafe interleavings that most linters or pattern-based scanners would miss.
The Double-Edged Sword of AI in Security
The discovery of CVE-2025-37899 reflects the power of AI for defense. It allows researchers to rapidly analyze complex codebases, uncover edge-case vulnerabilities, and enhance overall security posture.
However, it also underscores a critical concern – if defenders can find vulnerabilities with AI, so can attackers.
As AI models become more accessible and capable, the security community must prepare for a future where LLMs accelerate both vulnerability discovery and exploitation. The challenge will be staying ahead, using these tools to secure systems before threat actors can weaponize them.
How Upwind Identifies Zero Days & Secures AI
At Upwind, we’re building security tools designed for this new era of AI-accelerated vulnerability discovery. Our vulnerability dashboard allows teams to instantly identify and prioritize newly published CVEs like CVE-2025-37899 across their environments.

In the case of a zero day, Upwind’s SBOM Explorer makes it easy for teams to search their environment for vulnerable packages – accelerating time to remediation and creating a streamlined process for finding and fixing zero-day vulnerabilities.

Combined with our real-time threat detection engine, Upwind doesn’t just flag the presence of a vulnerability; it actively monitors for suspicious behavior in live cloud workloads. As attackers move faster, Upwind ensures defenders stay one step ahead.

In addition to accelerating zero-day remediation efforts, Upwind also proactively secures AI workloads and communication to GenAI services – providing a deep layer of protection as modern workloads increasingly adopt AI.

Learn More
To learn more about how Upwind secures AI workloads, monitors communication to AI services and proactively identifies suspicious traffic in user environments through baselining capabilities, schedule a demo today.