Critical PyTorch Vulnerability Allows Hackers to Run Remote Code

A newly disclosed critical vulnerability (CVE-2025-32434) in PyTorch, the widely used open-source machine learning framework, allows attackers to execute arbitrary code on systems loading AI models—even when safety measures like weights_only=True are enabled.

The flaw impacts all PyTorch versions ≤2.5.1 and has been patched in version 2.6.0, released earlier this week.

CVE ID           Severity   Affected Versions       Patched Version
CVE-2025-32434   Critical   PyTorch ≤2.5.1 (pip)    2.6.0

Vulnerability Overview

The flaw resides in PyTorch’s torch.load() function, which is commonly used to load serialized AI models.

While enabling weights_only=True was previously believed to prevent unsafe code execution by restricting data loading to model weights, security researcher Ji’an Zhou demonstrated that attackers can bypass this safeguard to execute remote commands.

This undermines a core security assumption in PyTorch’s documentation, which explicitly recommended weights_only=True as a mitigation against malicious models.

Organizations using this setting to protect inference pipelines, federated learning systems, or model hubs are now at risk of remote takeover.

  • Exploitation Scenario: Attackers could upload tampered models to public repositories or supply chains. Loading such models would trigger code execution on victim systems.
  • Affected Workflows: Any application, cloud service, or research tool using torch.load() with unpatched PyTorch versions.
  • Severity: Rated critical due to the ease of exploitation and potential for full system compromise.
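Because any workflow calling torch.load() on an unpatched install is exposed, one stopgap is a runtime gate that refuses to deserialize models on affected versions. A minimal stdlib-only sketch, assuming the patched release is 2.6.0 as stated in the advisory; the helper name is hypothetical and in real code the version string would come from torch.__version__:

```python
def is_vulnerable(version: str, patched: str = "2.6.0") -> bool:
    """Return True if a PyTorch version string predates the patched release.

    Hypothetical helper: compares the numeric major.minor.patch fields and
    ignores any local build suffix such as "+cu121".
    """
    def parts(v: str) -> tuple:
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return parts(version) < parts(patched)


# Example: "2.5.1" predates the fix, "2.6.0" does not.
print(is_vulnerable("2.5.1"))   # True
print(is_vulnerable("2.6.0"))   # False
```

A gate like this only buys time for the upgrade; it does not make loading untrusted models on a vulnerable version safe.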
Mitigation Steps

  1. Immediately upgrade to PyTorch 2.6.0 via pip install --upgrade torch (note that the PyPI package is named torch, not pytorch).
  2. Audit existing models: Validate the integrity of models loaded from untrusted sources.
  3. Monitor advisories: Track updates via PyTorch’s GitHub Security page or the GitHub Advisory (GHSA-53q9-r3pm-6pq6).
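Step 2 above can be sketched as a checksum gate: record the SHA-256 digest of each model file from a trusted source and verify it before the file ever reaches torch.load(). A minimal stdlib-only sketch; the function names and the source of the expected digest are assumptions, not part of PyTorch's API:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, expected_digest: str) -> None:
    """Raise before a tampered file is deserialized (hypothetical gate)."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
```

Checksums catch tampering in transit or in a model repository, but they do not fix the deserialization flaw itself; only the 2.6.0 upgrade removes it.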

The PyTorch team acknowledged the vulnerability, stating, “This issue highlights the evolving nature of ML security. We urge all users to update immediately and report suspicious model behavior.”

PyTorch is foundational to AI research and deployment, with users ranging from startups to tech giants like Meta and Microsoft.

This vulnerability exposes critical infrastructure to attacks that could steal data, disrupt services, or hijack resources.

As AI adoption grows, securing model pipelines is paramount. CVE-2025-32434 serves as a stark reminder that even trusted safeguards require continuous scrutiny.

Update PyTorch installations, audit model sources, and treat all third-party AI artifacts as potential attack vectors until verified.
