A Wake-Up Call for AI Security: Uncovering Remote Code Execution Vulnerabilities
In a world where artificial intelligence (AI) and machine learning (ML) are rapidly advancing, we've uncovered a critical security issue that could potentially impact the entire ecosystem. Remote code execution (RCE) vulnerabilities have been identified in three popular open-source AI/ML Python libraries, and it's time to shine a light on this hidden threat.
The Vulnerable Libraries:
- NeMo: Developed by NVIDIA, this PyTorch-based framework supports building AI/ML models across a range of domains and has been downloaded millions of times.
- Uni2TS: Created by Salesforce's AI research team, Uni2TS is a PyTorch library used in their Moirai foundation model for time series forecasting.
- FlexTok: A research-focused Python framework from Apple and the Swiss Federal Institute of Technology, enabling image processing for AI/ML models.
These libraries, though created for research purposes, are integral to popular models on HuggingFace, a leading platform for AI/ML development.
The Vulnerability Unveiled:
The issue lies in how these libraries handle metadata. Vulnerable versions interpret fields in a model file's metadata as executable code rather than as plain data. An attacker can therefore publish a model file with crafted metadata containing arbitrary Python, and that code runs automatically as soon as a vulnerable library loads the file — no further interaction from the victim is required.
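To make the pattern concrete, here is a minimal, hypothetical sketch of the bug class (not the actual code from NeMo, Uni2TS or FlexTok): a loader that passes a metadata string to `eval()`, and a safer variant that restricts the field to Python literals with `ast.literal_eval`. The field name `init_args` is an illustrative assumption.

```python
import ast

def unsafe_load(metadata: dict):
    # DANGEROUS: eval() executes whatever Python expression the model
    # author placed in the metadata field.
    return eval(metadata["init_args"])

def safer_load(metadata: dict):
    # ast.literal_eval only accepts literals (numbers, strings, lists,
    # dicts, ...). Embedded code raises ValueError instead of executing.
    return ast.literal_eval(metadata["init_args"])

benign = {"init_args": "{'hidden_size': 256}"}
print(safer_load(benign))  # {'hidden_size': 256}

malicious = {"init_args": "__import__('os').system('echo pwned')"}
# unsafe_load(malicious) would run the shell command on load;
# safer_load(malicious) rejects it with ValueError.
```

The fixes shipped by the vendors follow the same principle: treat metadata strictly as data, never as code.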
As of December 2025, no malicious attacks using these vulnerabilities have been found in the wild. However, the potential for misuse is significant, and Palo Alto Networks took proactive steps to notify affected vendors in April 2025, giving them time to address the issues before public disclosure.
Mitigations and Responses:
- NVIDIA issued CVE-2025-23304, rated High severity, and released a fix in NeMo version 2.3.2.
- The FlexTok researchers updated their code in June 2025 to resolve the issues.
- Salesforce issued CVE-2026-22584, also rated High severity, and deployed a fix on July 31, 2025.
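For teams unsure whether they are running a patched release, a quick version audit can be scripted against the installed packages. The PyPI package name (`nemo_toolkit`) and the patched floor used below are assumptions based on the fix versions listed above; adjust them for your environment.

```python
from importlib import metadata

# Minimum patched versions to enforce (package names are assumptions).
PATCHED = {"nemo_toolkit": (2, 3, 2)}

def parse(version: str) -> tuple:
    # Compare only the leading numeric components of the version string.
    return tuple(int(p) for p in version.split(".")[:3] if p.isdigit())

for pkg, floor in PATCHED.items():
    try:
        installed = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "OK" if parse(installed) >= floor else "VULNERABLE - upgrade"
    print(f"{pkg} {installed}: {status}")
```

The same check extends to any dependency with a known-fixed release: pin the patched version in your requirements and verify it in CI.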
The Role of Prisma AIRS:
Prisma AIRS, a tool developed by Palo Alto Networks, played a crucial role in identifying these vulnerabilities. It can detect models leveraging these vulnerabilities and extract their payloads, providing a powerful defense mechanism.
Protecting Palo Alto Networks Customers:
Palo Alto Networks customers are better equipped to handle these threats through various products and services:
- Cortex Cloud's Vulnerability Management identifies and manages base images, alerting on vulnerabilities and misconfigurations.
- The Unit 42 AI Security Assessment helps organizations navigate AI adoption risks and strengthen governance.
A Call to Action:
While these vulnerabilities have been addressed, the underlying issue of secure model formats and libraries persists. As AI/ML advances, the potential for misuse grows. Developers and researchers must remain vigilant, ensuring that their models and libraries are secure and resistant to exploitation.
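The broader model-format problem is easy to demonstrate. Many model files are (or embed) Python pickles, and unpickling can invoke arbitrary callables via `__reduce__` — which is why loading an untrusted model file is equivalent to running untrusted code. The sketch below uses a harmless `print` where a real attacker would call `os.system`:

```python
import pickle

class Payload:
    def __reduce__(self):
        # On unpickling, pickle calls this callable with these arguments.
        # A harmless print here; an attacker would use os.system instead.
        return (print, ("code ran during deserialization",))

blob = pickle.dumps(Payload())
obj = pickle.loads(blob)  # prints the message while "loading the model"
# obj is None, the return value of print() -- the damage is the side effect.
```

Formats designed to hold only tensors and plain metadata, such as safetensors, avoid this class of problem by construction, which is one reason the ecosystem is moving toward them.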
The security of AI/ML models and libraries is a shared responsibility. As we continue to innovate and push the boundaries of what's possible, we must also prioritize security. It's a delicate balance, but one that's crucial for the future of AI/ML development.