New Namespace Reuse Vulnerability Enables Remote Code Execution in Microsoft Azure AI, Google Vertex AI, and Hugging Face
Cybersecurity researchers have identified a critical vulnerability in the artificial intelligence supply chain that allows attackers to achieve remote code execution across major cloud platforms, including Microsoft Azure AI Foundry and Google Vertex AI, as well as in thousands of open-source projects. The newly discovered attack method, dubbed “Model Namespace Reuse,” exploits a flaw in how AI platforms trust model identifiers within the Hugging Face ecosystem. Hugging Face names models with a two-part convention, Author/ModelName, and when an author or organisation deletes its account, the namespace becomes available for re-registration rather than being permanently reserved. Malicious actors can therefore register previously used namespaces and upload compromised models under trusted names, potentially affecting any system that references models by name alone.
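The mechanics of the flaw can be illustrated with a toy registry, a deliberately simplified sketch rather than a model of the real Hugging Face Hub; the author name “acme” and model “acme/summarizer” are hypothetical:

```python
# Toy simulation of namespace reuse. Illustrative only; the real
# Hugging Face Hub is far more complex. All names are hypothetical.

class ToyModelHub:
    """A minimal registry keyed by 'Author/ModelName' identifiers."""

    def __init__(self):
        self.authors = {}   # author namespace -> owner identity
        self.models = {}    # "Author/ModelName" -> stored payload

    def register_author(self, name, owner):
        if name in self.authors:
            raise ValueError(f"namespace {name!r} is taken")
        self.authors[name] = owner

    def delete_author(self, name):
        # The flaw: the namespace is freed, not permanently reserved.
        del self.authors[name]
        self.models = {k: v for k, v in self.models.items()
                       if not k.startswith(name + "/")}

    def upload(self, repo_id, payload):
        author = repo_id.split("/")[0]
        if author not in self.authors:
            raise ValueError(f"unknown author {author!r}")
        self.models[repo_id] = payload

    def resolve(self, repo_id):
        # Consumers referencing models by name alone get whatever
        # currently lives under that identifier.
        return self.models[repo_id]


hub = ToyModelHub()
hub.register_author("acme", owner="original-team")
hub.upload("acme/summarizer", payload="legitimate weights")

hub.delete_author("acme")                       # account removed
hub.register_author("acme", owner="attacker")   # namespace re-registered
hub.upload("acme/summarizer", payload="malicious weights")

# A pipeline that recorded only the name now pulls the attacker's model.
print(hub.resolve("acme/summarizer"))           # -> malicious weights
```

The key point of the sketch is the last line: nothing in the identifier itself distinguishes the original author from whoever re-registered the freed namespace.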
The research, conducted by Palo Alto Networks, shows that the vulnerability affects not only direct integrations with Hugging Face but also major cloud AI services that incorporate Hugging Face models into their offerings. The attack applies in two primary scenarios: first, when a model author’s account is deleted, the namespace immediately becomes available for re-registration; second, when a model is transferred to a new organisation and the original author account is subsequently deleted. In both cases, a malicious actor can re-register the freed namespace and substitute the legitimate model with a compromised version carrying a malicious payload. The researchers demonstrated the practical impact through controlled proof-of-concept attacks against Google Vertex AI and Microsoft Azure AI Foundry: they registered abandoned namespaces and uploaded models embedded with reverse-shell payloads that executed automatically upon deployment, granting them access to the underlying infrastructure.
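One common defence against this class of substitution is to pin model references to an immutable commit rather than a mutable name; Hugging Face loaders such as `transformers.AutoModel.from_pretrained` accept a `revision` argument for this purpose. Below is a minimal policy-check sketch, assuming an organisational rule that every reference must carry a full 40-character commit SHA; the helper name and the rule itself are illustrative, not part of any library:

```python
# Defensive sketch: reject model references that are not pinned to an
# immutable commit SHA. The policy and helper are hypothetical examples.
import re
from typing import Optional

_COMMIT_SHA = re.compile(r"[0-9a-f]{40}")

def is_pinned(repo_id: str, revision: Optional[str]) -> bool:
    """Accept only references locked to a full commit hash."""
    return revision is not None and bool(_COMMIT_SHA.fullmatch(revision))

# Mutable references: whoever controls the namespace controls what loads.
assert not is_pinned("acme/summarizer", None)
assert not is_pinned("acme/summarizer", "main")

# Pinned reference: the resolved content cannot change even if the
# namespace is later deleted and re-registered by an attacker.
assert is_pinned("acme/summarizer", "a" * 40)
```

Pinning does not prevent a namespace from being reused, but it ensures that a deployment keeps loading the exact bytes that were originally reviewed.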
Categories: Cybersecurity Vulnerability, AI Supply Chain Attack, Model Namespace Management
Tags: Vulnerability, Artificial Intelligence, Supply Chain, Remote Code Execution, Hugging Face, Namespace Management, Malicious Actors, Cloud Platforms, Model Identification, Security Practices