Modernizing Vulnerability Sharing for a New Class of Threats
In cybersecurity, vulnerability information sharing frameworks have long assumed that threats exploit flaws in software or systems and can be resolved with patches or configuration updates. AI and machine learning (ML) models upend that premise: adversarial attacks, like poisoning and evasion, target the unique way AI models process information. The resulting risks, such as poisoned training datasets or evasion of a deployed model at inference time, are not conventional software vulnerabilities, and they fall outside the scope of traditional cybersecurity taxonomies like the Common Vulnerabilities and Exposures (CVE) Program.
There is a need to bridge the gap between the existing cybersecurity vulnerability sharing structure and burgeoning efforts to catalog security risks to AI systems. Provisions in the White House AI Action Plan, which Palo Alto Networks supports, call for the creation of an AI Information Sharing and Analysis Center (AI-ISAC), reinforcing the importance of addressing that disconnect. This integration is essential, as leveraging the existing, widely adopted cybersecurity infrastructure will be the fastest path to ensuring these new standards are accepted and operationalized.
Established Construct for Vulnerability Management and Disclosure
The global cybersecurity community relies on a mature infrastructure for sharing standardized vulnerability intelligence. Central to this ecosystem is the CVE List, established in 1999 as the authoritative catalog of publicly disclosed cybersecurity vulnerabilities. Through CVE IDs and a network of CVE Numbering Authorities (CNAs), this framework enables consistent vulnerability documentation and disclosure.
Similarly, the Common Vulnerability Scoring System (CVSS) provides standardized severity assessments, allowing security teams to prioritize responses. Together with resources like the National Vulnerability Database (NVD) and CISA’s Known Exploited Vulnerabilities (KEV) Catalog, these tools form the backbone of global vulnerability management, information sharing and coordinated disclosure.
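To illustrate how this infrastructure is consumed in practice, the sketch below queries the NVD's public CVE API (version 2.0) for a single record and reads out its CVSS v3.1 base score. The endpoint URL and JSON field names follow the NVD API documentation as best understood here and should be verified against the current docs before any real use; this is a minimal sketch, not a production client.

```python
# Minimal sketch: fetch one CVE record from the NVD CVE API 2.0 and read
# its CVSS v3.1 base score. Endpoint and JSON field names follow the NVD
# documentation as best understood here; verify before relying on them.
# Unauthenticated requests are rate-limited by NVD. Python 3.10+.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cvss_base_score(cve_id: str) -> float | None:
    """Return the CVSS v3.1 base score for a CVE ID, or None if unavailable."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return None
    metrics = vulns[0]["cve"].get("metrics", {})
    v31 = metrics.get("cvssMetricV31", [])
    return v31[0]["cvssData"]["baseScore"] if v31 else None

if __name__ == "__main__":
    # CVE-2021-44228 (Log4Shell) used purely as a well-known example.
    print(cvss_base_score("CVE-2021-44228"))
```

The point of the sketch is the workflow, not the code: a globally unique identifier resolves to a structured record with a standardized severity score, which is precisely the interoperability that AI security risks currently lack.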
Why AI Breaks the Traditional Model
While this infrastructure has served the cybersecurity community effectively for over two decades, it was designed around traditional threat models that AI systems upend. Attacks on AI systems depart sharply from traditional cybersecurity threats: they operate insidiously, subtly corrupting a model's core reasoning and causing persistent, systemic failures, some of which only become evident over time. Most traditional cybersecurity tools are not equipped to recognize these breakdowns because they assume deterministic, rules-based behavior, while AI systems are probabilistic, not deterministic. Consequently, attacks on AI models may remain hidden for extended periods.
Unlike traditional cybersecurity threats that target code, adversarial AI attacks target the underlying data and algorithms that govern how AI systems learn, reason and make decisions. Consider two predominant adversarial attack methodologies against machine learning models:
- Poisoning attacks inject malicious data into training datasets, corrupting the model's learning process to embed deliberate vulnerabilities or degrade its performance.
- Inference-related attacks exploit model outputs to extract sensitive information about the model or its training data. They include model inversion, which reconstructs sensitive data from the model's outputs, and membership inference, which determines whether specific data points were used in training. A minimal sketch of both attack classes follows this list.
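To make these abstractions concrete, the sketch below simulates a label-flipping poisoning attack and a naive confidence-gap membership-inference test on synthetic data. It is an illustrative toy, not drawn from any real incident; the model choice, flip rate and scoring heuristic are all assumptions here, and real-world attacks are considerably more subtle.

```python
# Illustrative toy only: label-flipping poisoning and a naive
# confidence-gap membership-inference test on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poisoning: an attacker flips the labels of 20% of the training set,
# corrupting what the model learns without touching any code.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

clean = RandomForestClassifier(random_state=0).fit(X_train, y_train)
poisoned = RandomForestClassifier(random_state=0).fit(X_train, y_poisoned)
print(f"clean test accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned test accuracy: {poisoned.score(X_test, y_test):.3f}")

# Membership inference: models are typically more confident on examples
# they were trained on, so a confidence gap between training members and
# held-out non-members leaks information about the training set.
def true_label_confidence(model, X, y):
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

print(f"mean confidence, members:     "
      f"{true_label_confidence(clean, X_train, y_train).mean():.3f}")
print(f"mean confidence, non-members: "
      f"{true_label_confidence(clean, X_test, y_test).mean():.3f}")
```

On typical runs, the poisoned model scores measurably worse on held-out data, and training members receive higher average confidence than non-members. Neither signal involves a flaw in any line of code, which is why code-focused vulnerability scanning and cataloging never surface it.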
Covering the enumeration, disclosure and downstream management of security risks to AI systems will require expanding existing security frameworks and programs.
Advancing AI Security Through the AI Action Plan
In July, the Administration unveiled the AI Action Plan, an innovation-first framework balancing AI advancement with security imperatives. The Plan prioritizes Secure-by-Design AI technologies and applications, strengthened critical infrastructure cybersecurity and protection of commercial and government AI innovations.
Notably, it recommends establishing an AI Information Sharing and Analysis Center (AI-ISAC) to facilitate threat intelligence sharing across U.S. critical infrastructure sectors and encourages sharing known AI vulnerabilities, “tak[ing] advantage of existing cyber vulnerability sharing mechanisms.” These provisions affirm that AI security underpins American leadership in the field and, where possible, should be built upon existing frameworks.
Redefining Boundaries for AI Threats
To position the CVE Program for the AI-driven future, Palo Alto Networks is engaging directly with industry and program stakeholders to chart the path forward. The CVE Program has traditionally served as the ecosystem's central warning system, providing a unified source of truth for security risks. AI systems need a comparable catalog and identification system, but they currently fall outside the Program's traditional scope, which has focused exclusively on vulnerabilities rather than on malicious components. That historical aperture excludes harmful artifacts, such as backdoored AI models or poisoned datasets, which represent fundamentally different attack vectors, in turn creating security blind spots.
Securing AI’s Promise
The United States leads in AI innovation and must equally lead in securing it. As momentum builds behind the AI Action Plan and the establishment of the AI-ISAC, we have a critical window in which to shape the information sharing frameworks of the future. The goal is to ensure that cybersecurity and AI security infrastructure advance in unison with the technology itself. Integrating new AI vulnerability standards into trusted frameworks like the CVE Program aligns with industry focus and needs. Through proactive, coordinated action, we can unlock AI’s full promise while safeguarding the models embedded in the critical systems on which our nation depends.