The US National Security Agency (NSA) and international partners have released guidance on Agentic Artificial Intelligence Systems. The Press Release states:
Today, the National Security Agency (NSA) joins the
Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC)
and others to release the Cybersecurity Information Sheet (CSI), “Careful Adoption of Agentic AI Services.”
This report is a comprehensive guide to understanding and mitigating the unique
risks associated with the rise of agentic artificial intelligence (AI) within
critical infrastructure, including the defense sector. The CSI highlights
general security considerations for agentic AI, including the inherited risks
of large language models (LLMs), increased attack surfaces, increased
complexity, the evolving security landscape as the technology matures, and the
need to address AI security as part of established cybersecurity
paradigms.
Unlike traditional generative AI, which typically requires human validation,
agentic AI systems are designed to operate autonomously, making them powerful
tools. This autonomy presents both unprecedented opportunities and significant
cybersecurity challenges that organizations must address to protect national
security and critical infrastructure.
“Careful Adoption of Agentic AI Services” outlines risk
spaces to consider, including:
• Privilege Risks: Over-privileged agents can amplify the
impact of a single compromise.
• Design and Configuration Risks: Insecure design and
provisioning can introduce vulnerabilities.
• Behavior Risks: Goal misalignment, specification gaming,
deceptive behavior, and emergent capabilities can lead to unexpected or
undesirable outcomes.
• Structural Risks: The interconnected nature of agentic
systems increases the attack surface and complexity.
• Accountability Risks: The opacity of agentic systems makes
accountability hard to trace, complicating auditing and compliance.
Securing agentic AI systems requires proactive measures that address risks
introduced by autonomy, interconnected components, and evolving capabilities.
The best practices for securing agentic AI systems are divided into the
following subcategories:
• Designing Secure Agents
• Developing Secure Agents
• Managing Third-Party Components
• Deploying Agents Securely
• Operating Agents Securely
The report recommends deploying agentic AI incrementally and continuously
assessing it against evolving threat models. Strong governance, explicit
accountability, rigorous monitoring, and human oversight are essential for safe
and secure operation.
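The risk spaces and recommendations above can be made concrete with a minimal sketch, assuming a hypothetical agent framework: a tool-call gate that default-denies, enforces an explicit allowlist (least privilege), and escalates privileged actions to a human (oversight and accountability). All tool names and the function itself are illustrative, not from the CSI.

```python
# Hypothetical illustration of the CSI's privilege and oversight mitigations.
# Tool names and policy sets are made up for this sketch.

ALLOWED_TOOLS = {"read_file", "search_docs"}        # least privilege: explicit allowlist
PRIVILEGED_TOOLS = {"delete_file", "send_email"}    # high-impact: require a human in the loop

def gate_tool_call(tool: str, approved_by_human: bool = False) -> str:
    """Decide whether an agent's requested tool call may proceed."""
    if tool in ALLOWED_TOOLS:
        return "allow"
    if tool in PRIVILEGED_TOOLS:
        # Human oversight: privileged actions run only with explicit approval,
        # which also gives auditors a clear accountability record.
        return "allow" if approved_by_human else "escalate"
    # Default-deny limits the attack surface from unexpected capabilities
    return "deny"
```

In this sketch, `gate_tool_call("read_file")` returns `"allow"`, while `gate_tool_call("delete_file")` returns `"escalate"` until a human approves it; anything outside both sets is denied outright.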
Organizations that use agentic AI services, including those in the defense
sector, are encouraged to review this guidance and adopt the outlined
cybersecurity mitigations.
Other agencies co-sealing this CSI are the Canadian Centre for Cyber Security
(Cyber Centre), the U.S. Cybersecurity and Infrastructure Security Agency
(CISA), the New Zealand National Cyber Security Centre (NCSC-NZ), and the
United Kingdom National Cyber Security Centre (NCSC-UK).