cybersecurity skills gap

RSAC Rewind: Agentic AI and Governance Gaps

RSAC 2025 highlighted agentic AI risks and governance gaps stressing the need for specialized security training.

This year’s RSA Conference (RSAC) made one thing clear: the security landscape is evolving faster than ever. Among the most pressing concerns was the rise of agentic AI—autonomous systems capable of making independent decisions and executing tasks with little or no human oversight. While these technologies hold massive potential for automation, personalization, and productivity, they also introduce serious new risks.

Cybersecurity professionals left San Francisco with a reinforced understanding that governance gaps and insider threats are the new front lines of application security. And as AI technologies like large language models (LLMs) become more embedded in developer workflows, AppSec teams must stay ahead of this by deploying AI-aware security.

What Agentic AI Means for Security

Unlike traditional AI, which is narrowly scoped and supervised, agentic AI can "decide and do." It reacts, learns, and executes based on environmental inputs, often interacting with APIs, databases, and users autonomously. This flexibility is powerful, but dangerous: agentic systems can be manipulated through prompt injection, poisoned via tainted training data, or exploited through insecure plugin integrations.

Combine these new risks with complex application environments, sprawling APIs, third-party vendors and cloud-based services, and the threat surface expands exponentially.

A Shifting Threatscape

Simultaneously, identity emerged as a new line of defense with several companies rolling out products designed to monitor access and defend against identity-based attacks across the enterprise. With agentic AI increasingly integrated into enterprise systems, securing agent identities is critical. Okta and 1Password introduced  “MFA for robots” tools, emphasizing the need for strong credentialing, monitoring, and access controls to prevent rogue behavior by agent accounts.

Governance Gaps: The Real AI Threat

One of the key takeaways from RSAC 2025 was that technical threats aren’t the only issue; organizational blind spots are equally dangerous. Many institutions rush to integrate LLMs and agentic AI without clear protocols for access control, output validation, or incident response.

In the absence of robust security training and governance frameworks, these tools can easily become vectors for data leakage, misinformation, or manipulation, either through malicious insiders or external threat actors exploiting poorly secured systems.

AppSec Takeaways

  1. Embrace agentic AI responsibly - autonomous AI systems raise concerns about trust
  2. Identity Security - securing non-human identities is critical
  3. Build AI governance and security into the SDLC - adopt "secure-by-design" principles that incorporate AI
  4. Focus on securing AI systems - defending against data leakage and protecting AI models and systems from manipulation and attack is critical

How CMD+CTRL Security is Closing the Skills Gap

To meet the moment, organizations need to train their software teams, enabling developers, architects, DevOps, and security engineers to recognize and mitigate the unique threats that come with LLMs and autonomous AI. That’s where CMD+CTRL comes in.

Building on decades of secure coding instruction, CMD+CTRL offers a growing catalog of AI and LLM-specific courses and labs that tackle today’s most critical vulnerabilities head-on.

Here’s a snapshot of how CMD+CTRL’s platform addresses the 2025 OWASP Top 10 LLM and Gen AI risks:

LLM Threat Type CMD+CTRL Training Coverage
Prompt Injection Training available on prompt injection attacks, input validation and secure coding, featuring new prompt injection & model poisoning labs.
Sensitive Data Disclosure Privacy & misuse in GenAI Apps labs focus on output encoding and data sanitization.
AI Supply Chain Attacks Courses cover third-party risk, dependency analysis, and software supply chain security with a new AI/ML infrastructure security module.
Model Poisoning New labs focused on poisoning detection and mitigation.
Improper Output Handling Courses available on output filtering, overreliance issues, and malware generation with expanded coverage in new GenAI labs.
Excessive Agency & Plugin Abuse Courses emphasize least privilege, secure API design, and role-based access control. 
System Prompt Leakage Courses cover detection bypass and secure plugin development.
Vector & Embedding Exploits Courses cover model theft concepts with new AI/ML infrastructure security labs providing expanded coverage.
Misinformation Risks Courses and labs address ethical AI usage and LLM output validation strategies. 
Resource Abuse (DoS/Consumption) Labs offered on rate limiting and resource control as part of our AI/ML infrastructure security series.

Why Training Must Come First

As the enterprise embraces AI to streamline operations and enhance customer experiences, it must also invest in empowering its Builders, Operators, and Defenders. CMD+CTRL’s hands-on, role-based platform ensures that every stakeholder in the SDLC, from product managers and developers to DevOps engineers and security analysts, has the knowledge to identify, mitigate, and prevent vulnerabilities introduced by AI-enabled tools.

And with CMD+CTRL’s white-glove onboarding, real-world cyber ranges, and content aligned with standards like OWASP, PCI DSS, NIST, and GDPR, teams don’t just learn theory, they build skills that stick.

Conclusion: Future-Proofing with Security-First AI Training

Agentic AI is not just a technological evolution, it’s a governance revolution. Enterprises looking to leverage the power of AI without incurring unacceptable risk must start by training their teams for a new reality: autonomous systems that act, react, and potentially fail in complex, often unpredictable ways.

CMD+CTRL offers the structured, hands-on training needed to address this paradigm shift. With new AI-specific courses, your organization can deploy the latest technologies securely and responsibly. 

Build Securely, Innovate Confidently.

Empower your developers, security, and DevOps teams with the skills to mitigate LLM and AI risks. See our platform in action.

Similar posts