This year’s RSA Conference (RSAC) made one thing clear: the security landscape is evolving faster than ever. Among the most pressing concerns was the rise of agentic AI—autonomous systems capable of making independent decisions and executing tasks with little or no human oversight. While these technologies hold massive potential for automation, personalization, and productivity, they also introduce serious new risks, especially for highly regulated and high-value targets like financial institutions.
Cybersecurity professionals left San Francisco with a reinforced understanding that governance gaps and insider threats are the new front lines of application security. And as AI technologies like large language models (LLMs) become more deeply embedded in banking, trading, and risk management workflows, the threats they introduce become embedded right alongside them.
Unlike traditional AI, which is narrowly scoped and supervised, agentic AI can "decide and do." It reacts, learns, and executes based on environmental inputs, often interacting with APIs, databases, and users autonomously. This flexibility is powerful, but dangerous: agentic systems can be manipulated through prompt injection, poisoned via tainted training data, or exploited through insecure plugin integrations.
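To make the "excessive agency" risk concrete, here is a minimal sketch (all names hypothetical, not drawn from any particular framework) of one common defense: an agent may only invoke explicitly allowlisted, read-only tools, and every proposed call is validated before execution. If injected text tells the model to "transfer all funds," there is simply no tool for it to call.

```python
# Minimal tool-call guardrail sketch for an agentic workflow (hypothetical
# names). Illustrates least privilege as a prompt-injection defense; it is
# one layer, not a complete mitigation.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ToolSpec:
    func: Callable[..., str]
    allowed_args: frozenset[str]

def lookup_balance(account_id: str) -> str:
    return f"Balance for {account_id}: $1,024.00"  # stub for illustration

# Allowlist exposes read-only capability only; no write/transfer tool exists.
TOOL_REGISTRY: dict[str, ToolSpec] = {
    "lookup_balance": ToolSpec(lookup_balance, frozenset({"account_id"})),
}

def execute_tool_call(name: str, args: dict[str, str]) -> str:
    spec = TOOL_REGISTRY.get(name)
    if spec is None:
        raise PermissionError(f"Tool {name!r} is not allowlisted")
    unexpected = set(args) - spec.allowed_args
    if unexpected:
        raise ValueError(f"Unexpected arguments: {sorted(unexpected)}")
    return spec.func(**args)

# A legitimate call succeeds; a model-proposed call originating from
# injected instructions is rejected before it can do harm.
print(execute_tool_call("lookup_balance", {"account_id": "ACME-42"}))
try:
    execute_tool_call("transfer_funds", {"to": "attacker", "amount": "all"})
except PermissionError as err:
    print(err)
```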
Combine these new risks with the already complex software environment of financial institutions (sprawling APIs, third-party vendors, cloud-based services) and the attack surface expands dramatically.
One of the key takeaways from RSAC 2025 was that technical threats aren’t the only issue; organizational blind spots are equally dangerous. Many institutions rush to integrate LLMs and agentic AI into customer service bots, fraud detection models, or credit risk engines without clear protocols for access control, output validation, or incident response.
In the absence of robust security training and governance frameworks, these tools can easily become vectors for data leakage, misinformation, or manipulation, either through malicious insiders or external threat actors exploiting poorly secured systems.
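What does an output-validation checkpoint actually look like in practice? Below is a minimal sketch (the patterns and function names are hypothetical) that scans a model response for sensitive-data patterns before it reaches a customer. A real deployment would pair this with policy, logging, and incident response; regexes alone are not sufficient.

```python
# Output-validation sketch: redact sensitive patterns from LLM responses
# before display, and surface which rules fired so the event can be
# escalated. Patterns here are illustrative, not production-grade.

import re

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Return the response with matches redacted, plus the rule names hit."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub("[REDACTED]", text)
    return text, hits

response = "Your SSN on file is 123-45-6789, let me know if that's wrong."
clean, triggered = redact_sensitive(response)
print(clean)      # sensitive value replaced with [REDACTED]
print(triggered)  # ['us_ssn'] -> non-empty list should trigger review
```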
To meet the moment, organizations need to train their software teams, enabling developers, architects, DevOps, and security engineers to recognize and mitigate the unique threats that come with LLMs and autonomous AI. That’s where CMD+CTRL comes in.
Building on decades of secure coding instruction, CMD+CTRL now offers a growing catalog of AI and LLM-specific courses that tackle today’s most critical vulnerabilities head-on.
Here’s a snapshot of how CMD+CTRL’s platform addresses the top LLM risks identified across the industry:
| LLM Threat Type | CMD+CTRL Training Coverage |
| --- | --- |
| Prompt Injection | CYB 213 covers prompt injection attacks. Additional courses on input validation and secure coding available. Prompt Injection & Model Poisoning labs launch in Q2 2025. |
| Sensitive Data Disclosure | Covered in CYB 213 and upcoming Privacy & Misuse in GenAI Apps labs (Q2 2025). Focus on output encoding and data sanitization. |
| AI Supply Chain Attacks | Training on third-party risk, dependency analysis, and software supply chain security. AI/ML Infrastructure Security module coming Q2 2025. |
| Model Poisoning | Addressed in CYB 213. Labs focused on poisoning detection and mitigation launch Q2 2025. |
| Improper Output Handling | CYB 213 teaches output filtering, overreliance issues, and malware generation. New GenAI labs expand this coverage. |
| Excessive Agency & Plugin Abuse | Emphasizes least privilege, secure API design, and role-based access control. Labs coming Q2 2025. |
| System Prompt Leakage | Covered under detection bypass and secure plugin development. |
| Vector & Embedding Exploits | CYB 213 introduces model theft concepts. AI/ML Infrastructure Security labs dive deeper (Q2 2025). |
| Misinformation Risks | Ethical AI usage and LLM output validation strategies addressed in CYB 213 and new labs. |
| Resource Abuse (DoS/Consumption) | Labs on rate limiting and resource control launching in the AI/ML Infrastructure Security series. |
As the financial industry embraces AI to streamline operations and enhance customer experiences, it must also invest in empowering its Builders, Operators, and Defenders. CMD+CTRL’s hands-on, role-based platform ensures that every stakeholder in the SDLC, from product managers and developers to DevOps engineers and security analysts, has the knowledge to identify, mitigate, and prevent vulnerabilities introduced by AI-enabled tools.
And with CMD+CTRL’s white-glove onboarding, real-world cyber ranges, and content aligned with standards and frameworks like OWASP, PCI DSS, NIST, and GDPR, teams don’t just learn theory; they Build Skills That Stick.
Agentic AI is not just a technological evolution; it’s a governance revolution. Financial institutions that want to leverage the power of AI without incurring unacceptable risk must start by training their teams for a new reality: autonomous systems that act, react, and potentially fail in complex, often unpredictable ways.
CMD+CTRL offers the structured, hands-on training needed to address this paradigm shift. With AI-specific courses launching throughout 2025, your organization can move forward confidently, securely, and responsibly.