The Open Worldwide Application Security (OWASP) Top 10 Risk & Mitigation lists for web and mobile applications have long been go-to help for developers. Keeping pace with the wild escalation in AI adoption is the OWASP 2025 Top 10 Risk and Mitigations report for Large Language Models (LLMs) and Gen AI apps. The Top 10 List is just one element of OWASP's LLM Application Security Project new security guidance for developers, data scientists and security experts. Its goal is to help users bridge the gap between general application security principles and the unique challenges posed by LLMs.
OWASP 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps
Here are the most current risks, vulnerabilities and mitigations for developing and securing Gen AI and LLMs across the development, deployment and management lifecycle.
- Prompt Injection Vulnerability: This occurs when user prompts alter the LLM’s behavior or output in unintended ways and can affect the model's processes.
- Sensitive Information Disclosure: LLMs, especially when embedded in applications, risk exposing sensitive data, proprietary algorithms or confidential details through their output. This can result in unauthorized data access, privacy violations and intellectual property breaches.
- LLM Supply Chain: This occurs through supply chain and third-party package vulnerabilities that can affect training data, models and Gen AI deployment platforms.
- Data and Model Poisoning: Occurs when pre-training, fine-tuning or embedded data is manipulated to introduce vulnerabilities, backdoors or biases. This manipulation can compromise model security, performance or ethical behavior, leading to harmful outputs or impaired capabilities.
- Improper Output Handling: Vulnerabilities can be created by insufficient validation, sanitization and handling of LLM outputs before they are passed downstream to other components and systems.
- Excessive Agency: These include vulnerabilities that enable damaging actions to be performed in response to unexpected, ambiguous or manipulated outputs from an LLM, regardless of what's causing the LLM to malfunction.
- System Prompt Leakage: This occurs when system prompts or instructions used to steer the model's behavior also contain sensitive information that was not intended to be discovered.
- Vector and Embedding Weaknesses: These vulnerabilities present significant security risks in systems relying on Retrieval Augmented Generation (RAG) with LLMs. Weaknesses in how vectors and embeddings are generated, stored, or retrieved can be exploited by malicious actions to inject harmful content, manipulate model outputs or access sensitive information.
- Misinformation: Misinformation occurs when LLMs produce false or misleading information—such as a hallucination—that appears credible. This vulnerability can lead to security breaches, reputational damage and legal liability.
- Unbounded Consumption: This occurs when an LLM application allows users to conduct excessive and uncontrolled inferences, which can lead to denial of service (DoS), economic losses, model theft and service degradation.
Guidance Beyond the SDLC
OWASP also announced additional resources for stakeholders across the enterprise, including C-level executives, compliance and security leaders:
- The LLM Cybersecurity and Governance Checklist: Governance, risk management and compliance guidelines for LLM deployment for executive, tech, security, compliance and legal leaders
- The Guide for Preparing and Responding to Deepfake Events: Practical defense strategies to ensure organizations are secure as deepfake technology continues to improve
- The Center of Excellence Guide: Business framework and best practices to help organizations establish an AI Security center of excellence—as well as develop and enforce security policies, educate staff on AI use and ensure that generative AI technologies are deployed securely and responsibly
- The AI Security Solution Landscape Guide: A comprehensive reference on open source and commercial solutions for securing LLMs and generative AI applications
- Working groups: Groups focusing on Risk and Exploit Data Mapping, LLM AI Cyber Threat Intelligence, Secure AI Adoption and AI Red Teaming & Evaluation
OWASP Training from CMD+CTRL
The CMD+CTRL team is continually monitoring updates to OWASP guidance and introducing new curriculum and learning tools that align to industry standards and help you stay ahead of emerging threats. We offer more than 250 courses focused on OWASP risk and mitigations for web, mobile, and now LLMs plus over 100 labs and 12 cyber ranges focused on application development. Looking to uplevel your team’s security skills? Check out our tailored learning Journeys and course catalog here.