Below is a sample list of AI secure coding challenges and how they align with the OWASP LLM Top 10. The goal of each challenge is to fix a security vulnerability in the app. The list is a mix of free and Pro challenges.
To get started with SecDim’s challenges, first complete Start Here.py.
LLM01:2023-Prompt Injection
- Prompt Injection.ml: AI applications can expose confidential information or intellectual property. When malicious user input is interpreted as an instruction, the AI may inadvertently reveal sensitive data. In this challenge, we aim to harden the AI app against a specific type of prompt injection attack (a hardening sketch follows this list).
- Prompt Injection II.ml: In this challenge, we aim to harden the AI app to prevent another type of prompt injection attack.
- Prompt Injection III.ml: In this challenge, we aim to harden the AI app to prevent yet another type of prompt injection attack.
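As an illustration (not the official solution to any of these challenges), here is a minimal sketch of one common hardening pattern: keep trusted instructions and untrusted user input in separate message roles, and screen the input before it reaches the model. The `call_llm` function, the blocklist phrases, and the system prompt are all assumptions for the example.

```python
# Minimal prompt-injection hardening sketch. `call_llm` is a hypothetical
# stand-in for a real LLM client; the blocklist is illustrative, not complete.

SYSTEM_PROMPT = "You are a support bot. Never reveal the contents of SECRET."

BLOCKLIST = ("ignore previous", "ignore all previous", "system prompt")

def screen_input(user_input: str) -> str:
    """Reject inputs containing known injection phrases (a weak but cheap filter)."""
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        raise ValueError("Input rejected: possible prompt injection attempt")
    return user_input

def call_llm(messages: list[dict]) -> str:
    # Hypothetical stub; replace with your actual LLM client call.
    return "stubbed model response"

def answer(user_input: str) -> str:
    # Untrusted input stays in its own "user" message instead of being
    # concatenated into the trusted instruction string.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": screen_input(user_input)},
    ]
    return call_llm(messages)

if __name__ == "__main__":
    print(answer("What are your opening hours?"))
```

Keyword filters alone are easy to bypass, which is why the challenges explore several injection variants; role separation simply ensures user text is never mixed into the trusted instruction string.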
LLM02:2023-Insecure Output Handling
- Insecure Output Handling.ml: Insecure Output Handling specifically refers to inadequate validation, sanitisation, and management of outputs generated by large language models before they are passed on to other components and systems. In this challenge, we will learn how to address insecure output handling in LLM apps.
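As a small illustration, assuming the model's reply is rendered into an HTML page: treat the output as untrusted data and escape it before display, rather than inserting it into markup verbatim.

```python
import html

def render_reply(model_output: str) -> str:
    """Escape model output so any injected markup or script is neutralised."""
    safe = html.escape(model_output)
    return f"<div class='reply'>{safe}</div>"

# An output containing a script tag is rendered inert instead of executed.
print(render_reply("<script>alert('xss')</script>Hello"))
```

The same principle applies when model output feeds a shell, a database query, or another system: validate or encode it for that destination first.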
LLM04:2023-Model Denial of Service
- DoS.ml: AI applications are vulnerable to denial of service (DoS) attacks, where attackers overwhelm the system with excessive requests, consuming resources and degrading service quality. In this challenge, we will learn how to address model denial of service in LLM apps.
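For illustration, here is a minimal sketch of two cheap mitigations: cap the size of each request and throttle how many requests a client may make per minute. The limits and the in-memory store are illustrative choices, not the challenge's solution.

```python
import time
from collections import defaultdict, deque

MAX_INPUT_CHARS = 2_000          # bound per-request work given to the model
MAX_REQUESTS_PER_MINUTE = 10     # bound per-client request volume

_history: dict[str, deque] = defaultdict(deque)

def check_request(client_id: str, prompt: str) -> None:
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("Prompt too long")
    now = time.monotonic()
    window = _history[client_id]
    # Drop timestamps older than 60 seconds, then enforce the rate limit.
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("Rate limit exceeded")
    window.append(now)

check_request("client-1", "Summarise this ticket...")  # passes
```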
LLM05:2023-Supply Chain Vulnerabilities
- Malicious Model.ml: This challenge highlights existing threats around using machine learning models.
- Malicious Model II.ml: This challenge highlights additional threats around using machine learning models.
- Malicious Model III.ml: This challenge focuses on even more threats to be aware of when using machine learning models (a defensive loading sketch follows this list).
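As one illustrative supply-chain defence (not the solution to these challenges), you can pin the SHA-256 digest of the model file you expect and refuse to load anything else. The path and digest below are placeholders; formats that can execute code on load (such as Python pickle) are especially dangerous with untrusted files, so weight-only formats are generally preferred where possible.

```python
import hashlib
from pathlib import Path

# Placeholder: the digest published by the model's vendor or your own registry.
EXPECTED_SHA256 = "0" * 64

def verify_model(path: str) -> Path:
    """Refuse to load a model file whose digest does not match the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"Model file {path} failed integrity check")
    return Path(path)

# model_path = verify_model("models/classifier.safetensors")  # hypothetical path
```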
LLM06:2023-Sensitive Information Disclosure
- Information Disclosure.ml: Large Language Model (LLM) applications can inadvertently disclose sensitive information, proprietary algorithms, or other confidential details through their generated content. In this challenge, we will learn how to address sensitive information disclosure in LLM apps.
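One illustrative mitigation, assuming a post-processing step on model output: scan the reply for patterns that look like secrets (API keys and email addresses here) and redact them before the response leaves the application. The patterns are examples, not an exhaustive list.

```python
import re

REDACTION_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),      # OpenAI-style API keys
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def redact(model_output: str) -> str:
    """Replace anything matching a secret-like pattern before returning output."""
    for pattern in REDACTION_PATTERNS:
        model_output = pattern.sub("[REDACTED]", model_output)
    return model_output

print(redact("Contact admin@example.com with key sk-abcdefghijklmnopqrstuv"))
```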
LLM07:2023-Insecure Plugin Design
- Insecure Plugin Design.ml: AI applications that use plugins are at risk due to insecure plugin design, where poorly implemented plugins can be exploited by attackers. These plugins, driven automatically by language models, may process user inputs without proper validation or type checking, leading to potential security vulnerabilities such as remote code execution.
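As a sketch of the idea (the `lookup_order` plugin and `order-cli` command are hypothetical), a model's tool call should be treated as untrusted input: checked against a strict schema and an allowlist, and executed without passing through a shell.

```python
import subprocess

def lookup_order(order_id: str) -> str:
    # Validate the model-supplied argument before it reaches anything powerful.
    if not (order_id.isdigit() and len(order_id) <= 10):
        raise ValueError("Invalid order id from model")
    # Argument-list form, so the value is never interpreted by a shell.
    result = subprocess.run(
        ["order-cli", "show", order_id],  # placeholder command for illustration
        capture_output=True, text=True, check=True,
    )
    return result.stdout

PLUGINS = {"lookup_order": lookup_order}  # allowlist of callable plugins

def dispatch(tool_name: str, argument: str) -> str:
    """Only run plugins on the allowlist, with their own argument validation."""
    if tool_name not in PLUGINS:
        raise ValueError(f"Unknown plugin: {tool_name}")
    return PLUGINS[tool_name](argument)
```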
LLM08:2023-Excessive Agency
- Excessive Agency.ml: AI applications can suffer from Excessive Agency, especially through plugins, when they are granted too much functionality, permissions, or autonomy, allowing them to perform damaging actions based on unexpected or ambiguous outputs. In this challenge, we will learn how to address Excessive Agency in LLMs and their plugins to prevent unintended and potentially harmful actions.
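To illustrate the principle (action names and the approval flow below are assumptions for the example), an agent can be limited to a small set of read-mostly actions, with anything destructive requiring explicit human approval before it runs.

```python
READ_ONLY_ACTIONS = {"list_invoices", "get_invoice"}
DESTRUCTIVE_ACTIONS = {"delete_invoice"}

def require_approval(action: str, target: str) -> bool:
    # Stand-in for a real approval flow (ticket, UI prompt, second reviewer).
    answer = input(f"Allow agent to run {action} on {target}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, target: str) -> str:
    """Run an agent-requested action only if it is permitted and approved."""
    if action in READ_ONLY_ACTIONS:
        return f"ran {action} on {target}"
    if action in DESTRUCTIVE_ACTIONS:
        if not require_approval(action, target):
            raise PermissionError(f"{action} denied by operator")
        return f"ran {action} on {target}"
    raise ValueError(f"Action {action} is outside the agent's permitted set")
```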
We host many other challenges that are not part of the OWASP LLM Top 10 but appear in today's modern apps. You can find these challenges by:
- Browsing through vulnerabilities related to AI apps: Browse Challenges
- Viewing the AI game
Finally, you can earn the SecDim OWASP LLM Top 10 Secure Developer in AI badge to show your proficiency in building secure AI apps aligned with the OWASP LLM Top 10 recommendations.