When AI Gets Tricked: Inside the GitLab Duo Prompt Injection Vulnerability

In May 2025, a significant vulnerability was disclosed in GitLab Duo, the AI-powered coding assistant integrated into GitLab’s DevSecOps platform. This flaw allowed attackers to exploit GitLab Duo’s context-aware features to exfiltrate private source code and inject malicious content into AI-generated responses.

What Happened?

Security researchers from Legit Security identified an indirect prompt injection vulnerability in GitLab Duo. Unlike direct prompt injections, this method involved embedding malicious instructions within various project elements such as merge request descriptions, commit messages, issue comments, and even source code. GitLab Duo, designed to analyze the entire project context to provide assistance, inadvertently processed these hidden prompts, leading to unintended behaviors. 

Attackers employed sophisticated obfuscation techniques, including Unicode smuggling, Base16 encoding, and KaTeX rendering in white text, to conceal these prompts from human reviewers while still being interpreted by the AI. 

The Exploit Chain

The vulnerability’s exploitation involved several steps:

  1. Embedding Hidden Prompts: Attackers inserted concealed instructions into project elements. 
  2. Triggering AI Processing: When a user interacted with GitLab Duo, the AI processed the entire context, including the malicious prompts. 
  3. Data Exfiltration via HTML Injection: GitLab Duo’s real-time markdown rendering converted responses into HTML. Attackers leveraged this by injecting <img> tags with src attributes pointing to attacker-controlled servers. These tags included Base64-encoded snippets of private source code, which browsers would then request, inadvertently sending sensitive data to the attackers.  

GitLab’s Response

Upon responsible disclosure on February 12, 2025, GitLab acknowledged the vulnerabilities and promptly released patches to address them. The company emphasized the importance of AI safety and the need for robust input sanitization, especially when integrating AI assistants into development workflows. 

Lessons Learned

This incident underscores the potential risks associated with integrating AI tools into software development processes:

  • Contextual Awareness as a Double-Edged Sword: While AI assistants like GitLab Duo aim to provide comprehensive assistance by analyzing entire project contexts, this feature can be exploited if not properly secured. 
  • Importance of Input Sanitization: Ensuring that all inputs, especially those rendered in HTML, are adequately sanitized is crucial to prevent injection attacks. 
  • Need for Continuous Security Assessments: As AI tools evolve, regular security evaluations are essential to identify and mitigate emerging threats.

Moving Forward

Organizations leveraging AI-powered development tools should:

  • Implement Strict Input Validation: Ensure that all user inputs are validated and sanitized before processing.
  • Educate Development Teams: Train developers on the potential risks associated with AI tools and best practices to mitigate them.
  • Stay Updated: Regularly update AI tools and monitor for security patches and advisories.

By taking these precautions, organizations can harness the benefits of AI in development while minimizing potential security risks.

For a visual demonstration of GitLab Duo’s vulnerability resolution capabilities, you can watch the following video:

Related Posts

Scroll to Top