Skip to content
#

genai-security

Here are 18 public repositories matching this topic...

The GenAI API Pentest Platform is a API security testing tool that leverages multiple Large Language Models (LLMs) to perform intelligent, context-aware API security assessments. Unlike traditional tools that rely on pattern matching, this platform uses AI to understand logic, predict vulnerabilities, and generate sophisticated attack scenario.

  • Updated Aug 21, 2025
  • Python

The most comprehensive open-source mapping of OWASP GenAI risks to industry frameworks — 37 files, 16 frameworks, 3 source lists: LLM Top 10, Agentic Top 10, DSGAI 2026. OT/ICS, EU AI Act, NIST, ISO 27001, ISO 42001, CIS, SAMM, ENISA, NHI, AIVSS.

  • Updated Apr 6, 2026
  • JavaScript

LLM Sentinel Red Teaming Platform is an enterprise-grade framework for automated security testing of Large Language Models, detecting vulnerabilities such as jailbreaks, prompt injection, and system prompt leakage across multiple providers, with structured attack orchestration, risk scoring, and security reporting to harden models before production

  • Updated Mar 4, 2026
  • Python

ZeroShield AI Mesh Firewall is a centralized AI security gateway for governing LLM and RAG traffic with inline prompt injection defense, vector database isolation, multi model routing control, and compliance grade observability.

  • Updated Mar 20, 2026

Agentic AI Request Forgery (AARF) – New vulnerability class exploiting planner ➝ memory ➝ plugin chaining in MCP Server, MAS, LangChain, and A2A agents. Red Team playbooks, threat models, OWASP Top 10 proposal.

  • Updated May 12, 2025

Improve this page

Add a description, image, and links to the genai-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the genai-security topic, visit your repo's landing page and select "manage topics."

Learn more