A plugin-based gateway that orchestrates other MCPs and allows developers to build enterprise-grade agents on top of it.
An Execution Isolation Architecture for LLM-Based Agentic Systems
An intentionally vulnerable AI chatbot to learn and practice AI Security.
Curated list of links, references, books, videos, tutorials (free or paid), exploits, CTFs, hacking practices, etc. related to GenAI and LLM security.
The GenAI API Pentest Platform is an API security testing tool that leverages multiple Large Language Models (LLMs) to perform intelligent, context-aware API security assessments. Unlike traditional tools that rely on pattern matching, this platform uses AI to understand application logic, predict vulnerabilities, and generate sophisticated attack scenarios.
Comprehensive security scanner for Model Context Protocol (MCP) servers
Objective-driven adversarial testing framework for GenAI systems aligned with OWASP GenAI Top 10 risks.
The most comprehensive open-source mapping of OWASP GenAI risks to industry frameworks — 37 files, 16 frameworks, 3 source lists: LLM Top 10, Agentic Top 10, DSGAI 2026. OT/ICS, EU AI Act, NIST, ISO 27001, ISO 42001, CIS, SAMM, ENISA, NHI, AIVSS.
Towards Automating Data Access Permissions in AI Agents
An In-Depth Investigation of Data Collection in LLM App Ecosystems
A GenAI agent and tool registry system for securely vending scoped-down JIT credentials.
LLM Sentinel Red Teaming Platform is an enterprise-grade framework for automated security testing of Large Language Models. It detects vulnerabilities such as jailbreaks, prompt injection, and system prompt leakage across multiple providers, with structured attack orchestration, risk scoring, and security reporting to harden models before production.
AI agent runtime governance control plane: intercept tool calls with PII protection, approvals, and formal verification.
ZeroShield AI Mesh Firewall is a centralized AI security gateway for governing LLM and RAG traffic, with inline prompt injection defense, vector database isolation, multi-model routing control, and compliance-grade observability.
Agentic AI Request Forgery (AARF) – a new vulnerability class exploiting planner ➝ memory ➝ plugin chaining in MCP servers, MAS, LangChain, and A2A agents. Includes Red Team playbooks, threat models, and an OWASP Top 10 proposal.
A curated list of awesome resources for AI system security.
TrustLayer — Security reviews, threat models, and control patterns for GenAI systems
Practical security patterns for GenAI, LLM, and AI workloads across GCP, AWS, and Azure.