AI Security
26 tools found
Deep Live Cam
Deepfake software for AI-generated media creation and realistic adversary simulations.
GuardRail OSS
Framework enhancing AI outputs via conditional logic and emotional intelligence (AiEQ).
WiFi DensePose
Real-time, camera-free human pose detection using WiFi CSI and machine learning.
Agent Name Service
Secure AI agent registry based on OWASP GenAI ANS Protocol for safe agent interaction.
DSPy.ts
AI framework in JS/TS for building smart, private apps directly in the browser.
FACT
MCP tool replacing vectors with prompts for fast, auditable LLM-powered data retrieval.
Ultrasonic Agentics
Framework to hide secret commands and data in audio/video using secure steganography.
CAI
Lightweight framework to build cybersecurity AIs (CAIs), optimized for bug bounty hunting and vulnerability analysis.
RECONWITHME
AI assistant designed to answer interactive queries related to cybersecurity topics.
Gemini CLI
Gemini CLI automates tasks, builds apps, and interacts with code using multimodal AI.
feedly
AI-powered threat intel platform for faster OSINT collection, analysis, and sharing.
Jarvis-GPT
Interacts with ChatGPT by voice, performs computer commands, and plays music.
PentestGPT
GPT-enhanced penetration testing tool, focused on AI-powered cybersecurity.
VectorSmuggle
POC for data exfiltration via embeddings in AI systems using RAG models.
LitterBox
Controlled sandbox to test and analyze payloads with LLM-assisted insights.
Auto Red Team
Uses GPT-4 to generate prompts that bypass GPT-3.5 safety restrictions.
Red Teaming LLM
Adapted code to test LLMs and find flaws using Azure OpenAI endpoints.
AI Agents Attack Matrix
TTP matrix for attacking generative AI agents and autonomous systems.
AI LLM-C2-Server
C2 server with integrated LLM to enhance adversary simulation via AI.
Spell Whisperer
Interactive prompt injection challenge using Grok or other LLM APIs.
Broken Hill
Broken Hill is a productionized, ready-to-use automated attack tool that generates crafted prompts to bypass restrictions in large language models (LLMs) using the greedy coordinate gradient (GCG) attack.
RAINK
There's power in AI in that you can "throw a problem at it" and get some result, without even fully defining the problem.
ART
Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data).