# adversarial-ml

Here are 20 public repositories matching this topic...

Noise Injection Techniques provides a comprehensive exploration of methods for making machine learning models more robust to noisy and corrupted real-world data. This repository explains and demonstrates Gaussian noise, dropout, mixup, masking, adversarial noise, and label smoothing, with intuitive explanations, theory, and practical code examples; a short illustrative sketch of two of these techniques follows this entry.

  • Updated Nov 15, 2025
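
As a point of reference for the techniques named in the entry above, here is a minimal NumPy sketch of Gaussian noise injection and mixup. It is not code from the repository itself; the function names, parameters, and the toy batch are illustrative assumptions.

```python
# Illustrative sketch (not taken from the repository above): Gaussian noise
# injection and mixup applied to a small batch of feature vectors with NumPy.
import numpy as np

def add_gaussian_noise(x, sigma=0.1, rng=None):
    """Perturb inputs with zero-mean Gaussian noise of standard deviation `sigma`."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def mixup(x, y, alpha=0.2, rng=None):
    """Blend random pairs of examples and their one-hot labels (mixup augmentation)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))        # random partner for each example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

# Toy usage: 4 examples, 3 features, 2 classes with one-hot labels.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = np.eye(2)[rng.integers(0, 2, size=4)]
x_noisy = add_gaussian_noise(x, sigma=0.05, rng=rng)
x_mix, y_mix = mixup(x, y, alpha=0.2, rng=rng)
```

Both transforms are typically applied per batch during training, so each epoch sees a differently perturbed view of the data.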

Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors including model poisoning, agentic AI exploits, and privacy breaches.

  • Updated Sep 19, 2025

Complete 90-day learning path for AI security: ML fundamentals → LLM internals → AI threats → Detection engineering. Built from first principles with NumPy implementations, Jupyter notebooks, and production-ready detection systems.

  • Updated Dec 30, 2025
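
To give a flavor of the "from first principles with NumPy" approach described above, here is a small sketch of a fast gradient sign method (FGSM) style perturbation against a logistic-regression model. It is not code from the repository; the weights, inputs, and epsilon value are placeholder assumptions for demonstration only.

```python
# Illustrative sketch (not taken from the repository above): an FGSM-style
# adversarial perturbation on a tiny NumPy logistic-regression model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Shift `x` by epsilon in the sign of the loss gradient w.r.t. the input.

    For logistic regression with binary cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y) * w, where p is the
    predicted probability.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w                  # d(loss)/dx
    return x + epsilon * np.sign(grad_x)  # adversarial example

# Toy demo with arbitrary model parameters and a single 3-feature input.
w = np.array([0.8, -1.2, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.3])
y = 1.0                                   # true label

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.1)
print("clean prob:       ", sigmoid(x @ w + b))
print("adversarial prob: ", sigmoid(x_adv @ w + b))
```

The perturbation moves the input in the direction that increases the loss, so the predicted probability for the true class drops even though the input changes by at most epsilon per feature.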

Complete AI-powered security training program with 25 hands-on labs, 15 CTF challenges, and enterprise integrations. Learn ML-based threat detection, LLM security analysis, adversarial ML, and cloud security. From beginner to expert.

  • Updated Dec 30, 2025
  • Python
