AIsbom is a specialized security and compliance scanner for Machine Learning artifacts.
Unlike generic SBOM tools that only parse requirements.txt, AIsbom performs Deep Binary Introspection on model files (.pt, .pkl, .safetensors, .gguf) to detect malware risks and legal license violations hidden inside the serialized weights.
Install directly from PyPI. No cloning required.
```bash
pip install aisbom-cli
```

Note: The package name is `aisbom-cli`, but the command you run is `aisbom`.
Point it at any directory containing your ML project. It scans recursively for requirements files AND binary model artifacts.
```bash
aisbom scan ./my-project-folder
```

You will see a combined Security & Legal risk assessment in your terminal:
```text
AI Model Artifacts Found
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Filename            ┃ Framework   ┃ Security Risk        ┃ Legal Risk                ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ bert_finetune.pt    │ PyTorch     │ CRITICAL (RCE Found) │ UNKNOWN                   │
│ safe_model.st       │ SafeTensors │ LOW                  │ UNKNOWN                   │
│ restricted_model.st │ SafeTensors │ LOW                  │ LEGAL RISK (cc-by-nc-4.0) │
│ llama-3-quant.gguf  │ GGUF        │ LOW                  │ LEGAL RISK (cc-by-nc-sa)  │
└─────────────────────┴─────────────┴──────────────────────┴───────────────────────────┘
```
A compliant `sbom.json` (CycloneDX v1.6), including SHA-256 hashes and license data, is generated in your current directory.
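If you need to post-process the output, CycloneDX 1.6 JSON is plain enough to consume with just the standard library. A minimal sketch, assuming the standard CycloneDX component layout (the exact properties AIsbom emits may vary):

```python
import json

# Load the CycloneDX 1.6 SBOM generated by `aisbom scan`.
with open("sbom.json") as f:
    sbom = json.load(f)

# Each component carries its name, hashes, and license data.
for comp in sbom.get("components", []):
    hashes = {h["alg"]: h["content"] for h in comp.get("hashes", [])}
    licenses = [
        lic.get("license", {}).get("id", "UNKNOWN")
        for lic in comp.get("licenses", [])
    ]
    print(comp["name"], hashes.get("SHA-256"), licenses)
```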
Scan models directly on Hugging Face without downloading terabytes of weights. We use HTTP Range requests to inspect headers over the wire.
```bash
aisbom scan hf://google-bert/bert-base-uncased
```

- Speed: Scans in seconds, not minutes.
- Storage: Zero disk usage.
- Security: Verify SafeTensors compliance before you even `git clone` (see the sketch below).
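AIsbom's remote scanner is internal, but the core trick is easy to reproduce: a SafeTensors file starts with an 8-byte little-endian header length followed by a JSON header, so two small Range requests suffice to inspect a multi-gigabyte file. A rough sketch using `requests` (the file name `model.safetensors` is an assumption; real repos may shard weights differently):

```python
import json
import struct

import requests

# Hugging Face serves raw files at .../resolve/<revision>/<filename>.
url = "https://huggingface.co/google-bert/bert-base-uncased/resolve/main/model.safetensors"

# First 8 bytes: little-endian u64 giving the JSON header length.
head = requests.get(url, headers={"Range": "bytes=0-7"}).content
(header_len,) = struct.unpack("<Q", head)

# Fetch only the JSON header -- never the multi-GB tensor payload.
raw = requests.get(url, headers={"Range": f"bytes=8-{7 + header_len}"}).content
header = json.loads(raw)

print("tensors:", len([k for k in header if k != "__metadata__"]))
print("metadata:", header.get("__metadata__", {}))
```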
Detect "Silent Regressions" in your AI Supply Chain. The diff command compares your current SBOM against a known baseline JSON.
```bash
aisbom diff baseline_sbom.json new_sbom.json
```

Drift Analysis Output:
```text
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
┃ Component     ┃ Type     ┃ Change ┃ Security Risk   ┃ Legal Risk            ┃ Details              ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
│ drift-risk.pt │ Modified │ DRIFT  │ LOW -> CRITICAL │ -                     │                      │
│ drift-license │ Modified │ DRIFT  │ -               │ UNKNOWN -> LEGAL RISK │ Lic: MIT -> CC-BY-NC │
│ drift-hash.pt │ Modified │ DRIFT  │ INTEGRITY FAIL  │ -                     │ Hash: ...            │
└───────────────┴──────────┴────────┴─────────────────┴───────────────────────┴──────────────────────┘
```
It enforces Quality Gates by exiting with code 1 if (see the sketch after this list):

- A new CRITICAL risk is introduced.
- A component's risk level escalates (e.g., LOW -> CRITICAL).
- Hash Drift: a verified file has been tampered with (marked INTEGRITY FAIL).
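Conceptually, the gate is a keyed comparison of two component lists. The sketch below shows just the hash-drift check, assuming the CycloneDX layout described earlier; it is an illustration, not AIsbom's implementation:

```python
import json
import sys

def components(path):
    """Index a CycloneDX SBOM's components by name."""
    with open(path) as f:
        return {c["name"]: c for c in json.load(f).get("components", [])}

def sha256(comp):
    return next(
        (h["content"] for h in comp.get("hashes", []) if h["alg"] == "SHA-256"),
        None,
    )

base = components(sys.argv[1])  # baseline_sbom.json
new = components(sys.argv[2])   # new_sbom.json

failed = False
for name, comp in new.items():
    old = base.get(name)
    # Risk-escalation checks are omitted here; only hash drift is shown.
    if old and sha256(old) != sha256(comp):
        print(f"INTEGRITY FAIL: {name} hash drifted")
        failed = True

sys.exit(1 if failed else 0)
```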
For high-security environments, switch from "Blocklisting" (looking for malware) to "Allowlisting" (blocking everything unknown).
```bash
aisbom scan model.pkl --strict
```

This reports any import that is not on the allowlist.
- Allowed Libraries: `torch` (and submodules), `numpy`, `collections`, `typing`, `datetime`, `re`, `pathlib`, `copy`, `functools`, `dataclasses`, `uuid`.
- Allowed Builtins: `dict`, `list`, `set`, `tuple`, `int`, `float`, `str`, `bytes`, etc.
- Any other global import is flagged as CRITICAL (a toy version of this check follows).
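Conceptually, strict mode reduces to a prefix match against this allowlist. A toy version of the check (illustrative only; the real matching rules live inside AIsbom):

```python
# Hypothetical sketch of strict-mode matching; not AIsbom's source.
ALLOWED_MODULES = (
    "torch", "numpy", "collections", "typing", "datetime",
    "re", "pathlib", "copy", "functools", "dataclasses", "uuid",
)
ALLOWED_BUILTINS = {"dict", "list", "set", "tuple", "int", "float", "str", "bytes"}

def is_allowed(module: str, name: str) -> bool:
    if module in ("builtins", "__builtin__"):
        return name in ALLOWED_BUILTINS
    # "torch" also admits submodules such as "torch.nn.modules".
    return any(module == m or module.startswith(m + ".") for m in ALLOWED_MODULES)

assert is_allowed("torch.nn.modules", "Linear")
assert not is_allowed("os", "system")  # flagged CRITICAL in strict mode
```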
Generate a GitHub-flavored Markdown report suitable for Pull Request comments.
```bash
aisbom scan . --format markdown --output report.md
```

Generate a standard SPDX 2.3 JSON Software Bill of Materials.
```bash
aisbom scan . --format spdx --output sbom.spdx.json
```

Add AIsbom to your GitHub Actions pipeline.
Behavior: The scanner returns exit code 1 if Critical risks are found, automatically blocking the build/merge.
```yaml
name: AI Security Scan
on: [pull_request]
jobs:
  aisbom-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan AI Models
        uses: Lab700xOrg/aisbom@v0
        with:
          directory: '.'
```

Don't like reading JSON? You can visualize your security posture using our offline viewer.
1. Run the scan to generate `sbom.json`.
2. Go to aisbom.io/viewer.html.
3. Drag and drop your JSON file.
4. Get an instant dashboard of risks, license issues, and compliance stats.
Note: The viewer is client-side only. Your SBOM data never leaves your browser.
AI models are not just text files; they are executable programs and IP assets.
- The Security Risk: PyTorch (`.pt`) files are Zip archives containing Pickle bytecode. A malicious model can execute arbitrary code (RCE) the instant it is loaded.
- The Legal Risk: A developer might download a "Non-Commercial" model (CC-BY-NC) and deploy it to production. Since the license is hidden inside the binary header, standard tools miss it.
- The Solution: We look inside. We decompile bytecode and parse internal metadata headers without loading the heavy weights into RAM (a minimal header parse is sketched below).
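For SafeTensors specifically, "looking inside" means reading only the JSON header at the front of the file; publisher license strings, when present, sit in its `__metadata__` block. A minimal local parse (the `license` key is an assumption about how the publisher populated that block):

```python
import json
import struct

def safetensors_metadata(path):
    """Read only the JSON header of a .safetensors file -- no weights."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len)).get("__metadata__", {})

meta = safetensors_metadata("restricted_model.st")
print(meta.get("license", "UNKNOWN"))  # e.g. "cc-by-nc-4.0"
```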
Security tools require trust. We do not distribute malicious binaries.
However, AIsbom includes a built-in generator so you can create safe "mock artifacts" to verify the scanner works.
1. Install:

```bash
pip install aisbom-cli
```

2. Generate Test Artifacts: Run this command to create a mock "Pickle Bomb" and a "Restricted License" model in your current folder.

```bash
aisbom generate-test-artifacts
```

Result: Files named `mock_malware.pt`, `mock_restricted.safetensors`, and `mock_restricted.gguf` are created.
3. Scan them:

```bash
aisbom scan .
```

Result: You will see `mock_malware.pt` flagged as CRITICAL, and `mock_restricted.safetensors` and `mock_restricted.gguf` flagged as LEGAL RISK.
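To see why a mock "Pickle Bomb" trips the scanner while staying harmless, recall that pickling an object whose `__reduce__` returns a callable embeds a GLOBAL reference to that callable in the stream. A homemade equivalent (this is not AIsbom's generator, just an illustration):

```python
import os
import pickle

class MockBomb:
    """Pickles to a reference to os.getcwd: harmless to load,
    but the `os`/`posix` global is exactly what a scanner flags."""
    def __reduce__(self):
        return (os.getcwd, ())

with open("homemade_mock.pkl", "wb") as f:
    pickle.dump(MockBomb(), f)
```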
AIsbom uses a static analysis engine to disassemble Python Pickle opcodes. It looks for specific GLOBAL and STACK_GLOBAL instructions that reference dangerous modules:
- `os` / `posix` (system calls)
- `subprocess` (shell execution)
- `builtins.eval` / `exec` (dynamic code execution)
- `socket` (network reverse shells)
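You can reproduce a simplified version of that opcode walk with the standard library's `pickletools`; the sketch below flags GLOBAL/STACK_GLOBAL references to the modules above without ever unpickling anything:

```python
import pickletools

DANGEROUS = {"os", "posix", "nt", "subprocess", "socket", "builtins"}

def dangerous_globals(path):
    """Statically walk pickle opcodes; nothing is ever loaded."""
    hits, strings = [], []
    with open(path, "rb") as f:
        for op, arg, _pos in pickletools.genops(f):
            if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(arg)
            elif op.name == "GLOBAL":        # arg is "module name" on one line
                if arg.split(" ", 1)[0].split(".")[0] in DANGEROUS:
                    hits.append(arg)
            elif op.name == "STACK_GLOBAL":  # module/name pushed as strings
                # Simplification: assumes the two preceding strings are
                # the module and attribute, which holds for simple payloads.
                if len(strings) >= 2 and strings[-2].split(".")[0] in DANGEROUS:
                    hits.append(f"{strings[-2]} {strings[-1]}")
    return hits

# Scans the harmless artifact built earlier; e.g. prints ['posix getcwd'].
print(dangerous_globals("homemade_mock.pkl"))
```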
