Security Research

Run Your Own AI Security Scanner on a Mac Mini — Free for Life

A $599 machine, one npm command, and your AI agent skills get scanned for 99 vulnerability patterns across 17 categories. No cloud. No API keys. No subscription. Your code never leaves your machine.

February 6, 2026 · 8 min read · MerchantGuard Research

The Problem Nobody Talks About

Every week, new AI agent skills and plugins ship on ClawHub, npm, and GitHub. Developers install them into their Claude Code sessions, their autonomous agents, their production pipelines. Most of them never get a security review.

The agentic web is today where the npm ecosystem was in 2018 — before event-stream, before ua-parser-js, before developers learned that npm install is an act of trust. Except the stakes are higher now. Agent skills can read your files, make API calls, execute code, and handle payment data. A single malicious pattern in a skill manifest can exfiltrate your environment variables, your API keys, your customer data.

Cloud-based scanners exist. But they require you to upload your code to someone else's server. For developers building in regulated verticals — payments, healthcare, finance — that's a non-starter. Your compliance team won't approve it. Your customers won't accept it. And honestly, if you're worried about supply chain security, sending your code to yet another third party is the wrong direction.

The Setup: 5 Minutes, Zero Configuration

GuardScan is an open-source security scanner built specifically for AI agent skills. It runs entirely on your machine. Here's the entire setup:

Terminal
# That's it. Scan your current directory.
npx @merchantguard/guardscan .

# Scan a specific skill you just downloaded
npx @merchantguard/guardscan ./my-agent-skill/

# Scan before you commit
npx @merchantguard/guardscan ./src --quiet

No npm install required (npx fetches it on demand). No API keys. No account creation. No telemetry. The binary runs, scans your files against 99 regex patterns across 17 security categories, calculates a score from 0-100, and exits. Every byte stays on your machine.

What It Catches

GuardScan isn't a generic linter. It was built for the specific threat model of AI agent skills — code that runs with elevated permissions, handles sensitive data, and interacts with external services on your behalf.

Secrets: hardcoded API keys, tokens, private keys, passwords in source
Prompt Injection: system prompt leaking, jailbreak patterns, instruction override
Data Exfiltration: unauthorized data sending, covert channels, base64 encoding tricks
PCI-DSS: raw card numbers, CVV storage, PAN logging, unencrypted cardholder data
Tool Abuse: unauthorized tool calls, privilege escalation, scope violations
Obfuscation: base64 payloads, eval chains, minified malicious code
Injection: SQL injection, command injection, SSRF, path traversal
Autonomy Abuse: self-replication, resource exhaustion, unauthorized persistence

17 categories total, including auth, XSS, config, rate-limiting, compliance (GDPR), crypto (weak algorithms), file access, malware signatures, and skill manifest validation. The full pattern list is open source — you can read every regex, understand every rule, and contribute new ones.
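For a flavor of what a rule looks like, here is an illustrative simplification of a secrets pattern: flag long literal strings assigned to key/secret/token-like identifiers. This is a sketch in the spirit of the pattern set, not one of GuardScan's actual 99 rules (those live in the repo):

```typescript
// Illustrative only: a simplified secrets rule, not GuardScan's real pattern.
// Matches key/secret/token-like names assigned a long quoted literal.
const HARDCODED_SECRET =
  /(?:api[_-]?key|secret|token)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i;

function hasHardcodedSecret(source: string): boolean {
  return HARDCODED_SECRET.test(source);
}

console.log(hasHardcodedSecret(`const apiKey = "sk_live_abcdef1234567890";`)); // true
console.log(hasHardcodedSecret(`const apiKey = process.env.API_KEY;`));        // false
```

Real rules also need to keep false positives down — note how reading the environment variable instead of a literal doesn't trip the pattern.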

The Mac Mini Compliance Lab

Here's the math that should change how you think about security tooling:

Mac Mini M4 (16GB): $599, once
GuardScan npm package: $0, forever
Ollama + local LLM (Llama 3, Mistral, etc.): $0, forever

Total: $599 for a permanent security scanner

Compare that to cloud security scanners charging $50-500/month. In one year, you've paid $600-6,000 and you still don't own anything. The Mac Mini pays for itself in month one.

But the real unlock isn't cost — it's sovereignty. Your code never touches a third-party server. Your scan results never leave your network. Your compliance posture is yours alone. For regulated industries, this isn't a nice-to-have. It's a requirement.

Feed Your Local LLM Your Security Posture

This is the part that changes the game. GuardScan doesn't just find problems — it generates a CLAUDE.md file that your local AI assistant can ingest:

Terminal
# Generate fix instructions for your local LLM
npx @merchantguard/guardscan . --claudemd > GUARDSCAN.md

# Now your AI assistant knows:
# - Every vulnerability in your codebase
# - Exactly where each one is (file + line number)
# - How to fix each one
# - Priority order (critical first)

Drop that GUARDSCAN.md file into your Claude Code project, or feed it to Ollama running locally, or pipe it into any LLM. Your AI assistant now understands your security posture — what's vulnerable, what's compliant, and exactly what to fix — without a single byte leaving your machine.

This is what "AI-native security" actually means. Not a dashboard you log into. Not a PDF report you file away. A living document that your AI can act on immediately.

Drop It Into CI/CD in 3 Lines

GuardScan outputs SARIF 2.1.0 — the standard format for static analysis results. This means it plugs directly into GitHub Code Scanning, VS Code SARIF Viewer, and any CI/CD pipeline:

GitHub Actions
- name: Security Scan
  run: npx @merchantguard/guardscan . --sarif > guardscan.sarif

- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: guardscan.sarif

Every PR now gets scanned. Critical findings block the merge. Security annotations show up inline on the diff. Zero configuration beyond that snippet.
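Because SARIF 2.1.0 is plain JSON, you can also consume the log yourself. A minimal sketch of tallying findings by severity; the field names (`runs[].results[].level`) follow the SARIF spec, not anything GuardScan-specific:

```typescript
// Minimal SARIF 2.1.0 tally: count results per severity level.
// Shape per the SARIF spec; GuardScan's actual output may carry more fields.
interface SarifLog {
  runs: { results?: { level?: string; message: { text: string } }[] }[];
}

function countByLevel(log: SarifLog): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const run of log.runs) {
    for (const result of run.results ?? []) {
      const level = result.level ?? "warning"; // SARIF's default level
      counts[level] = (counts[level] ?? 0) + 1;
    }
  }
  return counts;
}

// Example log with two errors and one default-level (warning) result
const demo: SarifLog = {
  runs: [{ results: [
    { level: "error", message: { text: "Hardcoded API key" } },
    { level: "error", message: { text: "eval() chain" } },
    { message: { text: "Missing rate limit" } },
  ]}],
};
console.log(countByLevel(demo)); // { error: 2, warning: 1 }
```

The same tally works on any SARIF producer, which is the point of emitting a standard format.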

Pre-commit hook
#!/bin/sh
# .git/hooks/pre-commit (make it executable: chmod +x .git/hooks/pre-commit)
npx @merchantguard/guardscan . --quiet
# Exit code 1 = critical findings = commit blocked

What You Own When You Run It Locally

Your scan history: every scan result stays on your disk. Build a compliance timeline without a vendor.
Your patterns: fork the repo and add your own rules. Custom patterns for your stack, your verticals, your threat model.
Your pipeline: SARIF output goes wherever you want: GitHub, GitLab, Jenkins, Buildkite, or a local dashboard.
Your LLM context: CLAUDE.md output means your local AI gets smarter about YOUR codebase with every scan.
Your uptime: no vendor outage blocks your deploys. No rate limits. No "we're upgrading, check back later."

MIT Licensed. Fork It.

The entire scanner is MIT licensed. The patterns are open. The scoring algorithm is documented. If you want to build on top of it, go ahead:

Use as a library
import fs from 'fs';
import { scanFiles, generateClaudeMd } from '@merchantguard/guardscan';

const result = scanFiles([
  { name: 'skill.ts', content: fs.readFileSync('skill.ts', 'utf-8') }
]);

console.log(result.securityScore);    // 0-100
console.log(result.summary.critical); // critical findings count

// Generate AI-readable report
fs.writeFileSync('GUARDSCAN.md', generateClaudeMd(result));

Embed it in your agent marketplace registration flow. Run it in your skill review pipeline. Build a dashboard on top of it. The scanner is the foundation — what you build on it is up to you.

Why This Matters Now

The agent economy is shipping skills faster than anyone can review them. ClawHub has thousands of listings. npm has agent tool packages shipping daily. GitHub repos with MCP servers and skill manifests are proliferating.

Nobody is signing skill.md files. Nobody is verifying what tools actually do before granting them access to file systems, APIs, and payment infrastructure. The supply chain is wide open.

You can wait for the first major agent skill supply chain attack — the agentic web's event-stream moment — or you can start scanning now. Locally. On hardware you own. With patterns you can read and verify yourself.

Try it now

One command. No signup. No API key. Takes about 2 seconds.

npx @merchantguard/guardscan .
Available on npm and GitHub, plus a hosted web scanner.

Technical Specs

Patterns: 99 regex rules
Categories: 17
Scoring: half-life decay, 100 * (0.5 ^ (deductions / 80))
Output formats: SARIF, JSON, CLAUDE.md, terminal
Runtime: Node.js 18+
Dependencies: zero runtime dependencies
Network calls: zero (100% offline)
License: MIT
Scan time: under 1 second for typical projects
Payment detection: Stripe, Adyen, Braintree, PayPal, Square, and 7 more
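The half-life scoring formula in the specs means every 80 points of accumulated deductions halves the score. As a sketch:

```typescript
// Half-life scoring from the spec table: 80 deduction points halve the score.
function securityScore(deductions: number): number {
  return 100 * Math.pow(0.5, deductions / 80);
}

console.log(securityScore(0));   // 100 (clean scan)
console.log(securityScore(80));  // 50
console.log(securityScore(160)); // 25
```

A decay curve never reaches zero, so even a badly flawed codebase keeps a nonzero score; what matters in practice is the critical-findings count, which gates commits and merges regardless of the score.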