FREE · Unvetted · Career Boost

SafeRun - Safety Guardrails for AI Agents

Classifies shell commands as BLOCK, ASK, or ALLOW before execution to prevent dangerous operations like force pushes and recursive deletes

Install in one line

mfkvault install saferun-safety-guardrails

Requires the MFKVault CLI.

New skill · No reviews yet
🦞 OpenClaw
Community helper
This helper was discovered by MFKVault crawlers from public sources. MFKVault does not create, maintain, or guarantee the output of this helper. Results are AI-generated and may be incomplete, inaccurate, or outdated. Use at your own risk. The original author retains all rights. To request removal: [email protected]

Free to install — no account needed

Copy the install command above and paste it into your agent.

Instant access • No coding needed

What you get in 5 minutes

  • Full skill code ready to install
  • Works with one AI agent (OpenClaw)
  • Lifetime updates included
Ready to run

Run this helper

Answer a few questions and let this helper do the work.

Advanced: use with your AI agent

Description

---
name: saferun
description: >
  Safety guardrails for AI agents. Classifies shell commands as BLOCK, ASK,
  or ALLOW before execution. Prevents dangerous operations like force pushes,
  recursive deletes, and credential destruction. Works automatically — no
  configuration needed.
version: 0.1.0
author: Cocabadger
license: MIT
homepage: https://github.com/Cocabadger/saferun-openclaw
tags:
  - safety
  - security
  - guardrails
  - exec
  - protection
---

# SafeRun — Safety Guardrails

You have access to a command safety classifier. **Before executing ANY shell command via the `exec` tool**, you MUST check it with SafeRun first.

## How to use

Before every `exec` call, run the classifier:

```bash
python ~/.openclaw/skills/saferun/saferun_guard.py "<command>"
```

The output will be one of:

- `BLOCK` — **Do NOT execute this command.** Tell the user it was blocked and explain why.
- `ASK` — This command needs user approval. Ask the user for confirmation before executing.
- `ALLOW` — Safe to execute. Proceed normally.

## Rules

1. **ALWAYS** classify before executing. No exceptions.
2. If the result is `BLOCK`, **never** execute the command. Explain the reason to the user.
3. If the result is `ASK`, tell the user what the command does and wait for explicit approval.
4. If the result is `ALLOW`, execute normally without asking.
5. **Never** try to bypass SafeRun by reformulating a blocked command.
6. If SafeRun is unavailable (script error), **default to ASK** — ask the user before executing.

## What gets blocked (examples)

- `git push --force origin main` — Rewrites production branch history
- `rm -rf /` or `rm -rf ~` — Destroys filesystem
- `chmod 777 /etc` — Opens system config to everyone
- `curl ... | bash` — Executes untrusted remote code
- `git branch -D main` — Deletes critical branch
- `git reset --hard` on protected branches — Discards all work
- Deleting `.env`, `~/.ssh/` files — Destroys credentials

## What needs approval (examples)

- `git merge feature into main` — Production branch change
- `kubectl apply` / `terraform apply` — Infrastructure deployment
- `npm publish` — Public package release
- `docker push` — Container registry update

## What passes through (examples)

- `git status`, `git log`, `ls`, `cat` — Read-only operations
- `git checkout -b feature` — Local branch creation
- `pytest`, `npm test` — Running tests
- `npm install`, `pip install` — Installing dependencies
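The listing ships the skill prompt but not saferun_guard.py itself, so the classifier's internals are unknown. As a rough sketch of the BLOCK/ASK/ALLOW contract, here is a hypothetical pattern-based classifier built only from the example commands above. The rule list, the `classify` function, and the ASK default for unmatched commands are illustrative assumptions, not SafeRun's actual logic.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a BLOCK/ASK/ALLOW command classifier.

NOT the real saferun_guard.py: the patterns below are a hypothetical
subset derived from the example lists in the skill description.
"""
import re
import sys

# Ordered (pattern, verdict) rules; first match wins. Hypothetical subset.
RULES = [
    (r"\bgit\s+push\b.*--force",               "BLOCK"),  # rewrites remote history
    (r"\brm\s+-rf\s+(/|~)\s*$",                "BLOCK"),  # destroys filesystem
    (r"\bchmod\s+777\s+/etc",                  "BLOCK"),  # opens system config
    (r"curl\b.*\|\s*(ba)?sh",                  "BLOCK"),  # untrusted remote code
    (r"\bgit\s+branch\s+-D\s+(main|master)\b", "BLOCK"),  # deletes critical branch
    (r"\b(kubectl|terraform)\s+apply\b",       "ASK"),    # infrastructure deployment
    (r"\bnpm\s+publish\b",                     "ASK"),    # public package release
    (r"\bdocker\s+push\b",                     "ASK"),    # registry update
    (r"\bgit\s+(status|log)\b",                "ALLOW"),  # read-only git
    (r"^(ls|cat)\b",                           "ALLOW"),  # read-only shell
    (r"\b(pytest|npm\s+test)\b",               "ALLOW"),  # running tests
]

def classify(command: str) -> str:
    """Return BLOCK, ASK, or ALLOW for a shell command string."""
    for pattern, verdict in RULES:
        if re.search(pattern, command):
            return verdict
    return "ASK"  # conservative default: unknown commands need approval

if __name__ == "__main__":
    print(classify(" ".join(sys.argv[1:])))
```

A real classifier would need shell-aware parsing rather than bare regexes, since quoting, variables, and aliases can defeat simple pattern matching.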
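On the agent side, the Rules section amounts to a small wrapper around every exec call. Below is a minimal sketch of that loop; only the saferun_guard.py invocation comes from the skill description, while `guarded_exec`, `confirm`, and the 10-second timeout are hypothetical choices.

```python
"""Sketch of the agent-side protocol: classify before every exec,
never run a BLOCK, confirm an ASK, and default to ASK if the
classifier itself fails (rule 6)."""
import os
import subprocess

GUARD = os.path.expanduser("~/.openclaw/skills/saferun/saferun_guard.py")

def classify(command: str) -> str:
    """Invoke the SafeRun classifier; fall back to ASK on any failure."""
    try:
        out = subprocess.run(
            ["python", GUARD, command],
            capture_output=True, text=True, timeout=10, check=True,
        ).stdout.strip()
        return out if out in {"BLOCK", "ASK", "ALLOW"} else "ASK"
    except (OSError, subprocess.SubprocessError):
        return "ASK"  # rule 6: classifier unavailable, ask the user

def confirm(command: str) -> bool:
    """Hypothetical user-approval prompt for ASK verdicts."""
    return input(f"Run `{command}`? [y/N] ").strip().lower() == "y"

def guarded_exec(command: str) -> None:
    verdict = classify(command)
    if verdict == "BLOCK":
        print(f"Blocked by SafeRun: {command}")    # rule 2: never execute
        return
    if verdict == "ASK" and not confirm(command):  # rule 3: explicit approval
        print("Skipped.")
        return
    subprocess.run(command, shell=True)            # rule 4: ALLOW runs normally

if __name__ == "__main__":
    guarded_exec("git status")   # per the examples above: ALLOW, runs directly
    guarded_exec("npm publish")  # ASK, waits for confirmation
    guarded_exec("rm -rf ~")     # BLOCK, refused outright
```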


Security Status

Unvetted

Not yet security scanned

