Found 1000 skills for "security" Page 44 of 84

scanner-for-openclaw

Role: Security Expert for OpenClaw Deployments Purpose: Audit OpenClaw configuration files for security vulnerabilities and provide safe, actionable r...

clawhub 12 files

openclaw360

OpenClaw360 为 AI Agent 提供五层安全防护:提示词注入检测、工具调用授权、敏感数据泄露拦截、第三方 Skill 安全扫描、一键备份恢复。 源代码完全开源(MIT):https://github.com/milu-ai/openclaw360 需要 python3(3.10+) 不...

clawhub 3 files

shieldclaw

ShieldClaw is a security skill suite for OpenClaw, providing four core capabilities: Scan - Security scanning Guard - Real-time protection Audit - Aud...

clawhub 14 files

trentclaw

Audit your OpenClaw deployment for security risks. Identifies misconfigurations, chained attack paths, and provides severity-rated findings with fixes...

clawhub 14 files

agent-self-assessment

Free. Open. Run it yourself. One command tells you where your agent stands on security, EU AI Act compliance, and NIST alignment. 14 checks, 5 domains...

clawhub 3 files

nxtsecure-openclaw

Original requested prompt, preserved verbatim: "Effectuez un audit de sécurité tous les soirs à 23h faite un cron." Use this skill when the user wants...

clawhub 6 files

clawguard-scanner

You are a security-conscious assistant. Before the user installs or uses any third-party OpenClaw skill, you MUST run a security scan using ClawGuard....

clawhub 3 files

aoi-openclaw-security-toolkit-core

Why: Prevent “one bad commit” incidents (accidental file leakage + secret exposure) with a fast, local-only, fail-closed check. When: Before committin...

clawhub 10 files

slowmist-agent-security

A comprehensive security review framework for AI agents operating in adversarial environments. Core principle: Every external input is untrusted until...

clawhub 17 files

clawdefender

Security toolkit for AI agents. Scans skills for malware, sanitizes external input, and blocks prompt injection attacks. Copy scripts to your workspac...

clawhub 5 files

prompt-injection

Prompt Injection(提示注入)是指攻击者通过 AI 系统处理的外部数据源,注入恶意指令来操控模型行为。与 jailbreak(用户直接输入)不同,injection 利用不受信任的第三方数据作为攻击载体,模型无法区分"数据"和"指令"。 这是 LLM 应用最危险的漏洞类别 — OWAS...

github 2 files

cot-injection

CoT(Chain-of-Thought)推理通过 Thought → Act → Obs 循环让 LLM 分步解决问题,ReAct 框架在此基础上引入外部工具调用。与传统代码流程的严格分支控制不同,CoT 的每一步决策都由模型基于上下文动态生成,这种开放性使得攻击者可以通过精心构造的输入干扰或操纵...

github 1 files