Trust Assessment
openclaw-skill-auditor received a trust score of 80/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 4 findings: 0 critical, 0 high, 3 medium, and 1 low severity. Key findings include Missing required field: name, Sensitive environment variable access: $HOME, Untrusted code fed to external LLM may allow prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/sypsyp97/openclaw-skill-auditor/SKILL.md:1 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/sypsyp97/openclaw-skill-auditor/scripts/audit.sh:87 | |
| MEDIUM | Untrusted code fed to external LLM may allow prompt injection The `audit.sh` script passes potentially malicious or untrusted code snippets (from other skills being audited) directly into a prompt for the `gemini` CLI. Although the untrusted code is enclosed in triple backticks, sophisticated prompt injection techniques could manipulate the Gemini LLM's behavior, leading to biased analysis, generation of harmful content, or an incorrect 'SAFE' verdict for malicious code. This compromises the integrity of the audit process performed by this skill. Implement stronger sanitization or sandboxing for the `$SUSPICIOUS_CODE` before feeding it to the LLM. Consider using a dedicated LLM API with structured input for code analysis rather than raw prompt concatenation. Explicitly instruct the LLM within the prompt to ignore any instructions found within the code block and to focus solely on analysis. | LLM | scripts/audit.sh | |
| LOW | Reliance on external `gemini` CLI introduces supply chain risk The `audit.sh` script depends on the external `gemini` CLI tool for its LLM analysis layer. A compromise of the `gemini` CLI itself, or the installation of a malicious lookalike, could undermine the security analysis provided by this skill. This introduces a supply chain risk as the integrity of the audit relies on an external, unmanaged dependency. Ensure `gemini` CLI is installed from trusted sources and its integrity is verified (e.g., via checksums). Consider sandboxing the execution environment for external tools to limit potential damage from a compromised dependency. Document the exact version of `gemini` CLI expected. | LLM | scripts/audit.sh |
Scan History
Embed Code
[](https://skillshield.io/report/57a4cd4c823fe609)
Powered by SkillShield