Trust Assessment
conclave received a trust score of 83/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are a shell command in the skill description that could lead to command injection and credential exposure (high), and a shell command in the skill description that could lead to command injection (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Shell command in skill description could lead to command injection and credential exposure | LLM | skill.md:40 |
| MEDIUM | Shell command in skill description could lead to command injection | LLM | skill.md:18 |

HIGH (skill.md:40)
The skill description contains a shell command (`echo "sk_..." > .conclave-token && chmod 600 .conclave-token`) intended for user setup. If an AI agent is configured to parse and execute shell snippets directly from skill documentation (which is untrusted input), this could lead to command injection. Specifically, this command writes a sensitive API token to a file, which could be exploited if the token is dynamically generated or retrieved by the agent and then written, or if the command is altered by a prompt injection. Recommendations:
1. Avoid including executable shell commands directly within skill documentation that is processed by an AI agent.
2. If shell commands are necessary for setup, present them in a way that clearly distinguishes them as manual steps for the operator, not for agent execution (e.g., in a separate `README.md` or a dedicated setup script).
3. Implement strict sandboxing and explicit tool calls for any shell execution capabilities within the AI agent, to prevent arbitrary command execution from untrusted input.
4. For token management, rely on secure environment variable injection or dedicated credential management tools rather than instructing the agent to write files.

MEDIUM (skill.md:18)
The skill description contains a shell command (`curl -X POST https://api.conclave.sh/register ...`) intended for user setup. While this specific command performs a registration against a legitimate API, the capability to execute arbitrary `curl` commands from untrusted input is a security risk, as it could be manipulated to exfiltrate data to arbitrary endpoints. As with the high-severity finding, an agent that parses and executes shell snippets from skill documentation could be induced to run it; recommendations 1–3 above apply here as well.
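Recommendation 4 can be sketched in shell. This is a minimal illustration, not part of the conclave skill: the variable name `CONCLAVE_TOKEN` and the placeholder value are assumptions.

```shell
#!/bin/sh
# Hypothetical pattern (variable name CONCLAVE_TOKEN is an assumption):
# the operator exports the token once, outside any agent-readable docs.
# A placeholder value is used here so the sketch runs end to end.
export CONCLAVE_TOKEN="sk_placeholder"

# Consumers read the token from the environment and fail fast when unset,
# instead of writing it to (or reading it from) a .conclave-token file.
token="${CONCLAVE_TOKEN:?CONCLAVE_TOKEN is not set}"
printf 'token is set (%d characters)\n' "${#token}"
# prints: token is set (14 characters)
```

The `:?` parameter expansion aborts with an error message when the variable is unset, so a misconfigured environment fails loudly instead of silently proceeding with an empty credential.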
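Likewise, the sandboxing recommendation could gate outbound requests behind an explicit host allowlist before any `curl` runs. The helper below and its one-host allowlist are illustrative assumptions, not part of conclave:

```shell
#!/bin/sh
# Sketch: refuse to call curl unless the URL's host is on an explicit
# allowlist, so a prompt-injected URL cannot exfiltrate data elsewhere.
check_url() {
  allowed_host="api.conclave.sh"   # assumed allowlist of one host
  host="$(printf '%s' "$1" | sed -n 's#^https://\([^/]*\)/.*#\1#p')"
  if [ "$host" = "$allowed_host" ]; then
    echo "ALLOW $1"                # safe to pass to curl
  else
    echo "DENY $1"                 # block the request entirely
  fi
}

check_url "https://api.conclave.sh/register"   # prints: ALLOW https://api.conclave.sh/register
check_url "https://attacker.example/collect"   # prints: DENY https://attacker.example/collect
```

Note that the check is deliberately conservative: any URL whose host cannot be parsed, or that is not HTTPS, also falls through to the deny branch.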
Full report: https://skillshield.io/report/7eb9a65b3717f17d
Powered by SkillShield