Trust Assessment
Security Specialist received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a direct instruction for tool execution (Command Injection).
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 78ae406e). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Direct instruction for tool execution (Command Injection) | Unknown | SKILL.md:9 |

The skill explicitly instructs the host LLM to 'run `cargo audit`'. If the LLM has access to a shell or a tool execution environment, this constitutes a command injection vulnerability. An attacker who can control the skill definition or the skill's input could modify this instruction to execute malicious commands. Even with the 'In future' qualifier, it is a direct command to the LLM.

Recommendation: Remove direct command execution instructions from the skill definition. Instead, define a structured tool or capability for `cargo audit` that the LLM can *choose* to use, and ensure that tool is properly sandboxed and its arguments are validated. If `cargo audit` is intended to be a built-in capability, it should be invoked through a structured tool call, not a natural language instruction within the skill's prompt.
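A minimal sketch of that recommendation, assuming a host that registers JSON-schema tools and executes them from Python; the `cargo_audit` tool name, schema, and allow-list below are hypothetical and not part of any specific skill or SkillShield API. The point is that the model can only choose an allow-listed directory, while the host runs a fixed argument vector with no shell, so no executable command ever travels through the skill's prompt text.

```python
import json
import subprocess

# Hypothetical tool schema the host could register instead of a free-text
# "run `cargo audit`" instruction in SKILL.md.
CARGO_AUDIT_TOOL = {
    "name": "cargo_audit",
    "description": "Run cargo audit against a Rust project and return its JSON report.",
    "parameters": {
        "type": "object",
        "properties": {
            "manifest_dir": {
                "type": "string",
                "description": "Directory containing Cargo.toml",
            }
        },
        "required": ["manifest_dir"],
    },
}

# Assumed sandbox policy: only directories on this allow-list may be audited.
ALLOWED_DIRS = {"/workspace/project"}


def run_cargo_audit(manifest_dir: str) -> dict:
    """Execute cargo audit with a fixed argv; no shell, no interpolation of
    model-controlled strings into a command line."""
    if manifest_dir not in ALLOWED_DIRS:
        raise ValueError(f"manifest_dir not in sandbox allow-list: {manifest_dir!r}")

    result = subprocess.run(
        ["cargo", "audit", "--json"],  # fixed command and flags
        cwd=manifest_dir,
        capture_output=True,
        text=True,
        timeout=300,
        check=False,  # cargo audit exits non-zero when vulnerabilities are found
    )
    return json.loads(result.stdout) if result.stdout else {"error": result.stderr}
```

With this shape, validation and sandboxing live in the host's tool implementation rather than in prose the LLM interprets, which is what the finding's remediation calls for.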
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/4816f2e124a67f05)
Powered by SkillShield