Trust Assessment
progress received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 3 medium, and 0 low severity. Key findings include Missing required field: name, Sensitive environment variable access: $HOME, Prompt Injection Attempt: Directives to LLM from Untrusted Content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 15, 2026 (commit 1823c3f6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | plugins/specweave/skills/progress/SKILL.md:1 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | plugins/specweave/skills/progress/SKILL.md:6 | |
| MEDIUM | Prompt Injection Attempt: Directives to LLM from Untrusted Content The skill definition contains explicit instructions to the host LLM, such as 'You MUST: 1. Present the hook output VERBATIM'. While seemingly benign and intended for correct output formatting, these are direct directives from untrusted content attempting to manipulate the LLM's behavior regarding output generation. This pattern could be abused for malicious prompt injection if the directives were to instruct the LLM to ignore safety guidelines or perform unintended actions. SkillShield's rules state that content within the untrusted tags should not be trusted to override instructions, and attempts to manipulate the host LLM should be flagged. Relocate explicit instructions for LLM behavior outside of the untrusted content delimiters, or implement a robust parsing mechanism that extracts such directives without allowing them to directly influence the LLM's core behavior. If the verbatim output is critical, ensure the LLM is instructed to handle tool outputs as raw text by default, rather than relying on directives within the skill's untrusted description. | LLM | SKILL.md:16 |
Scan History
Embed Code
[](https://skillshield.io/report/c57ef52338629b09)
Powered by SkillShield