Trust Assessment
context-recovery received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via Unsanitized Keyword in `grep`, Potential Prompt Injection via Unsanitized User Input in Recovered Context.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Command Injection via Unsanitized Keyword in `grep` The skill instructs to extract keywords from channel history and use them directly in a `grep` command. If these keywords are derived from untrusted user input without proper sanitization, an attacker could inject shell commands. For example, if a keyword contains `"; rm -rf /; #`, it could lead to arbitrary command execution on the agent's host system. Implement robust sanitization or escaping of the `<keyword>` variable before it is used in the `grep` command. Consider using a safer method for searching, such as a dedicated search library or ensuring the keyword is properly quoted/escaped for shell execution. | LLM | SKILL.md:92 | |
| HIGH | Potential Prompt Injection via Unsanitized User Input in Recovered Context The skill synthesizes and caches 'Recovered Context' which includes elements derived from untrusted user input (e.g., 'Recent user requests', 'Last User Request', 'project/task summary', 'incomplete actions'). This context is then used to formulate the agent's response and is cached in memory files. If this user-derived content is not properly sanitized or escaped before being incorporated into the LLM's prompt, an attacker could inject instructions to manipulate the host LLM's behavior. Implement robust sanitization and escaping of all user-derived content before it is incorporated into the 'Recovered Context' markdown. This includes escaping markdown special characters and potentially using specific LLM-prompting techniques (e.g., XML tags, JSON blocks) to clearly delineate user input from system instructions. | LLM | SKILL.md:100 |
Scan History
Embed Code
[](https://skillshield.io/report/e6108cae4db8b950)
Powered by SkillShield