Trust Assessment
internal-comms received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is a high-severity Potential Path Traversal in File Loading Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Path Traversal in File Loading Instruction | LLM | SKILL.md:15 |

The skill instructs the LLM to "Load the appropriate guideline file from the `examples/` directory". If the underlying file reading mechanism or tool used by the LLM does not implement strict path validation and sandboxing, a malicious user could craft a prompt that manipulates the file path (e.g., using `../` sequences) to access and potentially exfiltrate sensitive files outside the intended `examples/` directory, such as `/etc/passwd`, configuration files, or other skill assets. This represents a data exfiltration risk.

Recommendation: Implement robust input validation and sandboxing for any file reading operations performed by the LLM. Ensure that file paths derived from user input are strictly confined to the intended `examples/` directory; sanitize or reject any directory traversal sequences (e.g., `../`) in the requested file path. The skill's instructions should explicitly state that only files within `examples/` are accessible and that any attempt to access files outside this scope must be rejected.
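The recommended mitigation can be sketched as follows. This is a minimal illustration, not the skill's actual loader: the function name `load_guideline` and the `examples/` location are assumptions taken from the finding text. The key idea is to resolve the candidate path first (collapsing `..` segments and symlinks) and only then check that it still lies inside the allowed directory:

```python
from pathlib import Path

EXAMPLES_DIR = Path("examples").resolve()

def load_guideline(requested: str) -> str:
    """Read a guideline file, refusing any path that escapes examples/.

    `requested` is untrusted input. Joining then resolving collapses
    traversal sequences like `../`, and absolute paths simply replace
    the base, so the containment check below catches both cases.
    """
    candidate = (EXAMPLES_DIR / requested).resolve()
    if EXAMPLES_DIR not in candidate.parents:
        raise PermissionError(f"path escapes examples/: {requested!r}")
    return candidate.read_text()
```

Checking containment after `resolve()` is the important ordering: filtering the raw string for `../` alone can be bypassed (e.g., via encoded or redundant separators), whereas a resolved path either is or is not under the sandbox directory.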