Trust Assessment
feature-forge received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is Potential Path Traversal in Output File Naming.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 3d5e297b). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Path Traversal in Output File Naming | LLM | SKILL.md:62 |

The skill instructs the LLM to save output to `specs/{feature_name}.spec.md`. If `{feature_name}` is derived directly from untrusted user input without sanitization, an attacker could inject path traversal sequences (e.g., `../`, `../../`) into the feature name. This would let the LLM write files outside the intended `specs/` directory, potentially overwriting critical system files, writing malicious content to other directories, or exfiltrating data to an attacker-controlled location. Remediation: strictly sanitize and validate `feature_name` to reject path traversal characters (`/`, `\`, `..`); canonicalize the final file path and check it against an allowed base directory before any write; and have the underlying tool or environment enforce sandboxing so file operations cannot escape designated directories.
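The sanitize-then-canonicalize remediation can be sketched as follows. This is a minimal illustration, not code from the skill itself; the helper name `safe_spec_path`, the allowed character set, and the `specs/` base directory are assumptions for the example.

```python
import os
import re

# Hypothetical allowed output directory, resolved to an absolute canonical path.
BASE_DIR = os.path.realpath("specs")

def safe_spec_path(feature_name: str) -> str:
    """Sanitize a feature name and return a spec path confined to BASE_DIR."""
    # Replace anything outside a conservative charset, which removes
    # path separators and the characters that form ".." sequences.
    cleaned = re.sub(r"[^A-Za-z0-9_-]", "_", feature_name)
    if not cleaned:
        raise ValueError("feature name is empty after sanitization")
    # Canonicalize, then verify the result still lies under the base directory.
    path = os.path.realpath(os.path.join(BASE_DIR, f"{cleaned}.spec.md"))
    if os.path.commonpath([BASE_DIR, path]) != BASE_DIR:
        raise ValueError("resolved path escapes the specs/ directory")
    return path
```

The `commonpath` check is a belt-and-suspenders guard: even if the sanitization regex were loosened later, a canonicalized path that resolves outside `specs/` would still be rejected before any file write.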
[Full report](https://skillshield.io/report/9a22807fb4ed8fad)
Powered by SkillShield