Trust Assessment
openspec received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include an unpinned dependency in the global installation and potential command injection through unsanitized CLI arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned dependency in global installation.** The skill instructs the agent to install the `@fission-ai/openspec` package globally using `npm install -g @fission-ai/openspec@latest`. Using `@latest` means the dependency version is not pinned, which introduces a supply chain risk: a malicious update to the package could be installed automatically, leading to unexpected behavior or the introduction of compromised code. Pin the dependency to a specific, known-good version (e.g., `npm install -g @fission-ai/openspec@1.2.3`) to ensure consistent and secure installations, and regularly review and update the pinned version. | LLM | SKILL.md:10 |
| MEDIUM | **Potential command injection through unsanitized CLI arguments.** The skill instructs the AI agent to construct and execute `openspec` commands, such as `openspec new change <name>`, where `<name>` is expected to be derived from untrusted user input. If the `openspec` CLI tool does not properly sanitize or escape this input before using it in an internal shell command (e.g., creating a directory or executing a subprocess), an attacker could inject shell metacharacters (e.g., `;`, `\|`, `&`, `$(...)`) to execute arbitrary commands on the host system. Implement strict validation and sanitization for all user-provided input before passing it as arguments to `openspec` commands. The `openspec` tool itself should also employ robust input sanitization and avoid unsafe shell execution patterns (e.g., `shell=True` in Python's `subprocess` module) when processing untrusted input. | LLM | SKILL.md:24 |
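For the HIGH finding, remediation is a one-line change in the skill's install instruction. The version below is a placeholder echoing the report's own example (`1.2.3`), not a verified release of `@fission-ai/openspec`; check the npm registry for an actual known-good version before pinning.

```shell
# Pin to an exact, reviewed version instead of the floating @latest tag.
# 1.2.3 is a placeholder version, not a verified release.
npm install -g @fission-ai/openspec@1.2.3

# Confirm what was actually installed.
npm ls -g @fission-ai/openspec
```

Pinning trades automatic updates for reproducibility: a compromised release published upstream can no longer reach the agent's environment until the pin is deliberately bumped after review.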
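For the MEDIUM finding, the mitigation the report recommends can be sketched on the caller's side. This is a minimal illustration, not code from the skill or the `openspec` CLI; `safe_change_name` and `build_command` are hypothetical helper names, and the allow-list pattern is one reasonable choice of validation policy.

```python
import re

def safe_change_name(name: str) -> str:
    """Allow only letters, digits, hyphens, and underscores (1-64 chars)."""
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", name):
        raise ValueError(f"invalid change name: {name!r}")
    return name

def build_command(name: str) -> list[str]:
    # An argument list (rather than a shell string) keeps metacharacters
    # inert even if validation is bypassed; pass this to subprocess.run()
    # WITHOUT shell=True.
    return ["openspec", "new", "change", safe_change_name(name)]

print(build_command("add-auth"))
# ['openspec', 'new', 'change', 'add-auth']

try:
    build_command("x; rm -rf /")
except ValueError as exc:
    print(exc)
```

The two layers are complementary: the allow-list rejects obviously hostile names early, while the list-form invocation guarantees that whatever string survives validation is delivered to the CLI as a single argument, never interpreted by a shell.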
[View the full report](https://skillshield.io/report/28f259bc3344973f)
Powered by SkillShield