Security Audit
jnMetaCode/superpowers-zh:skills/executing-plans
github.com/jnMetaCode/superpowers-zh
Trust Assessment
jnMetaCode/superpowers-zh:skills/executing-plans received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 2 critical, 4 high, 0 medium, and 0 low severity. Key findings include "Execution of Untrusted 'Plan File' Leads to Prompt Injection," "Execution of Untrusted 'Plan File' Leads to Command Injection," and "Untrusted Plan File Can Lead to Data Exfiltration."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on March 25, 2026 (commit 03baa780). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Execution of Untrusted 'Plan File' Leads to Prompt Injection.** The `executing-plans` skill is explicitly designed to "read the plan file" and "strictly follow the plan steps." The plan file is untrusted user input, so an attacker can inject arbitrary instructions into the agent's execution flow by crafting a malicious plan file, effectively overriding the agent's primary directives. *Remediation:* Do not blindly execute instructions from an untrusted plan file. Strictly parse and validate the plan's content, allowing only a predefined set of safe operations; sanitize or disallow any instructions that the LLM would execute or interpret directly. Consider a structured, declarative plan format that limits the types of actions the agent can take. | LLM | SKILL.md:46 |
| CRITICAL | **Execution of Untrusted 'Plan File' Leads to Command Injection.** The skill instructs the agent to execute steps from an untrusted plan file, and the examples in its description show shell commands (e.g., `$ npm test`, `$ git add`, `$ git commit`) as part of the expected execution flow. An attacker can craft a malicious plan containing arbitrary shell commands, which the agent is instructed to execute, leading to command injection and potential system compromise. *Remediation:* Do not directly execute shell commands parsed from an untrusted plan file. Define a limited set of allowed actions or tools the agent can invoke, and strictly validate and sanitize all parameters passed to them. If shell execution is absolutely necessary, use a sandboxed environment or a highly restricted execution mechanism that prevents arbitrary command injection. | LLM | SKILL.md:62 |
| HIGH | **Untrusted Plan File Can Lead to Data Exfiltration.** The skill instructs the agent to read the plan file and execute its steps. Combined with the prompt and command injection vulnerabilities, an attacker can craft a plan that reads arbitrary files from the filesystem (e.g., `/etc/passwd`, sensitive configuration files) and exfiltrates their contents, for example via `git commit` messages, writes to a publicly accessible location, or network requests if the agent's environment allows them. *Remediation:* Restrict file reads to a whitelist of allowed paths or directories. Never incorporate file contents into outputs or commands without strict sanitization, and prevent the agent from writing or committing sensitive data to version control or external services. | LLM | SKILL.md:29 |
| HIGH | **Untrusted Plan File Can Lead to Credential Harvesting.** The skill's design allows prompt and command injection via the untrusted plan file, and it cites "API Key" as an example of an "implied environment assumption" to check. An attacker can leverage the injection capabilities to craft a plan that reads environment variables or credential files, then exfiltrates the harvested credentials (e.g., by committing them to a repository or sending them over the network). *Remediation:* Enforce strict access controls that block the agent from reading sensitive environment variables or credential files, and run the agent with least privilege. If credentials must be used, manage them securely (e.g., via a secrets manager) and never expose them to the agent's execution context in a way that allows exfiltration. | LLM | SKILL.md:38 |
| HIGH | **Skill Requires Broad Permissions, Amplifying Untrusted-Input Risks.** The `executing-plans` skill inherently requires broad permissions: filesystem access (reading plan files, modifying code, `git` operations) and shell execution (`npm test`, `git` commands). The skill itself defines no permissions, so if the agent operates with excessive privileges, the critical prompt and command injection vulnerabilities above allow significantly wider system compromise. *Remediation:* Run the agent in a tightly sandboxed environment with the minimum necessary permissions, strict resource limits, and network egress filtering. Never run the agent with root or administrative privileges, and regularly audit the permissions granted to the agent and its execution environment. | LLM | SKILL.md:130 |
| HIGH | **Untrusted Plan File Can Introduce Supply-Chain Risks.** The plan file can instruct the agent to install software packages (e.g., `npm install`, `pip install`); the skill's `$ npm test` example implies package management is part of the expected workflow. If an attacker injects commands that install malicious or typosquatted packages, the development environment or the resulting software may be compromised. *Remediation:* Prevent the agent from installing arbitrary packages from untrusted sources. If installation is required, enforce trusted registries, pinned versions, and integrity checks (e.g., hash verification), and require explicit human review and approval before any new package is installed. | LLM | SKILL.md:62 |
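The plan-validation remediation from the first two findings can be sketched as follows. This is a minimal illustration, assuming a JSON plan format; the names `ALLOWED_ACTIONS` and `validate_plan` are hypothetical and not part of the skill:

```python
import json

# Hypothetical allowlist of declarative actions the agent may perform.
# Free-form prose or shell strings in a plan are rejected outright.
ALLOWED_ACTIONS = {"edit_file", "run_tests", "git_commit"}

def validate_plan(plan_text: str) -> list[dict]:
    """Parse an untrusted plan as structured JSON and reject any step
    whose action is not on the allowlist, instead of letting the LLM
    interpret arbitrary instructions."""
    steps = json.loads(plan_text)
    if not isinstance(steps, list):
        raise ValueError("plan must be a JSON list of steps")
    for i, step in enumerate(steps):
        if not isinstance(step, dict) or step.get("action") not in ALLOWED_ACTIONS:
            raise ValueError(f"step {i}: disallowed or malformed action")
    return steps
```

A structured format like this confines the agent to a closed set of operations, so a malicious plan cannot smuggle in new directives.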
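The path-whitelist and credential-protection remediations (data-exfiltration and credential-harvesting findings) could look like the following sketch. `ALLOWED_ROOT`, `SENSITIVE_PREFIXES`, and both function names are assumptions for illustration:

```python
import os
from pathlib import Path

# Hypothetical project root the agent is confined to.
ALLOWED_ROOT = Path("/workspace/project").resolve()
# Substrings that mark credential-like environment variables.
SENSITIVE_PREFIXES = ("API_", "AWS_", "TOKEN", "SECRET")

def safe_read(path: str) -> str:
    """Read a file only if it resolves inside the allowed root,
    blocking reads of /etc/passwd and similar."""
    resolved = Path(path).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"read outside allowed root: {resolved}")
    return resolved.read_text()

def scrubbed_env() -> dict[str, str]:
    """Return a copy of the environment with credential-like
    variables removed, for passing to any subprocess."""
    return {k: v for k, v in os.environ.items()
            if not any(p in k.upper() for p in SENSITIVE_PREFIXES)}
```

Passing `scrubbed_env()` as the `env` argument of any subprocess call keeps harvested plans from seeing API keys even if an injection succeeds.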
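The supply-chain remediation (pinned versions checked against an approved list) might be enforced with a gate like this sketch; `APPROVED_PACKAGES` and `check_install` are illustrative names, and the pattern handles only simple unscoped `name@x.y.z` specs:

```python
import re

# Hypothetical approved lockfile-style allowlist: package -> exact version.
APPROVED_PACKAGES = {"left-pad": "1.3.0", "lodash": "4.17.21"}

# Require an exact name@x.y.z pin; anything else is rejected.
SPEC = re.compile(r"^([A-Za-z0-9_.-]+)@(\d+\.\d+\.\d+)$")

def check_install(specs: list[str]) -> None:
    """Reject any install request that is unpinned or not on the
    approved list, before the agent may run a package manager."""
    for spec in specs:
        m = SPEC.match(spec)
        if not m:
            raise ValueError(f"unpinned or malformed spec: {spec!r}")
        name, version = m.groups()
        if APPROVED_PACKAGES.get(name) != version:
            raise ValueError(f"package not approved: {spec!r}")
```

Combined with registry restrictions and hash verification in the package manager itself, this keeps a malicious plan from pulling in typosquatted dependencies.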