Security Audit
Automattic/agent-skills:skills/wp-playground
github.com/Automattic/agent-skills

Trust Assessment
Automattic/agent-skills:skills/wp-playground received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include: unsanitized CLI arguments allowing command injection, mounting of arbitrary local paths risking data exfiltration, and an unpinned `@wp-playground/cli@latest` dependency introducing supply chain risk.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 41/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit 48d4aa21). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsanitized CLI arguments allow command injection.** The skill instructs the agent to execute `npx @wp-playground/cli` with arguments derived from user input (e.g., `--wp=<version>`, `--php=<version>`, `--port=<free-port>`). If the agent does not sanitize these values before passing them to the shell, a malicious user could terminate an argument and append arbitrary commands (e.g., `--wp=6.0 --php=8.0; rm -rf /`), a direct command injection vulnerability. The agent must strictly sanitize all user-provided values passed to `npx` commands, either by escaping shell metacharacters or by using an execution API that handles argument separation safely, preventing arbitrary command execution. | LLM | SKILL.md:39 |
| HIGH | **Skill allows mounting arbitrary local paths, risking data exfiltration.** The skill explicitly supports mounting arbitrary host paths into the ephemeral Playground instance via `--auto-mount` or `--mount=/host/path:/vfs/path`. Its own guardrails warn: "If mounting local code, ensure it is clean of secrets; Playground copies files into an in-memory FS." If the agent is instructed to mount a directory containing sensitive files (e.g., `~/.ssh`, API keys, configuration files), those files become readable by any code running inside the Playground, such as a malicious plugin or blueprint, which could then exfiltrate them by including them in a snapshot or making network requests. The agent should enforce a strict mount policy: ideally, only explicitly whitelisted, non-sensitive directories should be allowed. If user-provided paths are necessary, validate that they do not point to sensitive system directories or user home directories. | LLM | SKILL.md:30 |
| MEDIUM | **Unpinned `@wp-playground/cli@latest` dependency introduces supply chain risk.** The skill consistently instructs the agent to run `npx @wp-playground/cli@latest`, so the executed version is never pinned. If a malicious actor compromised the npm registry or the package maintainer's account and published a compromised version as `latest`, the agent would automatically download and execute it, potentially leading to arbitrary code execution on the host machine. The skill should pin an exact version (e.g., `npx @wp-playground/cli@1.2.3`) to ensure deterministic and secure execution, and the agent should be configured to warn on or refuse commands with unpinned dependencies. | LLM | SKILL.md:39 |
| MEDIUM | **Blueprints can read adjacent files, risking data exfiltration.** The skill mentions the `--blueprint-may-read-adjacent-files` flag, which explicitly allows a blueprint to read files in its own directory. If a malicious, user-supplied blueprint is placed in a sensitive directory (e.g., alongside configuration files or other secrets), this flag enables it to read those files: a targeted exfiltration vector, especially when combined with blueprints loaded from local files or URLs. The agent should carefully weigh the necessity of this flag when executing user-provided blueprints; if it is used, place the blueprint in an isolated, non-sensitive directory, or vet its content for malicious file-access patterns. | LLM | SKILL.md:55 |
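For the command-injection finding, one mitigation pattern is to allowlist-validate every user-supplied value and pass arguments as a list rather than a shell string. The following is a minimal sketch, not part of the skill; the helper name and the exact version pattern are assumptions for illustration:

```python
import re

# Hypothetical validator: accept only plain semver-style values for --wp/--php
# and a sane port range, then build a list-form argv. With list-form argv
# (e.g. subprocess.run without shell=True), shell metacharacters are inert,
# but we still reject suspicious values outright.
VERSION_RE = re.compile(r"^\d+\.\d+(\.\d+)?$")  # e.g. "6.5" or "8.2.1"

def build_playground_cmd(wp_version: str, php_version: str, port: int) -> list[str]:
    for label, value in (("--wp", wp_version), ("--php", php_version)):
        if not VERSION_RE.match(value):
            raise ValueError(f"rejected unsafe value for {label}: {value!r}")
    if not (1024 <= port <= 65535):
        raise ValueError(f"rejected port: {port}")
    return ["npx", "@wp-playground/cli@latest",
            f"--wp={wp_version}", f"--php={php_version}", f"--port={port}"]
```

An input like `"6.0; rm -rf /"` fails the regex and raises before any command is built.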
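For the mount-path finding, a whitelist policy can be sketched as follows. The allowed root and denied locations here are illustrative assumptions; a real deployment would configure its own:

```python
from pathlib import Path

# Hypothetical mount policy: resolve the requested host path, refuse anything
# under known-sensitive locations, and otherwise allow only paths inside an
# explicitly whitelisted project root.
ALLOWED_ROOT = Path("/home/dev/projects")  # assumption: per-deployment setting
DENIED = [Path.home() / ".ssh", Path.home() / ".aws", Path("/etc")]

def is_mount_allowed(requested: str) -> bool:
    p = Path(requested).resolve()
    if any(p == d or d in p.parents for d in DENIED):
        return False
    return p == ALLOWED_ROOT or ALLOWED_ROOT in p.parents
```

Resolving the path first matters: it normalizes `..` segments, so `/home/dev/projects/../../etc` is judged by where it actually points.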
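For the unpinned-dependency finding, an agent-side guard can simply refuse to execute package specs that use a floating tag. A minimal sketch, with the pattern assumed for this one package:

```python
import re

# Hypothetical pre-execution check: require an exact x.y.z pin on the
# @wp-playground/cli spec and reject floating tags such as @latest.
PINNED_RE = re.compile(r"^@wp-playground/cli@\d+\.\d+\.\d+$")

def is_pinned(spec: str) -> bool:
    return bool(PINNED_RE.match(spec))
```

A stricter variant would also record the package's integrity hash (as a lockfile does) rather than trusting the version tag alone.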
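For the adjacent-files finding, the report's suggested mitigation (run the blueprint from an isolated directory) can be sketched like this; the helper is illustrative, not part of the skill:

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical isolation step: copy a user-supplied blueprint into a fresh,
# empty temporary directory before execution, so that even with
# --blueprint-may-read-adjacent-files there is nothing sensitive beside it.
def isolate_blueprint(blueprint_path: str) -> Path:
    workdir = Path(tempfile.mkdtemp(prefix="bp-"))
    dest = workdir / Path(blueprint_path).name
    shutil.copyfile(blueprint_path, dest)
    return dest
```

The blueprint can still read "adjacent" files, but the only adjacent file is itself.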
Full report: https://skillshield.io/report/aac4c8a525133ee4