Trust Assessment
remove-analytics received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical and 1 high severity (0 medium, 0 low). Key findings include "Direct shell command execution via `npm uninstall`" and "Potential exposure of environment variable names/values".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct shell command execution via `npm uninstall`.** The skill explicitly instructs the LLM to execute shell commands, specifically `npm uninstall`. If the package name for uninstallation is derived from untrusted sources (e.g., a `package.json` file that could be manipulated by an attacker), this could lead to arbitrary command injection and execution on the host system. *Mitigation:* Avoid direct execution of shell commands based on potentially untrusted input. If shell execution is unavoidable, strictly sanitize and validate all arguments, or use safer APIs that do not involve direct shell invocation. Consider using a sandboxed environment for such operations. | LLM | SKILL.md:30 |
| HIGH | **Potential exposure of environment variable names/values.** The skill instructs the LLM to "Search for" specific environment variables in files like `.env.example` and then to "Provide a summary of: ... Environment variables removed". While `.env.example` typically contains example values, it can still contain sensitive variable names or even placeholder secrets. If the LLM includes the names or values of these variables in its summary or internal processing logs, it could lead to unintended data exposure or exfiltration. *Mitigation:* Instruct the LLM to only report the *count* of variables removed, or to redact specific variable names/values from any output or summary. Ensure that the LLM's environment prevents it from accessing actual `.env` files, only `.env.example`. Clarify that only *example* variables should be processed. | LLM | SKILL.md:22 |
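The mitigation for the critical finding, validating the package name and passing it through an argument-vector API rather than a shell, can be sketched as follows. This is a minimal illustration, not the skill's actual code; the `isSafePackageName` helper and its name-validation regex are assumptions based on npm's published naming rules:

```typescript
import { execFile } from "node:child_process";

// Rough approximation of npm's package-name rules (lowercase,
// URL-safe characters, optional @scope/ prefix, max 214 chars).
const NPM_NAME = /^(@[a-z0-9-~][a-z0-9-._~]*\/)?[a-z0-9-~][a-z0-9-._~]*$/;

function isSafePackageName(name: string): boolean {
  return name.length > 0 && name.length <= 214 && NPM_NAME.test(name);
}

function uninstall(pkg: string): void {
  if (!isSafePackageName(pkg)) {
    throw new Error(`refusing to uninstall suspicious package name: ${pkg}`);
  }
  // execFile passes each argument directly to the process, with no shell
  // in between, so metacharacters like ";" or "$(...)" are never evaluated.
  execFile("npm", ["uninstall", pkg], (err) => {
    if (err) console.error(err);
  });
}
```

Because shell metacharacters fail the allowlist check, a manipulated `package.json` entry such as `lodash; rm -rf /` is rejected before any process is spawned.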
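The count-only summary recommended for the high-severity finding might look like the sketch below; the `countEnvVars` and `summarizeRemoval` names are hypothetical, and the line-matching regex assumes conventional `NAME=value` entries in a `.env.example` file:

```typescript
// Count assignment lines in a .env.example file without retaining
// the variable names or their placeholder values.
function countEnvVars(envExample: string): number {
  return envExample
    .split("\n")
    .filter((line) => /^\s*[A-Za-z_][A-Za-z0-9_]*\s*=/.test(line))
    .length;
}

// Report only the count, so no names or values reach the summary.
function summarizeRemoval(envExample: string): string {
  return `Environment variables removed: ${countEnvVars(envExample)}`;
}
```

A summary produced this way carries no sensitive identifiers even if the example file contains placeholder secrets.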