Trust Assessment
anysite-mcp-migration received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings include an Arbitrary File Read via User-Provided Path and Information Leakage via `discover()` with User-Inferred Parameters.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit 5cefedb0). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary File Read via User-Provided Path | LLM | SKILL.md:40 |
| MEDIUM | Information Leakage via `discover()` with User-Inferred Parameters | LLM | SKILL.md:180 |

CRITICAL: Arbitrary File Read via User-Provided Path (LLM layer, SKILL.md:40)
The skill explicitly instructs the host LLM to read local files based on a path supplied directly by the user. A malicious user can therefore point the skill at sensitive system files (e.g., `/etc/passwd`, `/app/secrets.txt`, configuration files, or other skill files) and exfiltrate their contents. The skill specifies no validation or sanitization of the user-provided path, a severe data-exfiltration and excessive-permissions vulnerability. Remediation:
1. Restrict file reading to a predefined, safe directory (e.g., a temporary sandbox, or a directory explicitly designated for skill files).
2. Implement strict path validation and sanitization to block directory traversal (e.g., `../`).
3. If reading arbitrary skill files is necessary, run the agent in a sandboxed environment with minimal permissions and process the content securely.
4. Prefer a dedicated file-access tool that enforces security policies over the LLM's general file-reading capabilities.

MEDIUM: Information Leakage via `discover()` with User-Inferred Parameters (LLM layer, SKILL.md:180)
The skill instructs the host LLM to call the `discover()` tool with `source` and `category` parameters inferred directly from untrusted user input (old tool names). While `discover()` is intended to list available endpoints and schemas, accepting arbitrary, user-inferred values could leak information about internal API structures or undocumented services, or trigger unexpected behavior if `discover()` is not hardened against arbitrary input. This effectively grants the user excessive permission to probe the underlying API. Remediation:
1. Validate `source` and `category` against an explicit whitelist of known, safe values before calling `discover()`.
2. If inference is necessary, ensure the inference logic cannot be manipulated into producing arbitrary or malicious values.
3. Harden `discover()` itself so that it reveals only public information and has no side effects.
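The path-restriction and traversal-check remediations for the critical finding can be sketched as follows. This is a minimal illustration, not code from the skill: the sandbox directory `/app/skills` and the helper name `resolve_skill_path` are assumptions chosen for the example.

```python
from pathlib import Path

# Hypothetical sandbox root; any real deployment would choose its own.
ALLOWED_ROOT = Path("/app/skills").resolve()

def resolve_skill_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside ALLOWED_ROOT."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    # resolve() collapses ".." segments, so traversal attempts such as
    # "../../etc/passwd" end up outside the sandbox and are rejected here.
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"Path escapes sandbox: {user_path!r}")
    return candidate
```

Because `pathlib` treats joining with an absolute path as a replacement, an input like `/etc/passwd` also resolves outside `ALLOWED_ROOT` and is refused, covering both traversal and absolute-path probes with one check.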
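The whitelist remediation for the medium finding amounts to a gate placed between the LLM's inference and the `discover()` call. A minimal sketch, assuming hypothetical allowlist values — the real sets would come from the MCP server's documented sources and categories:

```python
# Hypothetical allowlists for illustration only; populate from the
# MCP server's documented endpoint catalog in a real deployment.
ALLOWED_SOURCES = {"twitter", "linkedin", "instagram"}
ALLOWED_CATEGORIES = {"users", "posts", "search"}

def validate_discover_args(source: str, category: str) -> tuple[str, str]:
    """Normalize inferred values and reject anything not on the allowlist."""
    src = source.strip().lower()
    cat = category.strip().lower()
    if src not in ALLOWED_SOURCES:
        raise ValueError(f"Unknown source: {source!r}")
    if cat not in ALLOWED_CATEGORIES:
        raise ValueError(f"Unknown category: {category!r}")
    return src, cat
```

Only values that pass this gate would ever reach `discover()`, so a manipulated "old tool name" cannot steer the probe toward internal or undocumented services.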
Full report: https://skillshield.io/report/039c0a14f4a627a6