Trust Assessment
mrc-monitor received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 2 critical, 1 high, 4 medium, and 0 low severity. Key findings include arbitrary command execution, a potential hardcoded secret (high entropy), and a dangerous call to `subprocess.run()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/wysh3/mrc-monitor/scripts/monitor.py:89 |
| CRITICAL | **Untrusted content contains instructions for host LLM.** The `SKILL.md` file, which is explicitly marked as untrusted input, contains direct instructions for the host LLM on how to process user input, execute background scripts, and formulate specific responses. For example, it dictates 'Extract all token numbers from the message', 'Start the background monitor script', and provides a verbatim response template. This constitutes a prompt injection attempt: the LLM is being manipulated by untrusted data into performing specific operational actions rather than interpreting the content as descriptive information about the skill. The host LLM should treat all content within the untrusted input delimiters as descriptive text about the skill's functionality, not as direct instructions. All operational logic (parsing user input, executing subprocesses, generating responses) must be implemented in trusted code outside the scope of untrusted skill descriptions, and the LLM should not follow instructions or response templates found in untrusted skill documentation. | LLM | SKILL.md:1 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call to `subprocess.run()` detected in function `send_notification`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system` and use safer alternatives. | Static | skills/wysh3/mrc-monitor/scripts/monitor.py:89 |
| MEDIUM | **Potential hardcoded secret (high entropy).** A high-entropy string (entropy = 4.89) was found in a credential-like context. Verify this is not a hardcoded secret; use environment variables for sensitive values. | Static | skills/wysh3/mrc-monitor/scripts/monitor.py:16 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. Verify this import is necessary: network and system modules in skill code may indicate data exfiltration. | Static | skills/wysh3/mrc-monitor/scripts/monitor.py:101 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. Verify this import is necessary: network and system modules in skill code may indicate data exfiltration. | Static | skills/wysh3/mrc-monitor/scripts/monitor.py:127 |
| MEDIUM | **Hardcoded Firebase API key.** A Firebase API key (`FIREBASE_API_KEY`) is hardcoded directly into `scripts/monitor.py`. Exposing API keys in source code is a security risk, as it can lead to unauthorized access or misuse if the code is compromised or publicly exposed. Even if this is a client-side key, best practice is to manage all secrets via environment variables or a dedicated secret management system, which prevents accidental exposure and facilitates rotation. Remove the hardcoded key from the source, store it in a secure environment variable, a secret management service, or a configuration file that is not committed to version control, and retrieve it at runtime. | LLM | scripts/monitor.py:15 |
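The remediation guidance for the two command-execution findings (static argument lists, absolute binary paths, no shell interpretation) can be sketched as follows. This is a minimal illustrative pattern, not the actual `send_notification` code from `monitor.py`; the helper name and the `/bin/echo` example are assumptions.

```python
import subprocess

def run_static_command(argv: list[str]) -> str:
    """Run a fixed command safely: an argument list (never a shell string),
    an absolute binary path, a timeout, and a checked exit status."""
    result = subprocess.run(
        argv,                  # static argv list, never built from user input
        capture_output=True,
        text=True,
        check=True,            # raise CalledProcessError on non-zero exit
        timeout=10,
    )
    return result.stdout

# Example: the equivalent of `echo ok` without ever invoking a shell.
print(run_static_command(["/bin/echo", "ok"]).strip())
```

Because no `shell=True` is involved, metacharacters in arguments are passed literally rather than interpreted, which closes the injection vector the CRITICAL finding describes.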
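For the hardcoded-secret findings, a minimal sketch of reading the key from the environment at runtime instead of embedding it in source. The variable name `FIREBASE_API_KEY` comes from the finding; the helper function itself is hypothetical.

```python
import os

def firebase_api_key() -> str:
    """Fetch the Firebase API key from the environment.

    Failing fast when the variable is unset is preferable to shipping a
    placeholder or falling back to a value baked into the source tree.
    """
    key = os.environ.get("FIREBASE_API_KEY")
    if not key:
        raise RuntimeError("FIREBASE_API_KEY is not set")
    return key
```

Keeping the key out of version control also makes rotation a deployment-time change rather than a code change, as the finding's remediation note suggests.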
[Full report](https://skillshield.io/report/294bd85af07cb269)