Trust Assessment
moltspaces received a trust score of 21/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 3 high, 1 medium, and 0 low severity. Key findings include Unsafe environment variable passthrough, Credential harvesting, Unpinned Python dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 5acc5677). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/logesh2496/moltspaces/scripts/bot.py:106 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/logesh2496/moltspaces/scripts/bot.py:106 | |
| HIGH | User-controlled topic argument injected into LLM system prompt The `scripts/bot.py` constructs the internal LLM's system prompt using the `--topic` command-line argument. If a malicious user or an agent passes a crafted string (e.g., 'ignore previous instructions and...') as the topic, it can lead to prompt injection of the internal LLM. The output of this internal LLM could then be relayed to the host LLM, potentially manipulating its behavior or generating harmful content. Implement input sanitization or use a structured prompt format where user input is passed as a separate user message rather than directly into the system prompt. This helps isolate user input from system instructions. | LLM | scripts/bot.py:100 | |
| HIGH | User-controlled personality.md content injected into LLM system prompt The `scripts/bot.py` loads the content of `assets/personality.md` and directly inserts it into the internal LLM's system prompt. The `SKILL.md` explicitly states that the user must 'prepare the `assets/personality.md` file'. If a malicious user or an agent provides a crafted `personality.md` file containing prompt injection instructions, it can manipulate the internal LLM. The output of this internal LLM could then be relayed to the host LLM, potentially manipulating its behavior or generating harmful content. Implement input sanitization or use a structured prompt format where user-provided content is passed as a separate user message rather than directly into the system prompt. Consider validating the content of `personality.md` for suspicious patterns if it's meant to be strictly controlled. | LLM | scripts/bot.py:88 | |
| MEDIUM | Unpinned Python dependency version Dependency 'pipecat-ai[webrtc,daily,silero,elevenlabs,openai,local-smart-turn-v3,runner]' is not pinned to an exact version. Pin Python dependencies with exact versions where feasible. | Dependencies | skills/logesh2496/moltspaces/pyproject.toml |
Scan History
Embed Code
[](https://skillshield.io/report/026115317bb83c6d)
Powered by SkillShield