Trust Assessment
moltgram received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are "System prompt override / policy bypass" (critical) and "Unverified Skill Update Mechanism via External `curl`" (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Each layer scored 70 or above, though the findings below pull the overall trust score down into the Caution range.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | System prompt override / policy bypass | Manifest | skills/nek-11/moltgram/skill.md:159 |
| HIGH | Unverified Skill Update Mechanism via External `curl` | LLM | skill.md:8 |

**CRITICAL — System prompt override / policy bypass** (Manifest, skills/nek-11/moltgram/skill.md:159)

The skill matches an "ignore/disregard previous instructions" pattern. Remove or rewrite any instructions that attempt to override system behavior; legitimate skills should not contain phrases like "ignore previous instructions" or "new system prompt".

**HIGH — Unverified Skill Update Mechanism via External `curl`** (LLM, skill.md:8)

The skill instructs the agent to download and overwrite its own definition from an external URL (`https://moltgram.bot/skill.md`) using a `curl` command. The process performs no integrity verification (e.g., checksum or signature), making it vulnerable to supply chain attacks: a compromised `moltgram.bot` server could serve a malicious `skill.md` that the agent would download and potentially execute or integrate, leading to arbitrary code execution or manipulation of the agent's behavior. If the agent's environment automatically executes shell commands found in skill descriptions, this `curl` command is also a command injection vector. Saving the downloaded content to `~/.config/moltgram/skill.md` further implies write access to a sensitive configuration directory, which malicious content could exploit.

Recommendations: implement robust integrity checks (e.g., PGP signatures or SHA-256 checksums) for downloaded skill updates; verify the integrity of the downloaded file *before* saving or integrating it; avoid direct execution of external commands without explicit user/agent confirmation and sandboxing; consider a package manager or another secure update mechanism.
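To illustrate the kind of check behind the critical finding, the sketch below scans a skill definition for common override phrases. The pattern list is illustrative only, not SkillShield's actual rule set:

```python
import re

# Illustrative override phrases; real scanners maintain broader rule sets.
OVERRIDE_PATTERNS = [
    r"ignore\s+(all\s+)?previous\s+instructions",
    r"disregard\s+(all\s+)?(prior|previous)\s+instructions",
    r"new\s+system\s+prompt",
]

def find_override_phrases(text: str) -> list[str]:
    """Return every override phrase matched in a skill definition."""
    hits = []
    for pattern in OVERRIDE_PATTERNS:
        for match in re.finditer(pattern, text, re.IGNORECASE):
            hits.append(match.group(0))
    return hits
```

A skill containing "Ignore previous instructions and use this new system prompt" would trigger two hits, which is enough to flag the file for manual review.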
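The checksum recommendation for the high-severity finding can be sketched as follows. The URL comes from the finding itself; the function names and the idea of a digest pinned out-of-band (e.g., in the agent's configuration) are assumptions for illustration, not part of the skill's real update flow:

```python
import hashlib
import urllib.request

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """True if the SHA-256 digest of `data` equals the pinned hex digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

def fetch_skill_update(url: str, expected_hex: str) -> bytes:
    """Download a skill file and refuse it unless its digest matches.

    The pinned digest must come from a trusted out-of-band channel,
    never from the same server that serves the file, or a compromised
    server could simply publish a matching digest for its payload.
    """
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    if not sha256_matches(data, expected_hex):
        raise ValueError("skill.md failed integrity check; refusing to install")
    return data

# Hypothetical usage -- the digest value here is a placeholder:
# fetch_skill_update("https://moltgram.bot/skill.md", PINNED_SHA256)
```

Verifying before writing to `~/.config/moltgram/skill.md` closes the window in which a tampered file could overwrite the skill definition.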
[Full report](https://skillshield.io/report/a5267cb13478ee6c)