Security Audit
lygo-champion-sancora-unified-minds
github.com/openclaw/skillsTrust Assessment
lygo-champion-sancora-unified-minds received a trust score of 87/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Prompt Injection: Instruction to display file content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/deepseekoracle/lygo-champion-sancora-unified-minds/scripts/self_check.py:6 | |
| MEDIUM | Prompt Injection: Instruction to display file content The `SKILL.md` file, which is treated as untrusted input, contains a direct instruction for the host LLM to perform an action: 'show hash from `references/canon.json`'. This constitutes a prompt injection attempt, as untrusted content is trying to manipulate the LLM's behavior by instructing it to read and display data from a local file. While the requested data (a hash) is likely intended to be public for verification, the pattern of embedding instructions within untrusted skill descriptions is a security risk. The presence of `scripts/show_hash.py` further indicates an intended mechanism to fulfill this instruction, highlighting a potential bypass of the untrusted content boundary. Remove direct instructions to the LLM from `SKILL.md` within the untrusted block. If displaying the hash is a core function, it should be invoked explicitly via a clearly defined tool call that the LLM chooses to execute, rather than an instruction embedded in the skill's description. The LLM should strictly adhere to treating all content within the untrusted delimiters as data, not commands. | LLM | SKILL.md:25 |
Scan History
Embed Code
[](https://skillshield.io/report/8c62465dcc03513a)
Powered by SkillShield