Security Audit
WilsonLiu95/openclaw-skills:skills/daily-evolution
github.com/WilsonLiu95/openclaw-skills
Trust Assessment
WilsonLiu95/openclaw-skills:skills/daily-evolution received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are Autonomous Modification of Core System Prompts and Unsafe API Key Validation Instructions.
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 8, 2026 (commit dacc554a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Autonomous Modification of Core System Prompts.** The skill explicitly instructs the agent to modify critical system files (`SOUL.md`, `TOOLS.md`) based on conversation history. This creates a persistence mechanism for prompt injection attacks: if an attacker injects a malicious instruction into the conversation history, the agent is instructed to 'consolidate' it as a best practice and write it into its own core definition files, permanently compromising the agent's behavior. Remediation: disable autonomous writing to core configuration files (`SOUL.md`, `TOOLS.md`); store learned preferences in a separate, lower-priority memory file (e.g. `user_preferences.md`) that cannot override system safety instructions; and implement a Human-in-the-Loop (HITL) approval step for any changes to core agent files (see the first sketch below). | Unknown | SKILL.md:48 |
| MEDIUM | **Unsafe API Key Validation Instructions.** The skill instructs the agent to perform 'API Key validity checks' as part of the tool inventory phase, which implies the LLM must access and process raw credentials. Processing secrets within the LLM context window increases the risk of accidental leakage via hallucination, logging, or subsequent prompt injection attacks, even if the instructions say not to report them. Remediation: remove instructions for the LLM to validate API keys directly; instead use a dedicated, sandboxed tool function that returns a boolean status (valid/invalid) without exposing the raw key material to the LLM context (see the second sketch below). | Unknown | SKILL.md:39 |
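
For the HIGH finding's remediation, a minimal sketch of what such a write guard could look like, assuming a Python tool layer; `guarded_write`, `request_human_approval`, and `record_preference` are hypothetical helpers invented here, while the file names `SOUL.md`, `TOOLS.md`, and `user_preferences.md` come from the finding itself:

```python
# Sketch of a guard that blocks autonomous edits to core agent files.
# The helper names and approval flow are illustrative assumptions, not part
# of the audited skill or of SkillShield's output.
from pathlib import Path

# Core definition files that must never change without human sign-off.
PROTECTED_FILES = {"SOUL.md", "TOOLS.md"}

# Lower-priority store for learned preferences; it cannot override safety rules.
PREFERENCES_FILE = Path("user_preferences.md")


def request_human_approval(path: Path, new_content: str) -> bool:
    """Placeholder HITL step: show the proposed change and ask a reviewer."""
    print(f"--- proposed content for {path} ---\n{new_content}\n---")
    answer = input(f"Approve write to {path}? [y/N] ")
    return answer.strip().lower() == "y"


def guarded_write(path: Path, content: str) -> None:
    """Write `content` to `path`, routing core-file changes through approval."""
    if path.name in PROTECTED_FILES:
        if not request_human_approval(path, content):
            raise PermissionError(f"write to {path} rejected by reviewer")
    path.write_text(content, encoding="utf-8")


def record_preference(note: str) -> None:
    """Append a learned preference to the low-priority memory file instead."""
    with PREFERENCES_FILE.open("a", encoding="utf-8") as f:
        f.write(note.rstrip() + "\n")
```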
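
For the MEDIUM finding, a sketch of the recommended sandboxed check, assuming the key lives in an environment variable; the variable name `SERVICE_API_KEY` and the probe URL are illustrative assumptions, not taken from the skill:

```python
# Sandboxed validity check: the tool reads the key from the environment and
# returns only a boolean, so the raw secret never enters the LLM context.
import os
import urllib.request


def api_key_is_valid(env_var: str = "SERVICE_API_KEY",
                     probe_url: str = "https://api.example.com/v1/ping") -> bool:
    """Return True if the stored key authenticates against a cheap endpoint."""
    key = os.environ.get(env_var)
    if not key:
        return False
    req = urllib.request.Request(
        probe_url, headers={"Authorization": f"Bearer {key}"}
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return 200 <= resp.status < 300
    except Exception:
        # Auth failures, network errors, etc. all collapse to "not valid";
        # no key material is logged or returned to the caller.
        return False
```

The agent only ever sees the boolean result of `api_key_is_valid()`, which satisfies the tool-inventory check without placing credentials in the context window.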
Full report: https://skillshield.io/report/2882bb8004349ddc