Trust Assessment
data-cache received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 10 findings: 5 critical, 4 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and a dangerous `exec()` call.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100, reflecting the critical behavioral findings detailed below.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (10)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (exec/eval/compile). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/dgriffin831/skill-scan/test-fixtures/evasive-09-python-pickle/cache.py:23 |
| CRITICAL | **Arbitrary command execution.** Dynamic import of dangerous module. Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/dgriffin831/skill-scan/test-fixtures/evasive-09-python-pickle/cache.py:26 |
| CRITICAL | **Dangerous call: `exec()`.** Call to `exec()` detected in function `<module>`; this can execute arbitrary code. Remediation: avoid dangerous functions like exec/eval/os.system; use safer alternatives. | Static | skills/dgriffin831/skill-scan/test-fixtures/evasive-09-python-pickle/cache.py:23 |
| CRITICAL | **Obfuscated code exfiltrates API key and executes commands.** A base64-encoded string is decoded and executed via `exec()`. The decoded payload imports `os` and `subprocess`, then uses `subprocess.run` to call `curl` to exfiltrate the `ANTHROPIC_API_KEY` environment variable to an external malicious URL (`https://evil.com/c`). This represents a severe compromise of data, credentials, and system integrity through hidden instructions and command injection. Remediation: remove the base64-encoded string and the `exec` call; never execute obfuscated or untrusted code; review all code for similar hidden instructions and ensure no sensitive environment variables are accessed or exfiltrated. | LLM | cache.py:18 |
| CRITICAL | **Direct command execution via `subprocess.call`.** The skill directly imports and uses `subprocess.call` to execute arbitrary commands (`whoami` in this case). This allows arbitrary command injection if any part of the command array is derived from untrusted input, leading to potential system compromise. Remediation: avoid direct use of `subprocess.call` with hardcoded or untrusted commands; if external processes must be invoked, use safer alternatives like `subprocess.run` with `check=True` and carefully sanitize all inputs, ensuring commands are fully specified and not user-controlled. | LLM | cache.py:26 |
| HIGH | **Unsafe deserialization / dynamic eval.** Python builtins/import manipulation. Remediation: remove obfuscated code execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/dgriffin831/skill-scan/test-fixtures/evasive-09-python-pickle/cache.py:26 |
| HIGH | **Dangerous call: `__import__()`.** Call to `__import__()` detected in function `<module>`; this can execute arbitrary code. Remediation: avoid dangerous functions like exec/eval/os.system; use safer alternatives. | Static | skills/dgriffin831/skill-scan/test-fixtures/evasive-09-python-pickle/cache.py:26 |
| HIGH | **Unsafe deserialization using `pickle.load`.** The `load_cache` function uses `pickle.load` to deserialize data from a file. `pickle.load` is known to be insecure against maliciously constructed data, as it can execute arbitrary code during deserialization. The code itself acknowledges this risk in a comment, indicating a clear injection vulnerability if untrusted data is loaded. Remediation: do not use `pickle` for deserializing data from untrusted sources; consider safer serialization formats like JSON, YAML, or Protocol Buffers; if `pickle` must be used, ensure the source of the pickled data is absolutely trusted and integrity-checked. | LLM | cache.py:14 |
| HIGH | **Dynamic execution using `getattr(builtins, 'exec')`.** The skill retrieves the `exec` function dynamically via `getattr(builtins, 'exec')` and uses it to execute a string. While the currently executed string is benign (`print("cached")`), this technique is often used to bypass static analysis and allows arbitrary code execution if the executed string is derived from untrusted input. Remediation: avoid dynamic retrieval and execution of built-in functions like `exec` or `eval`; if dynamic code execution is strictly necessary, ensure the source of the code is fully trusted and validated, and consider a sandboxed environment. | LLM | cache.py:30 |
| MEDIUM | **Dangerous call: `compile()`.** Call to `compile()` detected in function `<module>`; this can execute arbitrary code. Remediation: avoid dangerous functions like exec/eval/os.system; use safer alternatives. | Static | skills/dgriffin831/skill-scan/test-fixtures/evasive-09-python-pickle/cache.py:23 |
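The `pickle.load` finding above recommends switching to a format that cannot execute code during parsing. The sketch below shows what that remediation could look like, assuming a simple key-value cache; the function names `save_cache`/`load_cache` echo the `load_cache` mentioned in the report but are hypothetical, not the actual `cache.py` code.

```python
import json
import os
import tempfile

def save_cache(path, data):
    # json.dump writes plain data (dicts, lists, strings, numbers) and,
    # unlike pickle, cannot embed an executable payload in the file.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f)

def load_cache(path, default=None):
    # json.load never executes code while parsing, so a tampered cache
    # file can at worst raise a ValueError, never run an attacker's code.
    if not os.path.exists(path):
        return default
    with open(path, "r", encoding="utf-8") as f:
        try:
            return json.load(f)
        except ValueError:
            return default

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "cache.json")
    save_cache(path, {"hits": 3})
    print(load_cache(path))  # {'hits': 3}
```

The trade-off is that JSON only round-trips basic types; if the cache holds richer objects, serialize them explicitly rather than reaching back for `pickle`.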
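The `subprocess.call` finding's remediation (use `subprocess.run` with `check=True` and a fully specified, non-user-controlled command) can be sketched as follows. The command here (`echo`) is a hypothetical stand-in for whatever external tool a skill might legitimately need, not the `whoami` call flagged in the report.

```python
import subprocess

def run_static_command():
    # The argument list is a hardcoded constant: no element is derived
    # from user input, so there is nothing for an attacker to inject.
    result = subprocess.run(
        ["echo", "cache-ok"],
        check=True,           # raise CalledProcessError on non-zero exit
        capture_output=True,  # keep output out of the parent's stdout
        text=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(run_static_command())  # cache-ok
```

Passing a list (rather than a single string with `shell=True`) means no shell ever interprets the arguments, which closes off the classic injection path even if an argument later becomes dynamic.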