# Trust Assessment
safe-calculator received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 9 critical, 0 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution and dangerous calls to `eval()` and `exec()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Security Findings (9)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (`exec`/`eval`/`compile`). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:18` |
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (`exec`/`eval`/`compile`). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:24` |
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (`exec`/`eval`/`compile`). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:25` |
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (`exec`/`eval`/`compile`). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:31` |
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (`exec`/`eval`/`compile`). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:32` |
| CRITICAL | **Dangerous call: `eval()`.** Call to `eval()` detected in function `calculate`. This can execute arbitrary code. Avoid using dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:25` |
| CRITICAL | **Dangerous call: `exec()`.** Call to `exec()` detected in function `process_formula`. This can execute arbitrary code. Avoid using dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/dgriffin831/skill-scan/test-fixtures/command-injection-eval/calculate.py:32` |
| CRITICAL | **Command injection via `eval()`.** The `calculate` function uses `eval()` directly on the `expression` parameter. This allows an attacker to execute arbitrary Python code by providing a malicious string as input, leading to full command injection and potential compromise of the host system or data. Never use `eval()` with untrusted input. For mathematical expressions, use a dedicated, secure mathematical parser library (e.g., `sympy.sympify` with appropriate safeguards, or a custom parser) that does not allow arbitrary code execution. If only literal Python structures are needed, use `ast.literal_eval`. | LLM | `calculate.py:21` |
| CRITICAL | **Command injection via `exec()`.** The `process_formula` function uses `exec()` with an f-string that incorporates the `formula` and `variables` parameters directly. This allows an attacker to execute arbitrary Python code by injecting malicious strings into these parameters, leading to full command injection and potential compromise of the host system or data. Never use `exec()` with untrusted input. Re-architect the functionality to avoid executing arbitrary code. If dynamic code execution is absolutely necessary, implement strict sandboxing and input validation, or use a safer, more controlled mechanism. | LLM | `calculate.py:27` |
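To make the remediation concrete, here is a minimal sketch of the flagged pattern next to one possible safe replacement. The `calculate_unsafe` and `calculate_safe` names are illustrative, not code from the scanned skill, and the AST-walking parser is one assumption about how a restricted arithmetic evaluator could look; `sympy` or another vetted parser library would be an equally valid route.

```python
import ast
import operator

def calculate_unsafe(expression: str):
    # Vulnerable pattern flagged in calculate.py: eval() on untrusted input
    # executes arbitrary Python, e.g. "__import__('os').system('...')".
    return eval(expression)  # CRITICAL: arbitrary code execution

# Safer alternative: parse the expression and walk the AST,
# allowing only numeric literals and basic arithmetic operators.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def calculate_safe(expression: str):
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("disallowed expression element")
    return _eval(ast.parse(expression, mode="eval"))

print(calculate_safe("2 + 3 * 4"))  # → 14
```

With this approach, an injection payload such as `"__import__('os').system('id')"` parses into a function-call node that the walker does not recognize, so it raises `ValueError` instead of executing anything.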
[](https://skillshield.io/report/2122af5872e3ee96)