Trust Assessment
langgraph received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding (1 critical, 0 high, 0 medium, 0 low): arbitrary code execution via `eval()` in the calculator tool.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary code execution via `eval()` in calculator tool. The `calculator` tool passes the user-supplied `expression` argument directly to `eval()`, allowing an attacker to execute arbitrary Python code and achieve command injection, data exfiltration, or system compromise. For example, an attacker could pass `__import__('os').system('rm -rf /')` or `__import__('subprocess').run(['cat', '/etc/passwd'])`. Remediation: replace `eval()` with a safer alternative for evaluating mathematical expressions, such as `ast.literal_eval` for simple literals or a dedicated, sandboxed expression parser that cannot execute arbitrary code, and strictly validate and sanitize all inputs before processing. | LLM | SKILL.md:28 |
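The remediation above can be sketched as follows. This is a minimal illustration, not the skill's actual code: it assumes a hypothetical `safe_calculate` replacement for the `eval()`-based calculator, walking the parsed AST and permitting only arithmetic nodes so that payloads like `__import__('os').system(...)` are rejected instead of executed.

```python
import ast
import operator

# Arithmetic operators the calculator is allowed to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.Mod: operator.mod,
    ast.USub: operator.neg,
    ast.UAdd: operator.pos,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a purely arithmetic expression; raise ValueError for anything else."""
    def _eval(node: ast.AST) -> float:
        # Numeric literals only (no strings, names, or attribute access).
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        # Binary arithmetic: recurse into both operands.
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        # Unary plus/minus.
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Anything else (calls, imports, attributes, subscripts) is rejected.
        raise ValueError("disallowed expression element")
    tree = ast.parse(expression, mode="eval")
    return _eval(tree.body)
```

With this approach, `safe_calculate("2 + 3 * 4")` returns `14`, while `safe_calculate("__import__('os').system('ls')")` raises `ValueError` because the `Call` node is not in the allow-list.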
Scan History
[View full report](https://skillshield.io/report/fb803ed447aff837)
Powered by SkillShield