Trust Assessment
denario received a trust score of 63/100, placing it in the Caution category: the skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 0 high, 1 medium, and 1 low severity. Key findings, in severity order: potential command injection via user-provided research inputs (critical), network egress to untrusted endpoints (medium), and covert behavior / concealment directives (low).
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Every layer scored 70 or above, reflecting consistent security practices across the codebase.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential for Command Injection via User-Provided Research Inputs.** The `denario` skill is explicitly designed as a multiagent AI system that automates scientific research workflows, including executing computational experiments. Users define the research context via `set_data_description` (which can specify tools such as pandas, sklearn, and matplotlib) and supply methodologies via `set_method` (which accepts a path to a custom markdown file); `get_results()` then runs the methodology and performs the computations. If the content supplied through these inputs (data descriptions, methodologies, or external files) is not rigorously sanitized, validated, and executed within a secure, isolated sandbox, an attacker could inject arbitrary code. This dynamic code execution capability, driven by user input and LLM generation, presents a critical command injection vulnerability. *Recommendation:* implement robust input validation and sanitization for all user-provided research inputs (data descriptions, ideas, methodologies, results); execute all computational experiments and methodology steps within a strictly sandboxed environment (e.g., isolated Docker containers, or secure Python execution environments with restricted capabilities) to prevent arbitrary code execution on the host; and subject any LLM-generated code to the same validation and sandboxing before execution. | LLM | SKILL.md:66 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Recommendation:* review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Recommendation:* remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
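The remediation for the critical finding centers on executing user-provided or LLM-generated code only inside an isolated environment. The sketch below is illustrative and not part of denario or SkillShield: it runs untrusted Python in a separate interpreter process using `-I` (isolated mode) with a hard timeout. This is only a first layer; real deployments also need OS-level isolation such as containers, seccomp profiles, or resource limits.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute untrusted Python source in a child interpreter.

    -I puts the child in isolated mode (no user site-packages, no
    PYTHON* environment variables), and the timeout kills runaway
    computations. Note this alone does NOT restrict filesystem or
    network access -- it is a minimal first layer, not a sandbox.
    """
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout

print(run_untrusted("print(2 + 2)"))  # prints 4
```

In practice the same pattern extends to methodology files and LLM-generated experiment code: nothing supplied by the user or the model is ever passed to `exec` or a shell in the host process.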
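The two manifest-layer signals, raw-IP egress and zero-width stealth text, are both mechanically detectable. A minimal sketch of such checks, assuming nothing about SkillShield's actual implementation:

```python
import ipaddress
import re
import unicodedata
from urllib.parse import urlparse

# Zero-width and bidirectional-override code points commonly used
# to conceal instructions in otherwise visible text.
STEALTH = re.compile("[\u200b\u200c\u200d\u2060\ufeff\u202a-\u202e\u2066-\u2069]")

def has_raw_ip_endpoint(url: str) -> bool:
    """True if the URL's host is a literal IP address, not a domain name."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def find_stealth_chars(text: str) -> list[str]:
    """Unicode names of any hidden characters present in the text."""
    return [unicodedata.name(m.group(), "UNKNOWN") for m in STEALTH.finditer(text)]

print(has_raw_ip_endpoint("http://203.0.113.7/hook"))   # True
print(has_raw_ip_endpoint("https://api.figma.com/v1"))  # False
print(find_stealth_chars("run\u200bthis"))              # ['ZERO WIDTH SPACE']
```

Running checks like these over every URL and string literal in a skill's manifest files is enough to surface both of the findings reported above.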
[View the full report](https://skillshield.io/report/618b38ab52eec791)
Powered by SkillShield