Trust Assessment
The `dashlane` skill received a trust score of 65/100, placing it in the Caution category. The skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: potential command injection via `dcli exec` and argument interpolation; broad secret exposure via `dcli exec` and `dcli inject`; and direct output of sensitive data to the console or JSON.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential command injection via `dcli exec` and argument interpolation.** The skill documentation demonstrates `dcli exec -- mycommand`, which injects Dashlane secrets into the environment of an arbitrary command. If `mycommand` is derived from untrusted user input without sanitization, it can lead to arbitrary command execution. Other `dcli` commands also take arguments (e.g., `dcli p mywebsite`); if those arguments are interpolated directly from untrusted input, shell metacharacters could be used to inject and execute arbitrary commands. **Mitigation:** when constructing `dcli` commands, especially `dcli exec`, strictly validate and shell-escape any user-provided input, or use a safe command execution library, or explicitly whitelist allowed commands and arguments. | LLM | SKILL.md:136 |
| HIGH | **Broad secret exposure via `dcli exec` and `dcli inject`.** The skill describes `dcli exec -- mycommand` and `dcli inject < template.txt > output.txt`, which inject sensitive Dashlane secrets (passwords, notes, OTPs) into the environment variables of any specified command or directly into arbitrary files. This grants excessive access to potentially untrusted commands and allows sensitive data to be written to insecure locations, significantly increasing the risk of exfiltration if the skill is prompted to use these features with an untrusted command or file path. **Mitigation:** require explicit user confirmation before the skill runs `dcli exec` or `dcli inject` with user-provided commands or paths; whitelist or heavily sanitize the target command or file path; and limit the injected secrets to the minimum necessary. | LLM | SKILL.md:136 |
| MEDIUM | **Direct output of sensitive data to console/JSON.** The skill documentation shows commands such as `dcli p mywebsite -o console` and `dcli p mywebsite -o json`, which print sensitive information (passwords, secure notes, secrets) to standard output or as JSON. If an LLM executes these commands and then includes the raw output in its response or logs, credentials and other sensitive data can be exposed unintentionally. **Mitigation:** redact or securely handle sensitive fields in `dcli` output before presenting or storing it, and never include raw sensitive output in LLM responses without explicit user consent and a secure channel. | LLM | SKILL.md:50 |
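The command-injection mitigation above can be sketched in Python. This is a minimal, hedged example, not part of the skill itself: the `SAFE_NAME` pattern and `fetch_password` helper are assumptions chosen for illustration; only the `dcli p <item>` invocation comes from the skill documentation. The key idea is to pass arguments as a list (no shell) and reject names outside a conservative allowlist.

```python
import re
import subprocess

# Assumed conservative allowlist for vault item names (illustrative, not from the skill).
SAFE_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def fetch_password(item: str) -> str:
    """Run `dcli p <item>` without a shell so metacharacters in `item` are inert."""
    if not SAFE_NAME.fullmatch(item):
        raise ValueError(f"rejected vault item name: {item!r}")
    # Passing a list with shell=False makes `item` a single argv entry,
    # never parsed by a shell, so `;`, `|`, `$(...)` etc. have no effect.
    result = subprocess.run(
        ["dcli", "p", item], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

With this pattern, an input like `"mywebsite; rm -rf /"` is rejected before any process is spawned, rather than being interpolated into a shell string.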
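The whitelisting recommendation for `dcli exec` could look like the following sketch. The `ALLOWED_COMMANDS` set and `exec_with_secrets` wrapper are hypothetical names invented here; the `dcli exec -- <command>` form is taken from the skill documentation. Only an explicitly allowlisted executable ever receives injected secrets.

```python
import shlex
import subprocess

# Hypothetical allowlist of executables permitted to receive injected secrets;
# tailor this to the workflows you actually need.
ALLOWED_COMMANDS = {"npm", "node", "terraform"}

def exec_with_secrets(command: str) -> None:
    """Gate `dcli exec -- <command>` behind an executable allowlist."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(
            f"command not allowlisted for secret injection: {argv[:1]}"
        )
    # `--` ends dcli's own option parsing; argv is passed through verbatim.
    subprocess.run(["dcli", "exec", "--", *argv], check=True)
```

A deny-by-default allowlist is preferable to trying to sanitize arbitrary commands, since there is no reliable blocklist for "commands that can exfiltrate an environment variable".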
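The redaction step for `dcli ... -o json` output can be sketched as a recursive field mask. The `SENSITIVE_KEYS` names below are assumptions (the actual field names in `dcli` JSON output may differ); the point is to scrub values before any output reaches an LLM response or a log.

```python
import json

# Assumed sensitive field names; adjust to the actual keys in dcli's JSON output.
SENSITIVE_KEYS = {"password", "secret", "otpSecret", "note"}

def redact(obj):
    """Recursively mask values under sensitive keys before logging or display."""
    if isinstance(obj, dict):
        return {
            k: ("***REDACTED***" if k in SENSITIVE_KEYS else redact(v))
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(v) for v in obj]
    return obj

# Example shape, loosely modeled on `dcli p mywebsite -o json` output.
raw = '[{"title": "mywebsite", "login": "alice", "password": "hunter2"}]'
print(json.dumps(redact(json.loads(raw))))
```

Redacting at the boundary where `dcli` output is parsed, rather than in the rendering layer, ensures no code path downstream ever sees the plaintext secret.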
Scan History
[Full report on SkillShield](https://skillshield.io/report/60b3a3d567c1d65c)
Powered by SkillShield