Trust Assessment
Confidant received a trust score of 45/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, potential command injection through unsanitized arguments, and secrets exposed in command-line arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | skills/ericsantos/confidant/SKILL.md:69 |
| HIGH | **Potential command injection through unsanitized arguments.** The skill instructs the AI agent to construct `npx` commands using placeholders like `<description>`, `<url>`, and `<value>`. If the agent substitutes unsanitized user-provided input directly into these placeholders, and the underlying `@aiconnect/confidant` tool does not robustly sanitize its arguments before internal shell execution, an attacker could craft malicious input (e.g., `"; rm -rf /"`) to execute arbitrary commands on the host system. *Remediation:* Instruct the agent to always sanitize, escape, or validate user-provided input before substituting it into shell commands, and implement robust input validation within the `@aiconnect/confidant` tool itself. | LLM | SKILL.md:33 |
| MEDIUM | **Secrets exposed in command-line arguments.** The skill demonstrates passing secrets directly as command-line arguments to `npx @aiconnect/confidant fill`. This can expose them in process lists (`ps aux`), shell history, and system logs. Although an alternative using `echo "$SECRET" \| ...` is provided, the primary `fill` examples show direct argument passing. *Remediation:* Use environment variables or standard input (stdin) for secrets instead of command-line arguments. Update all examples to prioritize `echo "$SECRET" \| npx @aiconnect/confidant fill "<url>" --secret -`, or a secure input method provided by the `confidant` tool itself if available. | LLM | SKILL.md:80 |
| MEDIUM | **Unpinned dependency in `npx` command.** The skill uses `npx @aiconnect/confidant` without specifying a version, so `npx` always fetches and executes the latest published version of the package. This is a supply chain risk: a malicious update (e.g., via a compromised maintainer account or repository) could silently introduce vulnerabilities or backdoors into the agent's execution environment. *Remediation:* Pin the dependency to a specific, known-good version (e.g., `npx @aiconnect/confidant@1.2.3`) and review each version bump before adopting it. | LLM | SKILL.md:33 |
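The command-injection finding above can be demonstrated in isolation. This is a hypothetical sketch: the payload and the stand-in commands are examples, not taken from the skill itself.

```shell
payload='x"; echo INJECTED; true "'

# UNSAFE: interpolating the value into a string that a shell re-parses
# lets the embedded `echo INJECTED` break out of the quotes and run.
unsafe_out=$(sh -c "echo \"$payload\"")

# SAFER: pass the value as a single argv element; the shell never
# re-parses its contents, so quotes and semicolons stay literal.
safe_out=$(printf '%s' "$payload")

echo "$unsafe_out"   # includes output from the injected command
echo "$safe_out"     # the payload, verbatim
```

The same reasoning applies to the skill's placeholders: any `<value>` substituted into a string that is later handed to a shell must be treated as hostile until validated.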
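For the secrets-in-argv finding, the stdin-based pattern the report recommends can be sketched as follows. This is illustrative only: `cat` stands in for the confidant CLI, and the secret value is a placeholder.

```shell
SECRET='s3cr3t-value'

# Exposed: the secret lands in the argument list, visible via `ps aux`:
#   npx @aiconnect/confidant fill "<url>" --secret "$SECRET"

# Preferred: deliver the secret on stdin (`--secret -` per the skill's
# own alternative), so it never appears in any process listing.
delivered=$(printf '%s' "$SECRET" | cat)
echo "$delivered"
```

Environment variables are a reasonable middle ground, but stdin is the stronger default since argv and (on some systems) the environment of other users' processes can be inspected.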
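The version-pinning remediation amounts to a one-character change in the package specifier. A minimal sketch, assuming `1.2.3` is an illustrative rather than vetted release:

```shell
# Unpinned: resolves to whatever version is latest on npm at run time.
#   npx @aiconnect/confidant fill "<url>"

# Pinned: npx installs and runs exactly this version.
pkg='@aiconnect/confidant@1.2.3'
#   npx "$pkg" fill "<url>"

# Extracting the pinned version back out (e.g. for an audit log):
version=${pkg##*@}
echo "$version"   # 1.2.3
```

Pinning alone does not verify integrity; pairing it with a lockfile or checksum check closes the remaining gap against a republished version.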
[View the full report](https://skillshield.io/report/563bbd6ee4023922)
Powered by SkillShield