Trust Assessment
pdauth received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. The high-severity findings are Potential Command Injection via User-Controlled Arguments and Supply Chain Risk: Unpinned Node.js Dependency; the medium-severity finding is Excessive Permissions: Broad API Access and Data Exposure.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via User-Controlled Arguments | LLM | SKILL.md:40 |
| HIGH | Supply Chain Risk: Unpinned Node.js Dependency | LLM | Manifest |
| MEDIUM | Excessive Permissions: Broad API Access and Data Exposure | LLM | SKILL.md:100 |

HIGH: Potential Command Injection via User-Controlled Arguments (LLM, SKILL.md:40)

The skill instructs the AI agent to execute shell commands (`pdauth call`) with arguments that can be directly influenced by user input (e.g., `text`, `query`, `title`, `content`). If `pdauth` does not properly sanitize these arguments before passing them to an underlying shell or API, a malicious user could inject arbitrary shell commands or manipulate API calls, for example by supplying input like `text='Hello $(evil_command)'` or by crafting malicious JSON for the `--args` parameter. Remediation: the `pdauth` tool must rigorously sanitize all user-provided arguments to prevent shell-metacharacter injection and malicious API payload construction, and the AI agent should be instructed to escape or validate user input before constructing `pdauth` commands, especially for arguments interpolated directly into shell commands or JSON payloads.

HIGH: Supply Chain Risk: Unpinned Node.js Dependency (LLM, Manifest)

The skill's manifest installs the `pdauth` Node.js package without a pinned version (`'package': 'pdauth'`), so `npm install -g pdauth` always fetches the latest available version. If a malicious actor compromised the `pdauth` maintainer's account or the npm registry, they could publish a backdoored version, and any AI agent installing or updating this skill would automatically receive and execute the malicious code, resulting in a supply chain attack. Remediation: pin the `pdauth` version in the manifest (e.g., `'package': 'pdauth@1.2.3'`) to ensure deterministic and secure installations, review and update the pinned version only after verifying the integrity of new releases, and use a package lock file where the ecosystem supports one.

MEDIUM: Excessive Permissions: Broad API Access and Data Exposure (LLM, SKILL.md:100)

The skill grants the AI agent the ability to connect to and interact with 2500+ APIs via Pipedream, allowing it to perform powerful actions such as sending messages, creating pages, sending emails, and managing GitHub issues. While this is the intended functionality, it represents a very broad attack surface: a compromised or prompt-injected agent could abuse this access to perform unauthorized actions across numerous services. Additionally, the `pdauth status --all` command lists all users and their connected applications, which could disclose sensitive information if misused. Remediation: implement strict access controls and monitoring for the agent's use of this skill, limit the APIs the agent can connect to or the actions it can perform based on its role, and permit `pdauth status --all` only when explicitly authorized for legitimate purposes, ensuring its output is not exposed to unauthorized users.
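For the unpinned-dependency finding, the fix is a one-line manifest change, shown here in the key/value form the finding quotes (the exact manifest schema is assumed, and `1.2.3` is a placeholder for a release you have actually vetted):

```
'package': 'pdauth@1.2.3'
```

With an exact version pinned, `npm install -g` resolves deterministically, and a newly published (potentially compromised) release is never pulled in without an explicit, reviewed update to the pin.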
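For the command-injection finding, one standard mitigation is to never interpolate user input into a command string at all: build an argv list and run it without a shell, so metacharacters like `$(...)` remain inert data. The sketch below is illustrative only; the `--user` and `--args` flag names mirror the finding's description of `pdauth call` but should be checked against the real CLI.

```python
import json

def build_pdauth_argv(user_id: str, app: str, action: str, args: dict) -> list:
    """Build an argv list for `pdauth call` (flag names assumed, not verified).

    Passing this list to subprocess.run with the default shell=False means
    user-supplied text such as "$(evil_command)" or ";" is treated as plain
    data, never as shell syntax.
    """
    return [
        "pdauth", "call", app, action,
        "--user", user_id,
        "--args", json.dumps(args),  # JSON-encode instead of string-building
    ]

# The agent would then invoke it without a shell, e.g.:
# subprocess.run(build_pdauth_argv("u1", "slack", "send_message",
#                                  {"text": user_text}), check=True)
```

Note that this only protects the process-spawning step on the agent's side; `pdauth` itself must still validate the decoded arguments before forwarding them to downstream APIs.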
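For the excessive-permissions finding, one way to narrow the 2500+-API surface is an explicit allowlist of (app, action) pairs checked before any `pdauth call` is constructed. This is a minimal sketch under assumed names; the app and action identifiers below are illustrative, not taken from the skill.

```python
# Deny-by-default policy: only (app, action) pairs the deployment has
# explicitly opted into are permitted. Everything else is rejected.
ALLOWED_ACTIONS = {
    "slack": {"send_message"},      # illustrative app/action names
    "notion": {"create_page"},
}

def is_permitted(app: str, action: str) -> bool:
    """Return True only for allowlisted (app, action) pairs."""
    return action in ALLOWED_ACTIONS.get(app, set())
```

An agent wrapper would consult `is_permitted` before building the command and log or refuse anything outside the list, which also gives a natural audit point for monitoring the skill's use.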
[Full report](https://skillshield.io/report/29cee159cbd89adf)