Trust Assessment
oneshot received a trust score of 20/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 2 critical, 3 high, 1 medium, and 0 low severity. Key findings include "File read + network send exfiltration," "Sensitive path access: AI agent config," and "Universal tool call with untrusted input risk."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** — AI agent config/credential file access. *Remediation:* Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/tormine/oneshot-agent/SKILL.md:198 |
| CRITICAL | **Universal tool call with untrusted input risk** — The `agent.tool` method is described as a "universal tool call" that can dynamically invoke any other method of the OneShot SDK. If the `tool` name and its `args` are derived from untrusted LLM output, an attacker could manipulate the agent into executing arbitrary powerful functions (e.g., sending emails, making calls, buying products, building websites) with financial implications, bypassing tool-specific access controls. *Remediation:* Strictly validate and allow-list the `tool` name and `args` passed to `agent.tool` whenever these inputs originate from an untrusted source such as LLM output. Ensure the LLM cannot freely choose any tool or arbitrary arguments, and consider removing or restricting the universal tool call if fine-grained control is paramount. | LLM | SKILL.md:190 |
| HIGH | **Sensitive path access: AI agent config** — Access to the AI agent config path `~/.claude/` was detected, which may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared. | Static | skills/tormine/oneshot-agent/SKILL.md:198 |
| HIGH | **Unpinned dependencies in installation instructions** — The installation commands `npm install @oneshot-agent/sdk`, `npm install -g @oneshot-agent/mcp-server`, and `npx -y @oneshot-agent/mcp-server` do not specify package versions, so the latest version is always installed. This exposes the agent environment to supply-chain attacks: a malicious update to either package could compromise the system. *Remediation:* Pin dependencies to specific versions (e.g., `npm install @oneshot-agent/sdk@1.1.0`; for `npx`, specify the version explicitly, e.g., `npx @oneshot-agent/mcp-server@1.1.0`), and regularly review and update pinned versions after verifying their integrity. | LLM | SKILL.md:24 |
| HIGH | **Potential data exfiltration via email attachments** — The `agent.email` method allows sending attachments with arbitrary `base64String` content. If an attacker can control an attachment's `content` (e.g., through prompt injection influencing the LLM's output), they could encode and exfiltrate sensitive files or data from the agent's execution environment via email. *Remediation:* Strictly filter and validate attachment content, especially when it originates from untrusted sources; consider restricting attachment types or requiring explicit user confirmation for attachments with sensitive content; and ensure the LLM cannot generate arbitrary base64 strings for attachments. | LLM | SKILL.md:45 |
| MEDIUM | **PII gathering capabilities with untrusted input risk** — Methods such as `findEmail`, `verifyEmail`, `enrichProfile`, and `peopleSearch` are designed to gather and process personally identifiable information (PII). If an attacker can control their inputs (e.g., names, companies, LinkedIn URLs, job titles), they could leverage the agent for reconnaissance, gather sensitive data about individuals, or verify the existence of certain PII, potentially violating privacy or aiding social-engineering attacks. *Remediation:* Strictly validate and sanitize inputs to all PII-related methods; carefully control the LLM's access to these tools; require that PII inputs be explicitly confirmed by a human or derived from trusted, pre-approved sources; and consider rate limiting or access controls for these sensitive operations. | LLM | SKILL.md:100 |
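The allow-listing remediation for the universal-tool-call finding can be sketched as a gate in front of `agent.tool`. This is a minimal illustration, not OneShot SDK code: `ALLOWED_TOOLS`, `ARG_VALIDATORS`, and `validateToolCall` are hypothetical names, and the per-tool argument shapes are assumptions.

```typescript
type ToolArgs = Record<string, unknown>;

// Explicit allow-list: any tool name not listed here is denied outright.
const ALLOWED_TOOLS = new Set(["findEmail", "verifyEmail"]);

// Per-tool argument validators; anything without a validator is denied.
const ARG_VALIDATORS: Record<string, (args: ToolArgs) => boolean> = {
  findEmail: (a) =>
    typeof a.name === "string" && typeof a.company === "string",
  verifyEmail: (a) =>
    typeof a.email === "string" && /^[^@\s]+@[^@\s]+$/.test(String(a.email)),
};

// Gate to run before forwarding an LLM-chosen tool call to agent.tool.
function validateToolCall(tool: string, args: ToolArgs): boolean {
  if (!ALLOWED_TOOLS.has(tool)) return false; // unknown tool: deny
  const validator = ARG_VALIDATORS[tool];
  return validator ? validator(args) : false; // no validator: deny
}

console.log(validateToolCall("verifyEmail", { email: "a@b.co" })); // true
console.log(validateToolCall("email", { to: "victim" }));          // false
```

A deny-by-default gate like this means new SDK methods stay unreachable from LLM output until someone deliberately adds them to the allow-list.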
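The sensitive-path findings (the `~/.claude/` access flagged above) suggest a filesystem guard that rejects reads under agent-config and credential directories. A minimal sketch, assuming Node.js; `isSensitivePath` and the prefix list are illustrative, not a SkillShield or OneShot API:

```typescript
import * as os from "os";
import * as path from "path";

// Directories that a skill should never read unless its manifest declares them.
const SENSITIVE_PREFIXES = [".claude", ".ssh", ".aws"].map((d) =>
  path.join(os.homedir(), d)
);

function isSensitivePath(p: string): boolean {
  // Expand a leading "~" and normalize before comparing prefixes,
  // so "~/.claude/../.claude/x" style paths are still caught.
  const resolved = path.resolve(p.replace(/^~(?=$|\/)/, os.homedir()));
  return SENSITIVE_PREFIXES.some(
    (prefix) => resolved === prefix || resolved.startsWith(prefix + path.sep)
  );
}

console.log(isSensitivePath("~/.claude/settings.json")); // true: deny
console.log(isSensitivePath("./data/report.csv"));       // false: allow
```

Resolving the path before checking matters: prefix checks on the raw string are trivially bypassed with `..` segments or symlink-style indirection.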
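For the attachment-exfiltration finding, the suggested content filtering could take the shape of a check run before `agent.email` accepts an attachment: decode the base64 payload, cap its size, and allow only an explicit set of MIME types. A hedged sketch; `attachmentAllowed` and the specific limits are assumptions, not part of the SDK:

```typescript
const MAX_ATTACHMENT_BYTES = 1024 * 1024; // 1 MiB cap (illustrative)
const ALLOWED_TYPES = new Set(["text/plain", "application/pdf"]);

function attachmentAllowed(base64String: string, mimeType: string): boolean {
  if (!ALLOWED_TYPES.has(mimeType)) return false;
  // Decode and measure the real payload; Buffer.from tolerates malformed
  // base64, so checking the decoded length is more reliable than the
  // string length.
  const bytes = Buffer.from(base64String, "base64");
  return bytes.length > 0 && bytes.length <= MAX_ATTACHMENT_BYTES;
}

const payload = Buffer.from("quarterly report").toString("base64");
console.log(attachmentAllowed(payload, "text/plain"));       // true
console.log(attachmentAllowed(payload, "application/x-sh")); // false
```

A size cap does not stop exfiltration on its own, but combined with a type allow-list and the user-confirmation step recommended above it sharply narrows what an injected prompt can smuggle out.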
[View the full report](https://skillshield.io/report/12de8f4af817fe33)
Powered by SkillShield