Trust Assessment
openbotauth received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. The key findings are JavaScript code that writes a private key to the filesystem, JavaScript code that writes an API token to the filesystem, and a shell command that sends credentials in an API request.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **JavaScript code performs filesystem write of private key.** The skill provides a JavaScript code block that generates an Ed25519 keypair and uses `node:fs` to write the private key (along with public key components) to a file (`key.json`) on the agent's filesystem. If an AI agent executes this code, it performs arbitrary filesystem writes and creates a sensitive credential (a private key) in its environment, posing a code-execution risk and a target for subsequent data exfiltration. If the agent is not meant to execute this code, mark it clearly as "for human execution only" or wrap it in a non-executable format; if agent execution is intended, sandbox the execution environment and require explicit user confirmation for sensitive filesystem operations, especially those involving private keys. | LLM | skill.md:29 |
| HIGH | **JavaScript code performs filesystem write of API token.** The skill provides a JavaScript code block that uses `node:fs` to create a directory and write an API token to a file (`token`) on the agent's filesystem. If an AI agent executes this code, it performs arbitrary filesystem writes and stores a sensitive credential (an API token) in its environment, posing a code-execution risk and a target for subsequent data exfiltration. If the agent is not meant to execute this code, mark it clearly as "for human execution only" or wrap it in a non-executable format; if agent execution is intended, sandbox the execution environment and require explicit user confirmation for sensitive filesystem operations, especially those involving API tokens. | LLM | skill.md:69 |
| HIGH | **Shell command for API interaction with credentials.** The skill includes a `curl` command that, if executed by an AI agent, makes an external network request to `https://api.openbotauth.org/agents` and sends sensitive information: an API token via an `Authorization: Bearer` header and public key components in the request body. Letting an agent execute arbitrary shell commands from untrusted content is a significant command injection risk, especially when the commands make network requests with credentials. If the agent is not meant to execute shell commands, mark the command clearly as "for human execution only" or expose the API call through a safe, sandboxed tool rather than a raw shell command; if agent execution is intended, sandbox the command execution environment and require explicit user confirmation for network requests involving credentials. | LLM | skill.md:86 |
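The report does not reproduce the flagged JavaScript, but the first two findings describe a pattern along the following lines. This is an illustrative sketch only: the file names `key.json` and `token` come from the findings, while the directory name, token value, and PEM export formats are assumptions.

```javascript
const { generateKeyPairSync } = require('node:crypto');
const fs = require('node:fs');
const path = require('node:path');

// Finding 1 (skill.md:29): generate an Ed25519 keypair and persist the
// private key, alongside the public key, to key.json on disk. Writing the
// private key to the filesystem is what triggers the HIGH finding.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
fs.writeFileSync('key.json', JSON.stringify({
  publicKey: publicKey.export({ type: 'spki', format: 'pem' }),
  privateKey: privateKey.export({ type: 'pkcs8', format: 'pem' }),
}, null, 2));

// Finding 2 (skill.md:69): create a directory and store an API token in a
// file named `token`. The directory name and token value are hypothetical.
const tokenDir = '.openbotauth';
fs.mkdirSync(tokenDir, { recursive: true });
fs.writeFileSync(path.join(tokenDir, 'token'), 'oba_example_token');
```

Both writes leave long-lived credentials on the agent's filesystem, which is why the report recommends either marking the block as human-only or gating it behind sandboxing and explicit user confirmation.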
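The third finding's `curl` command is likewise not reproduced in the report. Its equivalent can be sketched in Node as a plain request description (not executed here); the request body shape and token value are assumptions based on the finding's description.

```javascript
// Finding 3 (skill.md:86): register the agent with the OpenBotAuth API,
// sending the API token as a Bearer credential. The original is a raw
// curl command; this object mirrors what that command would transmit.
const token = 'oba_example_token'; // hypothetical token value
const url = 'https://api.openbotauth.org/agents';
const options = {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`, // the credential leaves the machine here
    'Content-Type': 'application/json',
  },
  // Public key components in the body; the exact field names are assumed.
  body: JSON.stringify({ publicKey: '...' }),
};
// fetch(url, options) would send the request; it is deliberately not called.
```

Because the command transmits a credential over the network, the report recommends exposing this call through a sandboxed tool, or requiring explicit user confirmation, rather than having an agent run the raw shell command.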
[View the full report on SkillShield](https://skillshield.io/report/182cf0b111ab8598)