Trust Assessment
runstr-fitness received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include direct handling of a Nostr private key (nsec), potential command injection via the user-provided nsec and fetched content, and an unpinned dependency for the `nak` installation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct handling of Nostr private key (nsec).** The skill instructs the AI to ask the user for their Nostr private key (nsec) and use it directly in shell commands for decoding and decryption. Although the skill states the nsec is "never stored, logged, or transmitted," direct handling of such a sensitive credential by the AI agent is a critical risk: a compromised AI environment, or a flaw in the AI's handling logic, could exfiltrate the private key and grant an attacker full control over the user's Nostr identity and data. *Remediation:* avoid direct handling of private keys by the AI; delegate cryptographic operations to a secure, isolated service or a client-side component where the private key never leaves the user's device. If direct handling is unavoidable, implement strict input validation and sandboxing, and purge the key from memory immediately after use. Consider a secure enclave or hardware security module if available. | LLM | SKILL.md:40 |
| HIGH | **Potential command injection via user-provided nsec and fetched content.** The skill constructs and executes shell commands from user input (the nsec) and fetched data (encrypted content). The `nak decode nsec1...`, `nak encrypt --sec $hex_sk ...`, and `node /tmp/decrypt-runstr.mjs <hex_sk> '<content>'` commands are all affected: an `nsec` or fetched `content` crafted to include shell metacharacters (e.g. `;`, `\|`, `$(...)`) could inject arbitrary commands on the AI's host system. The `echo "$content" \| ...` pattern is particularly susceptible. *Remediation:* validate and sanitize all user-provided and external data before it reaches a shell command; prefer APIs that pass arguments safely (e.g. `subprocess.run` with `shell=False` and an argument list) over constructing shell strings; if shell execution is necessary, quote every variable (e.g. `"${VAR}"`) to prevent shell expansion. | LLM | SKILL.md:60 |
| MEDIUM | **Unpinned dependency for `nak` installation.** The skill installs the `nak` tool with `go install github.com/fiatjaf/nak@latest`, which always fetches the most recent version. This is a supply-chain risk: if the `github.com/fiatjaf/nak` repository were compromised, or a malicious change were published as its latest release, the AI agent would install and execute that code without review. *Remediation:* pin the dependency to an immutable version (a commit hash or a specific tag such as `@v1.2.3`) instead of `@latest`, review and update the pin regularly to pick up security fixes, and verify the integrity (e.g. checksums) of downloaded dependencies. | LLM | SKILL.md:55 |
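The HIGH finding's remediation calls for strict input validation before any user value reaches a shell. A minimal sketch of an allowlist check for the nsec (the function name is illustrative, and this is a shape test only — it does not verify the bech32 checksum):

```python
import re

# An nsec is bech32-encoded: the prefix "nsec1" followed by 58 characters
# drawn from the bech32 charset. Rejecting anything else keeps shell
# metacharacters (;, |, $(...)) out of any downstream command.
BECH32_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
NSEC_RE = re.compile(rf"^nsec1[{BECH32_CHARSET}]{{58}}$")

def is_plausible_nsec(value: str) -> bool:
    """Cheap shape check only; does not validate the bech32 checksum."""
    return NSEC_RE.fullmatch(value) is not None
```

A value that fails this check should be rejected outright rather than escaped, since an allowlist is far easier to reason about than shell-escaping rules.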
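The same finding names `subprocess.run` with `shell=False` and an argument list as the safe alternative to building shell strings. A self-contained sketch, using `echo` as a stand-in for the skill's real commands:

```python
import subprocess

# With an argument list (shell=False is the default), each element is
# passed to the program verbatim -- no shell ever parses the untrusted
# data, so metacharacters are treated as plain text.
untrusted = "$(whoami); rm -rf /tmp/x | cat"
result = subprocess.run(
    ["echo", untrusted],  # echo stands in for e.g. the decrypt script
    capture_output=True, text=True,
)
print(result.stdout.strip())  # the metacharacters come back literally, unexpanded
```

Contrast this with `subprocess.run(f"echo {untrusted}", shell=True)`, where the same string would execute `whoami` and the piped commands.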
Full report: https://skillshield.io/report/ff20c72040ef2df1