Trust Assessment
hokipoki received a trust score of 70/100, placing it in the Caution category. Users should review this skill's security findings before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Unpinned global NPM dependency," "Skill requests broad filesystem access for remote processing," and "Potential for command injection via CLI arguments."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned global NPM dependency.** The skill instructs the user to install `@next-halo/hokipoki-cli` globally without specifying a version, leaving the installation open to supply chain attacks: a malicious update to the package could be installed automatically and compromise the user's system. Global installation also widens the attack surface by making the package available system-wide. *Remediation:* pin the dependency to a specific, known-good version (e.g., `npm install -g @next-halo/hokipoki-cli@1.2.3`), audit the package regularly, and consider a package manager that supports lock files for reproducible builds. | LLM | SKILL.md:10 |
| HIGH | **Skill requests broad filesystem access for remote processing.** The `hokipoki request` command lets the LLM construct commands that send `--files`, `--dir`, or even `--all` (the entire project) to a remote AI model. Although the skill claims "API keys never leave the provider's machine; only encrypted requests and results are exchanged" and cites "isolated Docker containers", the local `hokipoki` CLI still reads potentially sensitive data from the filesystem and transmits it. If the CLI does not adequately restrict file access, or the LLM is manipulated into specifying sensitive paths, an attacker could exfiltrate files such as `~/.ssh/id_rsa` or `/etc/passwd`; the `--all` option is particularly broad. *Remediation:* strictly validate and sanitize file paths before passing them to the CLI, restrict access to only the files the task requires, add a user confirmation step for broad or sensitive access, and enforce access controls in the `hokipoki` CLI itself. | LLM | SKILL.md:19 |
| MEDIUM | **Potential for command injection via CLI arguments.** The skill relies on executing `hokipoki` CLI commands with arguments the LLM constructs from user input. If that input (task descriptions, file paths, workspace names) is not sanitized or validated before being placed into CLI arguments, it could enable command injection: a malicious `--task` value could break out of the string and execute arbitrary commands if the CLI or the underlying shell invocation is vulnerable. *Remediation:* strictly validate and escape or quote all user-provided inputs used in CLI arguments, harden the `hokipoki` CLI against malicious arguments, and instruct the LLM to sanitize inputs before constructing commands. | LLM | SKILL.md:19 |
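The command injection finding above is typically mitigated by never interpolating user input into a shell string. A minimal sketch, assuming a hypothetical wrapper around the CLI (`buildRequestArgs` is illustrative and not part of the real hokipoki CLI): build an argv array and hand it to `execFile`, which invokes the binary without a shell.

```typescript
// Hypothetical helper, NOT part of the real hokipoki CLI: build the
// argv array for a `hokipoki request` invocation so user input travels
// as discrete arguments and is never interpolated into a shell string.
function buildRequestArgs(task: string, files: string[]): string[] {
  // Reject control characters defensively, even though execFile
  // performs no shell interpretation of its argument array.
  if (/[\0\r\n]/.test(task)) {
    throw new Error("control characters are not allowed in task descriptions");
  }
  for (const f of files) {
    // A file name beginning with "-" could be parsed as an extra CLI flag.
    if (f.startsWith("-")) {
      throw new Error(`refusing flag-like file argument: ${f}`);
    }
  }
  return ["request", "--task", task, "--files", ...files];
}

// The array is then passed to execFile (no shell involved), e.g.:
//   import { execFile } from "node:child_process";
//   execFile("hokipoki", buildRequestArgs(task, files), callback);
```

Because `execFile` passes each array element as a single argument, a task description like `fix the bug; rm -rf /` reaches the CLI verbatim as one string rather than being split and executed by a shell.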
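For the broad filesystem access finding, one containment strategy is to resolve every requested path and reject anything that escapes the project root. A sketch under stated assumptions (`isInsideProject` is a hypothetical guard, not a function the hokipoki CLI is known to provide):

```typescript
import * as path from "node:path";

// Hypothetical guard, NOT part of the real hokipoki CLI: resolve a
// requested path against the project root and reject anything outside
// it, blocking traversal like `../../.ssh/id_rsa` and absolute paths
// such as `/etc/passwd`.
function isInsideProject(projectRoot: string, requested: string): boolean {
  const root = path.resolve(projectRoot);
  const resolved = path.resolve(root, requested);
  // Inside the root iff the resolved path is the root itself
  // or a descendant of it (root plus a path separator).
  return resolved === root || resolved.startsWith(root + path.sep);
}
```

A check like this only helps if it runs inside the CLI rather than solely in the prompt, since the LLM can be manipulated; a production version would also need to resolve symlinks (e.g., via `fs.realpathSync`) before comparing.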