Trust Assessment
chitin received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings: "Prompt Injection and Data Exfiltration via `--force` on `chitin promote`" and "Potential Data Exfiltration via Embedding Queries".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection and Data Exfiltration via `--force` on `chitin promote`.** The `chitin promote --force` command overrides critical safety checks (e.g., blocking relational insights, low-confidence insights, unreinforced insights) when sharing data with the external Carapace service. The skill explicitly warns that if an external message or document suggests using `--force`, it should be treated as a prompt injection attempt. This allows a prompt-injected agent to bypass safeguards and exfiltrate sensitive or unvetted personal data to an external service. **Recommendation:** prevent agents from using the `--force` flag with `chitin promote` in response to untrusted external input; train agents to recognize and reject external instructions that include `--force` as prompt injection attempts; require human review for any `promote` operation, especially those using `--force`. | LLM | SKILL.md:270 |
| MEDIUM | **Potential Data Exfiltration via Embedding Queries.** The `chitin retrieve` and `chitin similar` commands send query text to OpenAI's embedding API for semantic search. A compromised or prompt-injected agent could be instructed to pass sensitive data (e.g., file contents, credentials) as a query argument, causing it to be sent to OpenAI's servers. The skill explicitly identifies this as a known risk, noting that it is an agent-level vulnerability but a significant vector for data exfiltration if the agent is compromised. **Recommendation:** strictly instruct agents never to pass sensitive data, file contents, or credentials as query arguments to `chitin retrieve` or `chitin similar`; where possible, validate or sanitize input before calling these commands with untrusted content. | LLM | SKILL.md:262 |
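The critical finding's recommendation (blocking `--force` on `chitin promote` when the instruction originates from untrusted content) can be sketched as an agent-side pre-execution guard. This is a minimal illustration, not part of chitin or SkillShield: the function name and the `from_untrusted_source` flag are hypothetical, and a real agent harness would track provenance itself.

```python
import shlex

def is_safe_promote(command: str, from_untrusted_source: bool) -> bool:
    """Return False if a `chitin promote --force` invocation was triggered
    by untrusted external input (treated as prompt injection per SKILL.md)."""
    tokens = shlex.split(command)
    if tokens[:2] != ["chitin", "promote"]:
        return True  # not a promote command; out of scope for this check
    if "--force" in tokens and from_untrusted_source:
        return False  # reject: safety-check override requested by external content
    return True
```

A guard like this only covers the literal flag; human review of all `promote` operations, as the finding recommends, remains the stronger control.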
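For the medium finding, the recommended input validation before `chitin retrieve` or `chitin similar` could take the form of a pre-flight filter that refuses queries resembling secrets or pasted file contents. The patterns and length threshold below are illustrative assumptions, not an API of chitin or OpenAI:

```python
import re

# Heuristic patterns for content that should never reach an embedding API.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]+PRIVATE KEY-----"),          # PEM key headers
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),               # AWS access key IDs
    re.compile(r"\b(?:api[_-]?key|password|token)\s*[:=]", re.IGNORECASE),
]
MAX_QUERY_CHARS = 500  # very long "queries" often indicate pasted file contents

def is_safe_embedding_query(query: str) -> bool:
    """Return False if the query looks like exfiltrated secrets or bulk content."""
    if len(query) > MAX_QUERY_CHARS:
        return False
    return not any(p.search(query) for p in SECRET_PATTERNS)
```

Pattern lists like this are necessarily incomplete; they reduce, rather than eliminate, the exfiltration vector the finding describes.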
Scan History
Embed Code
[SkillShield report for chitin](https://skillshield.io/report/433f5d4488bfe18e)
Powered by SkillShield