Trust Assessment
cirf received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include "Untrusted SKILL.md attempts to instruct the host LLM" and "Untrusted skill instructs AI to read and write arbitrary files".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted SKILL.md attempts to instruct the host LLM.** The `SKILL.md` file, which is explicitly marked as untrusted input, contains direct instructions for the AI agent (e.g., 'This file contains complete instructions for AI agents...', 'Follow these file instructions precisely.'). If the host LLM follows these instructions, it is being manipulated by untrusted content, which is a direct prompt injection. This violates the core security principle of not executing commands from untrusted sources. The host LLM must be strictly sandboxed from executing instructions found within untrusted skill definitions. The skill definition should be treated as descriptive metadata, not executable commands for the host LLM. The AI should not interpret instructions within the `SKILL.md` as commands to itself. | LLM | SKILL.md:5 |
| HIGH | **Untrusted skill instructs AI to read and write arbitrary files.** The untrusted `SKILL.md` instructs the AI to read various files (e.g., `framework/core-config.yaml`, `framework/agents/*.yaml`, `workspaces/{project-id}/workspace.yaml`, `documents/`, `outputs/`) and to perform file system operations like creating directories and copying files. If the host LLM follows these instructions from untrusted input, it could be coerced into reading sensitive configuration files, user data, or writing to arbitrary locations, leading to data exfiltration or unauthorized modification of the file system. The paths are dynamic (`{project-id}`, `{agent-id}`, `{workflow-id}`), increasing the risk of path traversal if not properly sanitized by the underlying tools. Restrict the AI's file system access to only explicitly allowed and sandboxed directories. Implement strict input validation and sanitization for all file paths derived from untrusted input to prevent path traversal vulnerabilities. Ensure that the AI's file operations are mediated by a secure tool that enforces these restrictions. | LLM | SKILL.md:60 |
[Full report](https://skillshield.io/report/5b35e5f66a2be546)