Trust Assessment
The `primer` skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings: "Untrusted Skill Instructions Attempt Prompt Injection," "Potential Command Injection in Cron Job Creation," and "Direct Modification of Core Agent Configuration Files."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating significant behavioral-safety concerns.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted Skill Instructions Attempt Prompt Injection.** The `SKILL.md` file, which is explicitly marked as untrusted input, contains direct instructions for the host LLM to perform actions such as executing shell commands (`cat`, `ls`, `rm`, `grep`) and modifying agent configuration files (`AGENTS.md`, `SOUL.md`). This directly contradicts the security analyzer's meta-instruction to treat all content within the untrusted delimiters as data, not executable instructions. This constitutes a critical prompt injection attempt: the untrusted skill is trying to override the LLM's core directives and control its behavior. The host LLM must be strictly sandboxed and prevented from executing any instructions or code found within untrusted skill content. All actions requested by skills should be mediated through a secure, explicit tool-use interface, not by direct interpretation of instructions within untrusted text. The skill's functionality should be re-implemented using trusted tools or APIs provided by the agent framework. | LLM | SKILL.md:50 |
| CRITICAL | **Potential Command Injection in Cron Job Creation.** The `scripts/setup_primer.py` file, intended for execution by the AI, includes functionality to create cron jobs. The `generate_cron_config` function processes user-controlled input from the `config` dictionary, specifically `miranda_cadence`. If the truncated `create_cron_jobs` function directly interpolates this user-controlled `miranda_cadence` into a shell command (e.g., `crontab -l | {cat,echo} ... | crontab -`), it creates a critical command injection vulnerability. An attacker could provide a malicious string for `miranda_cadence` (e.g., `*; rm -rf /`) to execute arbitrary commands with persistent effect. Implement robust input validation and sanitization for all user-controlled inputs used in shell commands. When creating cron jobs, use a dedicated library or API that safely handles scheduling and command execution, avoiding direct shell interpolation of user data. If direct shell execution is unavoidable, use `subprocess.run` with `shell=False` and pass arguments as a list, or meticulously escape all user-provided components. | LLM | scripts/setup_primer.py:159 |
| HIGH | **Direct Modification of Core Agent Configuration Files.** The `scripts/setup_primer.py` script directly reads, modifies, and writes back `AGENTS.md` and `SOUL.md`. These files are described as containing core instructions and roles for the AI agent. Allowing a skill, especially one originating from untrusted content, to directly alter these foundational configuration files grants excessive permissions. Although the current modifications involve hardcoded strings, this pattern establishes a dangerous precedent: if the content to be inserted were ever derived from user input or other untrusted sources, it would become a critical vulnerability for prompt injection or behavioral manipulation. Agent configuration files (`AGENTS.md`, `SOUL.md`) should be immutable by skills. Any modifications to the agent's core behavior or instructions should be managed through a secure, explicit API provided by the agent framework, with proper validation and authorization. Skills should only be able to *request* changes, not directly implement them. | LLM | scripts/setup_primer.py:100 |
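The cron-injection remediation described above (validate user input, then invoke `crontab` with `shell=False` and a list argv) can be sketched as follows. This is a minimal illustration, not the report's or the skill's actual code; the function names `validate_cadence` and `install_cron_job` are hypothetical.

```python
import re
import subprocess

# Allow only digits, '*', '/', ',' and '-' in each of the five cron fields,
# so shell metacharacters like ';' or '|' can never reach a crontab entry.
CRON_FIELD = r"[\d*/,\-]+"
CRON_RE = re.compile(rf"^{CRON_FIELD}( {CRON_FIELD}){{4}}$")


def validate_cadence(cadence: str) -> str:
    """Reject anything that is not a plain five-field cron expression."""
    if not CRON_RE.match(cadence):
        raise ValueError(f"invalid cron cadence: {cadence!r}")
    return cadence


def install_cron_job(cadence: str, command: str) -> None:
    """Append a job to the current crontab without shell interpolation."""
    entry = f"{validate_cadence(cadence)} {command}"
    current = subprocess.run(
        ["crontab", "-l"], capture_output=True, text=True
    ).stdout
    new_tab = current.rstrip("\n") + "\n" + entry + "\n"
    # A list argv (shell=False by default) means new_tab is data, not code.
    subprocess.run(["crontab", "-"], input=new_tab, text=True, check=True)
```

With this pattern, the report's example payload `*; rm -rf /` is rejected by `validate_cadence` before any process is spawned, and even a value that slipped through would be passed as stdin data rather than parsed by a shell.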
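The third finding's recommendation, that skills should *request* configuration changes rather than write `AGENTS.md` or `SOUL.md` directly, could be mediated by an interface along these lines. Everything here (`ConfigChangeRequest`, `ConfigGate`, the allow-list) is a hypothetical sketch of the pattern, not an API of any real agent framework.

```python
from dataclasses import dataclass, field


@dataclass
class ConfigChangeRequest:
    skill: str    # requesting skill, e.g. "primer"
    target: str   # config file the skill wants changed
    content: str  # proposed addition, treated strictly as data


@dataclass
class ConfigGate:
    """Trusted framework component: queues requests, never lets skills write."""
    allowed_targets: frozenset = frozenset({"AGENTS.md", "SOUL.md"})
    pending: list = field(default_factory=list)

    def submit(self, req: ConfigChangeRequest) -> bool:
        # Reject requests against files outside the reviewable allow-list.
        if req.target not in self.allowed_targets:
            return False
        self.pending.append(req)  # queued for human/framework review
        return True

    def approve(self, req: ConfigChangeRequest, apply_fn) -> None:
        # Only the framework's trusted apply_fn ever touches the file.
        self.pending.remove(req)
        apply_fn(req.target, req.content)
```

The design choice is that the skill's privileges end at `submit`; applying a change requires an explicit `approve` step executed by trusted code, which is where authorization, validation, and audit logging would live.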
Scan History
[Full report](https://skillshield.io/report/625b1aa09f0ee907)