Trust Assessment
content-recycler received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 0 medium, and 1 low severity. Key findings include an arbitrary file read via the `--input` argument, an arbitrary file write via the `--output` and `--output-dir` arguments, and unrestricted filesystem access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 23/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary File Read via `--input` argument.** The skill's Python scripts (`generate_calendar.py`, `recycle_content.py`, `to_linkedin_post.py`, `to_twitter_thread.py`) directly use the user-provided `--input` argument as a file path without validation. This allows an attacker to read arbitrary files from the system where the skill is executed, potentially leading to exfiltration of sensitive information (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, API keys). **Remediation:** Implement strict validation and sanitization for file paths provided via command-line arguments. Restrict file access to a designated sandbox directory. If reading arbitrary files is necessary, ensure the skill runs in a highly restricted environment with minimal filesystem access. | LLM | scripts/generate_calendar.py:39 |
| CRITICAL | **Arbitrary File Write via `--output` and `--output-dir` arguments.** The skill's Python scripts (`generate_calendar.py`, `recycle_content.py`, `to_linkedin_post.py`, `to_twitter_thread.py`) allow writing to arbitrary file paths specified by the `--output` or `--output-dir` arguments. An attacker can use path traversal (e.g., `../../`) or absolute paths to write to sensitive locations or overwrite existing files, leading to data corruption, denial of service, or potentially remote code execution if system configuration files are targeted. **Remediation:** Implement strict validation and sanitization for file paths provided via command-line arguments. Restrict file writing to a designated sandbox directory. Reject path traversal sequences (e.g., `..`) in output paths. | LLM | scripts/generate_calendar.py:52 |
| HIGH | **Unrestricted Filesystem Access.** The skill's design inherently allows reading from and writing to arbitrary filesystem locations based on user-provided paths (`--input`, `--output`, `--output-dir`). While the immediate exploits are covered by the arbitrary file read/write findings, the underlying issue is that the skill operates with broad filesystem permissions and no sandboxing or path validation, which amplifies the impact of any file I/O vulnerability. **Remediation:** Implement a robust sandboxing mechanism for skill execution. Restrict file I/O to a predefined, isolated directory, and strictly validate and resolve all user-provided paths within that sandbox. | LLM | scripts/recycle_content.py:20 |
| LOW | **Generated Content May Lead to Downstream Prompt Injection.** The skill generates various forms of content (e.g., social media posts, email teasers) by directly embedding user-provided input (`content`) into structured text. If this output is subsequently fed into another LLM or a system that interprets text (e.g., a social media platform's AI moderation, an email client's smart-reply feature), a malicious user could craft input that performs prompt injection against those downstream systems. The skill itself does not interpret the generated content, but its output can be weaponized. **Remediation:** Advise users to review generated content before publishing it or feeding it to other AI systems. Apply content filtering or sanitization to the output if it is intended for further automated processing by sensitive systems. | LLM | scripts/recycle_content.py:30 |
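The first three findings share one remediation: resolve every user-supplied path inside a designated sandbox directory and reject anything that escapes it. A minimal sketch of that check (the function name and sandbox location are illustrative, not part of the skill's code; requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path


def resolve_in_sandbox(user_path: str, sandbox: Path) -> Path:
    """Resolve a user-supplied path and ensure it stays inside the sandbox.

    Raises ValueError if the path (via `..` traversal, symlinks, or an
    absolute path) would resolve outside the sandbox directory.
    """
    sandbox = sandbox.resolve()
    # Joining an absolute user path replaces the sandbox prefix entirely,
    # so the is_relative_to check below also catches absolute paths.
    candidate = (sandbox / user_path).resolve()
    if not candidate.is_relative_to(sandbox):
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

Applied to the scripts above, the value of `--input`, `--output`, or `--output-dir` would pass through such a check before any `open()` call, so `--input ../../etc/passwd` fails fast instead of reading outside the working directory.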
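For the low-severity downstream prompt-injection finding, the report recommends reviewing or filtering generated output before it reaches other automated systems. One possible (deliberately naive) heuristic is to flag instruction-like phrases in the generated text; the phrase list here is a hypothetical illustration and nowhere near exhaustive, so it supplements rather than replaces human review:

```python
import re

# Illustrative patterns only: real injection attempts vary widely and
# this list will miss many of them.
_SUSPECT = re.compile(
    r"(?i)\b(ignore (?:all|previous|prior) instructions"
    r"|disregard (?:the )?system prompt"
    r"|you are now)\b"
)


def flag_injection_risk(text: str) -> bool:
    """Return True if the text contains instruction-like phrases that
    warrant manual review before automated downstream processing."""
    return bool(_SUSPECT.search(text))
```

A pipeline that forwards the skill's output to another LLM could route flagged items to a human queue instead of processing them automatically.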