Trust Assessment
rss-ai-reader received a trust score of 61/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 0 high, 2 medium, and 0 low severity. Key findings include Sensitive environment variable access: $ANTHROPIC_API_KEY, Untrusted RSS content can lead to Host LLM Prompt Injection, Potential for unpinned dependencies in requirements.txt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted RSS content can lead to Host LLM Prompt Injection The skill's core functionality involves processing untrusted content from external RSS feeds and using an internal LLM to generate summaries. If an attacker controls an RSS feed, they can inject malicious instructions into the feed's content (e.g., article titles or bodies). The internal LLM, when processing this content, could be manipulated to generate a summary that contains these malicious instructions. When this manipulated summary is returned to the host LLM (the AI agent invoking the skill), it could lead to prompt injection, allowing the attacker to manipulate the host LLM's behavior. Implement robust input sanitization and filtering for all RSS feed content before it is passed to the internal LLM. Employ strict prompt engineering techniques (e.g., system prompts, few-shot examples, output format constraints) and output filtering/validation for the summarization LLM to prevent it from generating or relaying malicious instructions. Consider using a dedicated, sandboxed LLM for summarization with strong guardrails. | LLM | SKILL.md:8 | |
| MEDIUM | Sensitive environment variable access: $ANTHROPIC_API_KEY Access to sensitive environment variable '$ANTHROPIC_API_KEY' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/benzema216/rss-ai-reader/SKILL.md:47 | |
| MEDIUM | Potential for unpinned dependencies in requirements.txt The skill's installation instructions include `pip install -r requirements.txt`. Without a lock file or explicitly pinned versions for all dependencies in `requirements.txt`, there is a risk of supply chain attacks. Future versions of dependencies could introduce vulnerabilities, breaking changes, or even malicious code. An attacker could also perform dependency confusion attacks if package names are not carefully chosen or if private registries are not used. Pin all dependencies to exact versions in `requirements.txt` (e.g., `package==1.2.3`). Consider using a dependency lock file (e.g., `Pipfile.lock` with Pipenv or `poetry.lock` with Poetry) to ensure deterministic builds. Regularly audit dependencies for known vulnerabilities using tools like `pip-audit` or `Snyk`. | LLM | SKILL.md:18 |
Scan History
Embed Code
[](https://skillshield.io/report/442df2e85ce547e5)
Powered by SkillShield