Trust Assessment
news-aggregator-skill received a trust score of 76/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via user-derived keywords, Prompt Injection via untrusted news content analysis.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Command Injection via user-derived keywords The skill instructs the agent to execute `python3 scripts/fetch_news.py` and construct the `--keyword` argument based on user input (either through 'Smart Keyword Expansion' or 'Specific Keyword Search'). If the user's input contains shell metacharacters, and the agent does not properly sanitize or escape this input before passing it to the shell command, it could lead to arbitrary command execution. Implement robust input sanitization and escaping for all user-derived arguments passed to shell commands. Use a library function for shell escaping (e.g., `shlex.quote` in Python) or pass arguments as a list to `subprocess.run` to avoid shell interpretation. | LLM | SKILL.md:26 | |
| HIGH | Prompt Injection via untrusted news content analysis The skill explicitly instructs the agent to perform 'Deep Analysis' and 'Deep Interpretation' using its AI capabilities on the `content` field of news articles. This `content` is fetched from external, untrusted sources (enabled by the `--deep` argument to `fetch_news.py`). Malicious actors could embed instructions or manipulative text within news articles, potentially hijacking the agent's subsequent actions or responses. Implement strict input validation and sanitization for all untrusted content before it is processed by the LLM. Consider using a separate, sandboxed LLM instance for processing untrusted inputs, or employ techniques like instruction-following filters, content moderation APIs, or explicit user confirmation for sensitive actions derived from untrusted content. | LLM | SKILL.md:70 |
Scan History
Embed Code
[](https://skillshield.io/report/7a22c365cda75281)
Powered by SkillShield