Trust Assessment
knowledge-graph received a trust score of 95/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. The key finding: a user-controlled fact is written to summary.md without LLM-specific sanitization.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | User-controlled fact written to `summary.md` without LLM-specific sanitization | LLM | scripts/kg.py:158 |

The `kg.py` script accepts a user-provided `--fact` argument, which is written directly into `items.json` and later incorporated into `summary.md` by the `build_summary` function. If `summary.md` is subsequently used as context for another language model, a malicious fact containing prompt-injection instructions could manipulate that model; the script applies no sanitization or escaping to the fact content to mitigate this risk.

Recommended remediation: sanitize or escape the fact content before writing it to `summary.md`, specifically with LLM consumption in mind. This might involve removing or escaping characters commonly used in prompt injection (e.g., backticks or known injection keywords), or using a dedicated LLM-safe Markdown renderer. Alternatively, clearly document that `summary.md` should not be fed directly to an LLM without further processing.
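One possible shape of the recommended mitigation is sketched below. This is a hypothetical illustration, not code from `kg.py`: the function name `sanitize_fact`, the length cap, and the keyword list are assumptions chosen for the example.

```python
import re

# Phrases commonly used in prompt-injection attempts (illustrative, not exhaustive).
_SUSPICIOUS = re.compile(
    r"(ignore (all|previous|prior) instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize_fact(fact: str, max_len: int = 500) -> str:
    """Neutralize markup and phrasing commonly abused for prompt injection
    before a user-supplied fact is written into summary.md."""
    fact = fact[:max_len]                          # bound the payload size
    fact = fact.replace("`", "'")                  # neutralize code fences / inline code
    fact = re.sub(r"[\x00-\x1f\x7f]", " ", fact)   # drop control characters
    fact = _SUSPICIOUS.sub("[redacted]", fact)     # flag known injection phrases
    return fact.strip()
```

A keyword blocklist like this is easy to bypass, so it should be treated as defense in depth; the report's alternative suggestion of documenting that `summary.md` is untrusted input for any downstream LLM remains the more robust control.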
Full report: [skillshield.io/report/1ba55f56efe37e9c](https://skillshield.io/report/1ba55f56efe37e9c)
Powered by SkillShield