Trust Assessment
The `ontology` skill received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Shell Command Injection via Skill Arguments and Insecure Storage of Credentials Due to Missing Schema Enforcement.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Shell Command Injection via Skill Arguments.** The skill's `SKILL.md` demonstrates invocation of `scripts/ontology.py` using `python3` with arguments (`--type`, `--props`, `--where`, etc.) that are expected to be derived from LLM output. If the LLM generates shell metacharacters (e.g., `;`, `|`, `&`, `$()`, `` ` ``) within these arguments, they could be interpreted and executed by the shell before being passed to the Python script, allowing arbitrary command execution on the host system with the privileges of the agent. **Remediation:** When invoking external scripts with LLM-generated arguments, use an execution method that avoids shell interpretation. In Python, this typically means calling `subprocess.run()` with `shell=False` and passing arguments as a list (e.g., `subprocess.run(['python3', 'scripts/ontology.py', 'create', '--type', type_name, ...])`). Additionally, rigorously sanitize or escape all LLM-generated arguments before passing them to any shell command. | LLM | SKILL.md:79 |
| MEDIUM | **Insecure Storage of Credentials Due to Missing Schema Enforcement.** The `SKILL.md` explicitly defines a `Credential` type with `forbidden_properties: [password, secret, token]` to prevent direct storage of sensitive data, advocating `secret_ref` indirection instead. However, the `scripts/ontology.py` functions (`create_entity`, `update_entity`) store all provided `properties` without enforcing these schema-level constraints. This allows an attacker or a misconfigured LLM to store plaintext credentials (e.g., `password`, `token`) directly in `memory/ontology/graph.jsonl`, bypassing the intended security measure. **Remediation:** Implement schema validation within `scripts/ontology.py` (e.g., in the `create_entity` and `update_entity` functions) to enforce `forbidden_properties` and other constraints defined in `memory/ontology/schema.yaml` before writing to `graph.jsonl`. Ensure that sensitive properties are rejected, masked, or transformed into references as per the schema definition. | LLM | scripts/ontology.py:80 |
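The high-severity finding's remediation can be sketched as follows. This is a minimal illustration, not code from the skill itself: the `run_ontology` helper and its argument names are hypothetical, and it assumes the `scripts/ontology.py` path reported above. The demonstration shows that list-style argv with `shell=False` (the `subprocess.run` default) delivers shell metacharacters as literal strings rather than executing them.

```python
import subprocess
import sys

def run_ontology(*args: str) -> subprocess.CompletedProcess:
    """Invoke the ontology script without a shell.

    Because argv is a list and shell=False (the default), no shell ever
    parses the arguments: ';', '|', '$()' and backticks arrive verbatim.
    """
    return subprocess.run(
        ["python3", "scripts/ontology.py", *args],
        capture_output=True, text=True,
    )

# Demonstration with a stand-in script: an argument laced with shell
# metacharacters is received as one literal string, nothing is executed.
malicious = "Server; rm -rf ~"  # hypothetical hostile LLM output
proc = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", malicious],
    capture_output=True, text=True,
)
print(proc.stdout.strip())
```

By contrast, a format string handed to `shell=True` (or to `os.system`) would let the `;` terminate the intended command and start a new one, which is exactly the injection path the finding describes.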