Trust Assessment
continuity received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via LLM Interaction, Arbitrary File Read via Command Line Argument, Data Exfiltration Chain: Arbitrary File Read + LLM Prompt Injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Data Exfiltration Chain: Arbitrary File Read + LLM Prompt Injection This finding combines the arbitrary file read vulnerability (SS-FILE-001) with the LLM prompt injection vulnerability (SS-LLM-001). An attacker could exploit SS-FILE-001 to read a sensitive file (e.g., `/etc/passwd`, API keys, configuration files) by providing its path via `--session`. The content of this file would then be passed as `session_content` to the `analyze_session` function, which sends it to an LLM. By crafting a malicious prompt within the `session_content` (or by manipulating the LLM's prompt if the file content is directly included), the attacker could instruct the LLM to reveal the sensitive file's contents, leading to critical data exfiltration. Address both the arbitrary file read vulnerability (SS-FILE-001) and the LLM prompt injection vulnerability (SS-LLM-001) comprehensively. Ensure that file access is strictly controlled and that any data passed to an LLM, especially from untrusted sources, is thoroughly sanitized and processed within a secure, sandboxed environment with robust guardrails against information disclosure. | LLM | scripts/continuity.py:106 | |
| HIGH | Prompt Injection via LLM Interaction The `analyze_session` function is explicitly designed to send `session_content` (which can originate from untrusted user input, potentially via `args.session`) to an LLM for analysis. Without robust input sanitization and LLM guardrails, malicious content within `session_content` could manipulate the LLM's behavior, leading to prompt injection attacks, unintended actions, or information disclosure. Implement strict input sanitization and validation for `session_content` before it is passed to the LLM. Employ LLM-specific guardrails, such as content filtering, output validation, and sandboxing, to mitigate prompt injection risks. Consider using a dedicated, hardened LLM endpoint for processing untrusted user input. | LLM | scripts/continuity.py:98 | |
| HIGH | Arbitrary File Read via Command Line Argument The `cmd_reflect` function allows specifying an arbitrary file path via the `--session` command-line argument. If this skill is invoked by an automated system (e.g., an LLM) where the `--session` argument can be influenced by untrusted user input, an attacker could read any file accessible to the skill's execution context. This could lead to unauthorized access to sensitive system files or user data. Restrict the `--session` argument to only accept file paths within a predefined, secure directory. Implement path validation to prevent directory traversal attacks (e.g., `../`). If the skill is intended to be called by an LLM, ensure the LLM cannot inject arbitrary file paths into this argument. | LLM | scripts/continuity.py:106 |
Scan History
Embed Code
[](https://skillshield.io/report/d4164ae166f519b6)
Powered by SkillShield