Security Audit: Luispitik/sinapsis-3.2:skills/investigate-pro
Repository: github.com/Luispitik/sinapsis-3.2

Trust Assessment
Luispitik/sinapsis-3.2:skills/investigate-pro received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 11 findings: 5 critical, 5 high, 1 medium, and 0 low severity. Key findings include "File read + network send exfiltration", "Dangerous tool allowed: Bash", and "Sensitive path access: AI agent config".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 10/100.
Last analyzed on April 9, 2026 (commit f405238d). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown and Behavioral Risk Signals (interactive charts; see the full report)
Security Findings (11)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** (AI agent config/credential file access). Remove access to sensitive files not required by the skill's stated purpose: SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/investigate-pro/SKILL.md:40 |
| CRITICAL | **File read + network send exfiltration** (AI agent config/credential file access). Remove access to sensitive files not required by the skill's stated purpose: SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/investigate-pro/SKILL.md:45 |
| CRITICAL | **File read + network send exfiltration** (AI agent config/credential file access). Remove access to sensitive files not required by the skill's stated purpose: SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/investigate-pro/SKILL.md:172 |
| CRITICAL | **Excessive permissions declared.** The skill declares an overly broad set of permissions, including `Bash`, `Edit`, and `Write`. These grant the AI agent extensive control over the filesystem and command execution, significantly increasing the attack surface. Although the skill attempts to implement "Scope Freeze" rules, the underlying capability remains and is vulnerable to misuse if the agent's instructions are compromised or misinterpreted. Restrict `allowed-tools` to the absolute minimum necessary for the skill's function; consider more granular tools or wrappers that enforce stricter boundaries than raw `Bash`, `Edit`, or `Write`. | LLM | SKILL.md:1 |
| CRITICAL | **Potential command injection via dynamic Bash execution.** The skill instructs the LLM to execute Bash commands in which parts of the command string are generated by the LLM itself (e.g., `{brief}` for `root_cause`). If the LLM is prompted, or misled by misinterpretation, into including malicious content in these dynamic parts, arbitrary commands could run. The instruction to log to `_timeline-log.sh` with a dynamically generated `root_cause` is specifically vulnerable, and instructions such as "grep for error-related keywords" and "Add a `console.log`, read a value, run a specific test" imply further dynamic command/code generation. Strictly sanitize or validate any LLM-generated content used in shell commands; prefer dedicated API calls or safer wrappers over direct Bash execution, and for logging, escape the `{brief}` content or pass it as a separate argument. | LLM | SKILL.md:190 |
| HIGH | **Dangerous tool allowed: Bash.** The skill allows the `Bash` tool without constraints, granting arbitrary command execution. Remove unconstrained shell/exec tools from `allowed-tools`, or add specific command constraints. | Static | skills/investigate-pro/SKILL.md:1 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path `~/.claude/` detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/investigate-pro/SKILL.md:40 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path `~/.claude/` detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/investigate-pro/SKILL.md:45 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path `~/.claude/` detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/investigate-pro/SKILL.md:172 |
| HIGH | **Credential harvesting risk via explicit file reads.** During Phase 2 (Analyze), the skill explicitly instructs the AI agent to read potentially sensitive files such as `.env` and `config` files, which commonly contain API keys, database credentials, and other secrets. While the intent is debugging, exposing these files to the LLM creates a direct harvesting risk if the model is compromised or manipulated. Avoid instructing the LLM to read credential-bearing files directly; instead, provide a mechanism to securely retrieve and mask sensitive information, or use a dedicated tool that redacts credentials before presenting content to the LLM. If direct access is unavoidable, apply robust output filtering and require user confirmation for any sensitive data. | LLM | SKILL.md:100 |
| MEDIUM | **Broad data access increases exfiltration risk.** The skill instructs the LLM to read a wide range of local files, including "ALL affected files completely", `git log`, `git diff`, `CLAUDE.md`, `_instincts-index.json`, and `_instinct-proposals.json`. While useful for debugging, this exposes a significant amount of potentially sensitive project data, which could be exfiltrated or misused if the LLM were compromised. Restrict which files the LLM can access, for example by whitelisting specific file types or directories; review any generated output for sensitive information before presenting or logging it, and consider sandboxing file operations. | LLM | SKILL.md:60 |
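The excessive-permissions and unconstrained-Bash findings both point at the skill's `allowed-tools` declaration. As a hedged sketch (the exact permission syntax depends on the agent runtime, and the specific command patterns below are illustrative, not taken from this skill), a narrower SKILL.md frontmatter might look like:

```yaml
---
name: investigate-pro
description: Root-cause investigation helper
# Instead of bare Bash/Edit/Write, constrain each tool to the
# specific commands the skill actually needs.
allowed-tools: Read, Grep, Bash(git log:*), Bash(git diff:*)
---
```

Scoping `Bash` to read-only `git` subcommands removes the arbitrary-execution capability that the "Scope Freeze" prose rules cannot reliably enforce on their own.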
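For the command-injection finding, the standard mitigation is to never splice LLM-generated text into a shell string unescaped. A minimal Python sketch, assuming a `_timeline-log.sh`-style logging call for illustration:

```python
import shlex
import subprocess

def log_root_cause(brief: str) -> None:
    """Log an LLM-generated root-cause summary without shell injection."""
    # Safest option: pass the untrusted text as its own argv element,
    # so no shell ever parses it and metacharacters stay inert.
    subprocess.run(["./_timeline-log.sh", "root_cause", brief], check=True)

# If building a shell string is unavoidable, quote the untrusted part first.
malicious = "null deref in parser; rm -rf ~ #"
command = f"./_timeline-log.sh root_cause {shlex.quote(malicious)}"
# shlex.quote wraps the payload in single quotes, so `;` and `~` are literal.
```

Either form keeps a hostile `{brief}` value from being interpreted as additional shell commands.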
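For the credential-harvesting finding, a pre-read redaction filter is one option. A minimal sketch, assuming a simple name-based heuristic for `.env`-style files (a real deployment would use a proper secret scanner rather than this regex):

```python
import re

# Match KEY=VALUE lines whose name suggests a secret (heuristic only).
SECRET_LINE = re.compile(
    r"(?im)^(\s*\w*(?:key|secret|token|password)\w*\s*=\s*).+$"
)

def redact_env(text: str) -> str:
    """Mask secret-looking values before the content reaches the LLM."""
    return SECRET_LINE.sub(r"\1[REDACTED]", text)

sample = "API_KEY=sk-abc123\nDEBUG=true\nDB_PASSWORD=hunter2"
print(redact_env(sample))
```

The LLM still sees variable names and non-secret settings, which is usually enough for debugging, while the values themselves never enter the context window.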
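The broad-data-access and sensitive-path findings could both be mitigated with an allow-list check before any file read. A minimal sketch, where the allowed root is a hypothetical example rather than anything declared by this skill:

```python
from pathlib import Path

# Hypothetical policy: only files under the project checkout may be read.
ALLOWED_ROOTS = [Path("/work/repo")]

def is_read_allowed(raw_path: str) -> bool:
    """Reject reads outside whitelisted roots (e.g. ~/.claude/)."""
    p = Path(raw_path).expanduser().resolve()
    return any(p == root or root in p.parents for root in ALLOWED_ROOTS)

print(is_read_allowed("/work/repo/src/app.py"))      # inside the repo
print(is_read_allowed("~/.claude/credentials.json"))  # agent config dir
```

Resolving the path first (`expanduser` plus `resolve`) matters: it defeats `~` expansion tricks and `../` traversal out of the whitelisted directory.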
[Full report](https://skillshield.io/report/43a4457bd654355c)