Trust Assessment
sechecker received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 3 critical, 2 high, 0 medium, and 0 low severity. Key findings include "File read + network send exfiltration," "Sensitive path access: AI agent config," and "Sensitive Data Exfiltration by Design."
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, and static_code_analysis. The manifest_analysis layer scored lowest at 40/100.
Last analyzed on February 11, 2026 (commit dddd7c2c). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | File read + network send exfiltration (.env file access) | Unknown | scripts/sechecker.py:29 |
| CRITICAL | File read + network send exfiltration (AI agent config/credential file access) | Unknown | SKILL.md:37 |
| CRITICAL | Sensitive Data Exfiltration by Design | Unknown | scripts/sechecker.py:190 |
| HIGH | Sensitive path access: AI agent config | Unknown | SKILL.md:37 |
| HIGH | Potential Command Injection via User-Controlled Path | Unknown | SKILL.md:30 |

Finding Details
1. File read + network send exfiltration, .env file access (CRITICAL, scripts/sechecker.py:29). Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. A sensitive-path guard is sketched after this list.
2. File read + network send exfiltration, AI agent config/credential file access (CRITICAL, SKILL.md:37). The same remediation applies: restrict the skill to the files its declared functionality actually requires.
3. Sensitive Data Exfiltration by Design (CRITICAL, scripts/sechecker.py:190). The skill is designed to detect and report sensitive information (credentials, API keys, tokens, etc.) found in user-provided files. `scripts/sechecker.py` explicitly includes the full matched sensitive content and its surrounding context in the output, which is then returned to the LLM. The LLM will therefore receive, process, and potentially store highly sensitive user data, a significant privacy and security risk. Sensitive data should never be returned directly to the LLM. Instead, the skill should report only the presence of sensitive data, its type, and its location (file path and line number), without revealing the actual values: for example, 'Found a HIGH severity API key at path/to/file.py:42' rather than 'Found: api_key: "sk-1234567890abcdef"'. If the user needs the actual value, it should be displayed directly to the user in a secure client-side environment, not via the LLM. A redacted-reporting sketch follows this list.
4. Sensitive path access: AI agent config (HIGH, SKILL.md:37). Access to an AI agent config path was detected: '~/.claude/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared.
5. Potential Command Injection via User-Controlled Path (HIGH, SKILL.md:30). SKILL.md indicates that the skill is executed via a shell command, `python ~/.claude/skills/sechecker/scripts/sechecker.py <target_path>`, where `<target_path>` is user-controlled. If the LLM's execution environment constructs this command by concatenating the user-provided path into a shell string (e.g., using `subprocess.run(..., shell=True)`), a malicious user could inject arbitrary shell commands: a `target_path` such as `/tmp/myproject; rm -rf /` would lead to arbitrary code execution. The Python script itself handles its argument safely via `argparse` and `pathlib.Path`, but the invocation described in the manifest is vulnerable if the LLM runtime does not sanitize it. User-provided arguments must be properly escaped or passed as distinct arguments to the command interpreter, e.g. `subprocess.run(['python', 'script.py', user_input_path])` instead of `subprocess.run(f'python script.py {user_input_path}', shell=True)`. The skill developer should also add input validation for `target_path` within the script, though the primary defense belongs at the execution layer. A safe-invocation sketch appears after this list.
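Findings 1, 2, and 4 all point at reads of sensitive locations. As a minimal sketch of the recommended remediation, a skill could refuse such paths up front; the denylist entries and helper names below are illustrative assumptions, not part of sechecker or SkillShield.

```python
from pathlib import Path

# Illustrative denylist; a real guard would be driven by the skill's declared scope.
SENSITIVE_PARTS = {".ssh", ".aws", ".claude"}   # directory names to refuse
SENSITIVE_NAMES = {".env", "credentials.json"}  # file names to refuse

def is_sensitive(path: Path) -> bool:
    """True if the resolved path touches a known-sensitive location."""
    resolved = path.expanduser().resolve()
    return bool(SENSITIVE_PARTS.intersection(resolved.parts)) or resolved.name in SENSITIVE_NAMES

def safe_read(path: Path) -> str:
    """Read a file only after the sensitive-path check passes."""
    if is_sensitive(path):
        raise PermissionError(f"refusing to read sensitive path: {path}")
    return path.read_text(errors="ignore")
```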
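Finding 3's remediation, reporting only the type, severity, and location of a match, might look like the following sketch. The single regex and `scan_file` helper are hypothetical stand-ins for sechecker's actual rule set.

```python
import re
from pathlib import Path

# One illustrative pattern; sechecker's real rules are assumed, not reproduced.
API_KEY_RE = re.compile(r"sk-[A-Za-z0-9]{16,}")

def scan_file(path: Path) -> list[str]:
    """Return redacted finding summaries: type and location only, never the value."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        if API_KEY_RE.search(line):
            # The matched secret is deliberately omitted from the report.
            findings.append(f"HIGH: possible API key at {path}:{lineno} [value redacted]")
    return findings
```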
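For finding 5, the list-form invocation the report recommends keeps the user-controlled path out of shell parsing entirely. The wrapper below is a hypothetical illustration of that pattern.

```python
import subprocess
from pathlib import Path

def run_sechecker(target_path: str) -> subprocess.CompletedProcess:
    """Invoke sechecker with the user path as a discrete argv element."""
    script = Path.home() / ".claude/skills/sechecker/scripts/sechecker.py"
    # Unsafe: subprocess.run(f"python {script} {target_path}", shell=True)
    # Safe: argv list form, so '/tmp/x; rm -rf /' stays one literal argument.
    return subprocess.run(
        ["python", str(script), target_path],
        capture_output=True,
        text=True,
        check=False,
    )
```

Because `shell=False` is the default here, a value like `/tmp/myproject; rm -rf /` reaches the script as a single argument and is never interpreted by a shell.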