Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/tech-debt-analyzer
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/tech-debt-analyzer received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. Key findings include unsafe deserialization / dynamic eval, a dangerous `__import__()` call, and potential data exfiltration via arbitrary file reading in detect_code_smells.py.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsafe deserialization / dynamic eval** (Python builtins/import manipulation). Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | dist/skills/tech-debt-analyzer/scripts/detect_code_smells.py:311 |
| HIGH | **Dangerous call: `__import__()`**. A call to `__import__()` was detected in function `format_markdown_report`; this can execute arbitrary code. Avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | dist/skills/tech-debt-analyzer/scripts/detect_code_smells.py:311 |
| HIGH | **Potential data exfiltration via arbitrary file reading in detect_code_smells.py**. The script takes a source directory path (`src_dir`) as a command-line argument (`sys.argv[1]`), then uses `Path(src_dir).rglob('*')` to recursively iterate through files and `file_path.read_text()` to read their content. The `SKILL.md` documentation explicitly describes running the script with a `src` argument. If the skill's `index.js` (currently a placeholder with a `TODO` comment) were to pass untrusted, user-controlled input for `src_dir`, an attacker could specify arbitrary directories (e.g., `/`, `/etc`, `/root`) and read sensitive files from the system. This is a significant data exfiltration risk and indicates excessive filesystem permissions for the skill's intended operation. The filtering in `_should_analyze` is not robust enough to prevent reading sensitive files outside the intended project scope if `src_dir` is manipulated. Remediation: implement strict validation and sanitization for all file path inputs; restrict `src_dir` to a specific, sandboxed directory within the skill's intended scope; avoid passing user-controlled input directly to filesystem traversal functions; run analysis of user-provided code in a secure, isolated environment with minimal permissions; and consider a dedicated file access API that enforces strict boundaries rather than direct path manipulation. | LLM | scripts/detect_code_smells.py:20 |
| HIGH | **Potential data exfiltration via arbitrary file reading in analyze_dependencies.py**. The script takes a `package.json` file path (`package_json_path`) as a command-line argument (`sys.argv[1]`) and reads it with `open(self.package_json_path, 'r')`. The `SKILL.md` documentation explicitly describes running the script with a `package.json` argument. If the skill's `index.js` (currently a placeholder with a `TODO` comment) were to pass untrusted, user-controlled input for `package_json_path`, an attacker could specify arbitrary file paths (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`, `/proc/self/environ`) and read sensitive files from the system. Although the script expects JSON content, reading the file at all is a data exfiltration vulnerability, and it indicates excessive filesystem permissions for the skill's intended operation. Remediation: implement strict validation and sanitization for all file path inputs; restrict `package_json_path` to a specific, sandboxed directory within the skill's intended scope; avoid passing user-controlled input directly to file read operations; run analysis of user-provided package files in a secure, isolated environment with minimal permissions; and consider a dedicated file access API that enforces strict boundaries rather than direct path manipulation. | LLM | scripts/analyze_dependencies.py:20 |
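The path-confinement remediation suggested for the two file-reading findings can be sketched as a small helper that resolves a user-supplied path and rejects anything that escapes an allowed root. This is a minimal illustration, not code from the audited skill; the function name and directory layout are assumptions.

```python
from pathlib import Path


def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve user_path relative to base_dir, rejecting escapes.

    Resolving first (which also collapses '..' and follows symlinks)
    and then checking containment prevents traversal tricks such as
    'src/../../etc/passwd'.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    try:
        # relative_to raises ValueError when candidate is outside base
        candidate.relative_to(base)
    except ValueError:
        raise ValueError(f"path escapes allowed root: {user_path!r}")
    return candidate
```

With this check in place, a call such as `resolve_within(project_root, sys.argv[1])` would accept `"src"` but raise on `"../../etc"`, limiting the scripts' reach to the intended project directory even when `src_dir` is attacker-influenced.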
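The report does not show how `format_markdown_report` actually uses `__import__()`, but a common safe replacement for dynamic import or attribute resolution is an explicit allow-list of callables, so that only pre-approved code paths can ever run. The sketch below is hypothetical (all names invented) and only illustrates the pattern:

```python
import json

# Explicit allow-list of output formatters. Unlike __import__() or
# getattr() driven by user input, nothing outside this dict is reachable.
FORMATTERS = {
    "json": lambda findings: json.dumps(findings, indent=2),
    "markdown": lambda findings: "\n".join(f"- {f}" for f in findings),
}


def format_report(findings, fmt="markdown"):
    """Render findings with an allow-listed formatter; reject unknown formats."""
    try:
        formatter = FORMATTERS[fmt]
    except KeyError:
        raise ValueError(f"unsupported format: {fmt}")
    return formatter(findings)
```

Because the dispatch table is closed, a malicious `fmt` value can only raise `ValueError` rather than load arbitrary modules.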
Powered by SkillShield