Security Audit
`ailabs-393/ai-labs-claude-skills:packages/skills/tech-debt-analyzer`
Source: github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/tech-debt-analyzer received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. Key findings include unsafe deserialization / dynamic eval, a dangerous `__import__()` call, and potential command injection through LLM-orchestrated script execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Each layer scored 70 or above individually, although the high-severity findings below still place the skill in the Untrusted category overall.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsafe deserialization / dynamic eval** (Python builtins/import manipulation). Remove obfuscated code-execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `packages/skills/tech-debt-analyzer/scripts/detect_code_smells.py:311` |
| HIGH | **Dangerous call: `__import__()`**. A call to `__import__()` was detected in the function `format_markdown_report`; this can execute arbitrary code. Avoid dangerous functions such as `exec`, `eval`, and `os.system`, and use safer alternatives. | Static | `packages/skills/tech-debt-analyzer/scripts/detect_code_smells.py:311` |
| HIGH | **Potential command injection through LLM-orchestrated script execution**. `SKILL.md` describes a workflow that executes Python scripts via `python3 scripts/detect_code_smells.py [src-dir]` and `python3 scripts/analyze_dependencies.py [package.json-path]`. If the LLM constructs the shell command string by directly embedding user-provided input for `[src-dir]` or `[package.json-path]` without sanitization, a malicious user can inject arbitrary shell commands (e.g., `src; rm -rf /`), with the LLM acting as an intermediary that executes untrusted commands on the host. The orchestration layer should strictly validate and sanitize user-provided arguments before constructing shell commands; safer still, the skill's `index.js` could invoke the Python scripts directly (e.g., via `child_process.spawn` in Node.js, passing arguments as an array) rather than relying on the LLM to build a raw shell string. | LLM | `SKILL.md:20` |
| HIGH | **Excessive file-system access and potential data exfiltration via `detect_code_smells.py`**. The script recursively reads and analyzes files from a user-specified source directory (`src-dir`) using `self.src_dir.rglob('*')`. If the LLM is prompted to invoke it with a broad or sensitive path (e.g., `/`, `../`, `/etc`, `/home/user`), the script will attempt to read every file in scope. Although it primarily extracts metadata (file paths, line counts, function names, debt markers), exposing the structure and metadata of sensitive directories is a form of data exfiltration and an excessive permission, and the resulting report returns that information to the LLM. Restrict the `src-dir` argument to expected project directories (e.g., `src`, `lib`), reject paths that allow arbitrary file-system traversal, consider running the script in a sandboxed environment with limited file-system access, and ensure the LLM's invocation logic only passes project-specific, non-sensitive paths. | LLM | `scripts/detect_code_smells.py:30` |
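The command-injection finding recommends passing arguments as an array rather than interpolating them into a shell string. A minimal Python sketch of that pattern (the wrapper function `run_code_smell_scan` is hypothetical, not part of the skill; the finding's own suggestion uses Node's `child_process.spawn`, shown here with the equivalent `subprocess.run` list form):

```python
import subprocess

def run_code_smell_scan(src_dir: str) -> str:
    # Passing the command as a list bypasses the shell entirely: a value
    # like "src; rm -rf /" reaches the script as one literal argument
    # (sys.argv[1]) instead of being parsed as two shell commands.
    result = subprocess.run(
        ["python3", "scripts/detect_code_smells.py", src_dir],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

The unsafe counterpart would be `subprocess.run(f"python3 scripts/detect_code_smells.py {src_dir}", shell=True)`, where the injected `;` terminates the intended command.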
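The file-system-access finding asks that `src-dir` be restricted to expected project directories. A minimal validation sketch, assuming a fixed allowlist of scan roots (the function name `validate_src_dir` and the `ALLOWED_ROOTS` tuple are illustrative, not part of the audited skill):

```python
from pathlib import Path

# Assumption: only these project subdirectories are legitimate scan targets.
ALLOWED_ROOTS = ("src", "lib")

def validate_src_dir(src_dir: str, project_root: Path) -> Path:
    """Resolve src_dir and reject anything outside the allowed project roots."""
    root = project_root.resolve()
    # resolve() collapses "../" segments, so traversal attempts like
    # "../etc" or absolute paths like "/etc" fail the containment check.
    candidate = (root / src_dir).resolve()
    for allowed in ALLOWED_ROOTS:
        allowed_path = root / allowed
        if candidate == allowed_path or allowed_path in candidate.parents:
            return candidate
    raise ValueError(f"Refusing to scan {src_dir!r}: outside allowed directories")
```

Containment is checked against the resolved path, not the raw string, so `src/../../etc` is rejected even though it begins with an allowed prefix.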
[Full report](https://skillshield.io/report/a203a9dac9f58bf9)
Powered by SkillShield