Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/test-specialist
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/test-specialist received a trust score of 63/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include unsafe deserialization / dynamic eval, potential data exfiltration via arbitrary file read, and excessive permissions via arbitrary directory traversal/scanning.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Unsafe deserialization / dynamic eval: decryption followed by code execution. Recommendation: remove obfuscated code-execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | dist/skills/test-specialist/scripts/analyze_coverage.py:5 |
| HIGH | Potential data exfiltration via arbitrary file read: the `analyze_coverage.py` script takes a `coverage-file` path as an argument and reads its content with `json.load()`. If a malicious actor can control the `coverage-file` argument, they could instruct the LLM to read arbitrary files on the filesystem (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, or other sensitive configuration files). Although the script expects JSON, it will still attempt to open and read any file, leaking its content if it parses as JSON or raising an error that reveals file existence and permissions. Recommendation: validate the `coverage-file` argument, restricting paths to expected coverage-report locations or a designated sandbox directory; never use user-provided paths without sanitization or allow-listing. If the skill is invoked by an LLM, enforce strict path validation or sandboxing at the invocation boundary. | Static | scripts/analyze_coverage.py:20 |
| MEDIUM | Excessive permissions via arbitrary directory traversal/scanning: the `find_untested_code.py` script takes a `src-dir` path as an argument and recursively scans it for source and test files. If a malicious actor can control the `src-dir` argument, they could instruct the LLM to scan arbitrary directories (e.g., `/`, `/home/user`, `/var/log`). While the script does not read file *content*, it reveals file *names* and directory structures, which can be sensitive and aid reconnaissance for further attacks. Recommendation: validate the `src-dir` argument, restricting paths to expected project source directories or a designated sandbox; never use user-provided paths without sanitization or allow-listing. If the skill is invoked by an LLM, enforce strict path validation or sandboxing at the invocation boundary. | Static | scripts/find_untested_code.py:20 |
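Both static findings call for the same mitigation: resolve the user-supplied path and refuse anything that escapes an allowed base directory. A minimal sketch of that check, assuming the scripts accept a project root; the names `resolve_within`, `load_coverage`, and `list_source_files` are illustrative, not part of the audited skill:

```python
import json
from pathlib import Path

def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and reject anything outside base_dir."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # Path.parents excludes the path itself, so allow exact equality too.
    if candidate != base and base not in candidate.parents:
        raise ValueError(f"path escapes allowed directory: {user_path}")
    return candidate

def load_coverage(base_dir: str, coverage_file: str) -> dict:
    """Read a coverage report only from inside base_dir (cf. analyze_coverage.py)."""
    with resolve_within(base_dir, coverage_file).open(encoding="utf-8") as fh:
        return json.load(fh)

def list_source_files(base_dir: str, src_dir: str) -> list:
    """List .py files only inside base_dir (cf. find_untested_code.py)."""
    root = resolve_within(base_dir, src_dir)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.py"))
```

Because `resolve()` collapses `..` segments and symlinks before the containment check, a traversal argument such as `../../etc/passwd` raises instead of being opened.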
Full report: https://skillshield.io/report/e7596802ac257597