Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/resume-manager
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/resume-manager received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include file read plus network send exfiltration, sensitive path access to AI agent config, and arbitrary file write via an unsanitized output path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. Remediation: remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | dist/skills/resume-manager/SKILL.md:601 |
| CRITICAL | **Arbitrary File Write via Unsanitized Output Path.** The `generate_resume` function in `scripts/pdf_generator.py` accepts an `output_path` argument that is passed, without sanitization, to `reportlab.platypus.SimpleDocTemplate`. An attacker can therefore specify an arbitrary file path outside the intended `~/Downloads` directory; for example, the LLM could be instructed to call the function with `output_path='/etc/passwd'` (or a similar sensitive system path) to overwrite system files with a PDF, leading to denial of service or other compromise. Remediation: validate and sanitize `output_path`. Canonicalize it with `pathlib.Path.resolve()`, confine it strictly to an allowed, non-sensitive base directory (e.g., `~/Downloads`), and reject absolute paths and traversal sequences (e.g., `../`). | Static | scripts/pdf_generator.py:103 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path `~/.claude/` was detected. This may indicate credential theft. Remediation: verify that access to this sensitive path is justified and declared. | Static | dist/skills/resume-manager/SKILL.md:601 |
| HIGH | **Sensitive Data Exfiltration via `export` Command.** The `scripts/resume_db.py` script exposes an `export` command via its command-line interface (`python3 scripts/resume_db.py export`). This command prints the entire contents of the user's sensitive resume database (personal information, work experience, projects, education, and skills) to standard output as JSON. An attacker could craft a prompt instructing the LLM to run this command and exfiltrate the output, fully compromising the user's professional profile data. Remediation: add stricter access controls or confirmation mechanisms for sensitive operations like data export. If export is necessary, ensure the output is handled securely and not easily exfiltrated by the LLM (e.g., require explicit user confirmation or write to a secure, non-exfiltratable location), and consider encrypting sensitive data at rest. | Static | scripts/resume_db.py:309 |
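The path confinement recommended for the `output_path` finding can be sketched as follows. This is an illustrative example, not the skill's actual code; `safe_output_path` is a hypothetical helper name, and the fix assumes Python 3.9+ for `Path.is_relative_to`.

```python
from pathlib import Path

def safe_output_path(output_path: str, base_dir: str = "~/Downloads") -> Path:
    """Resolve output_path and refuse anything that escapes base_dir."""
    base = Path(base_dir).expanduser().resolve()
    # Joining an absolute path replaces `base` entirely in pathlib,
    # and resolve() collapses any `../` segments, so both attack
    # shapes are caught by the single containment check below.
    candidate = (base / output_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"output path escapes {base}: {output_path!r}")
    return candidate
```

With this check in place, `generate_resume` would pass the returned `Path` to `SimpleDocTemplate` instead of the raw argument, so `output_path='/etc/passwd'` or `output_path='../../etc/passwd'` raises before any file is written.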
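Likewise, the confirmation gate suggested for the `export` finding might look like the sketch below. The function name, `db` shape, and `confirmed` flag are hypothetical, not taken from `resume_db.py`; the point is that a sensitive operation refuses to run unless the user, rather than the LLM, has explicitly opted in.

```python
import json

def export_resume(db: dict, confirmed: bool = False) -> str:
    """Serialize the resume database to JSON, but only after explicit
    user confirmation (e.g. an interactive prompt or a CLI flag the
    LLM is not permitted to set on its own)."""
    if not confirmed:
        raise PermissionError(
            "export requires explicit user confirmation"
        )
    return json.dumps(db, indent=2)
```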
Full report: https://skillshield.io/report/a1eabe11cddc5668