Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/personal-assistant
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/personal-assistant received a trust score of 30/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 1 medium, and 1 low severity. Key findings include "File read + network send exfiltration", "Unsafe deserialization / dynamic eval", and "Sensitive path access: AI agent config".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
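As a rough illustration of what a pattern-based static check for two of the finding types above might look like (a hypothetical sketch only; the pattern names and rules here are assumptions, not SkillShield's actual detection logic):

```python
import re

# Hypothetical rules, for illustration only: flag base64-decode-then-eval
# (obfuscated code execution) and references to the AI agent config path.
SUSPICIOUS_PATTERNS = {
    "dynamic-eval": re.compile(r"\beval\s*\(\s*(?:base64\.)?b64decode"),
    "ai-agent-config-path": re.compile(r"~/[.]claude/"),
}

def scan_source(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) pairs for every matching line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

sample = 'payload = eval(base64.b64decode(blob))\npath = "~/.claude/creds"\n'
print(scan_source(sample))
```

Real scanners parse the syntax tree rather than matching lines, but the idea is the same: neither `eval` over decoded blobs nor hard-coded agent-config paths belong in a skill whose declared purpose does not require them.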
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** — AI agent config/credential file access. Remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | dist/skills/personal-assistant/SKILL.md:591 |
| HIGH | **Unsafe deserialization / dynamic eval** — decryption followed by code execution. Remove obfuscated code execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | dist/skills/personal-assistant/scripts/task_helper.py:5 |
| HIGH | **Sensitive path access: AI agent config** — access to the AI agent config path `~/.claude/` detected, which may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | dist/skills/personal-assistant/SKILL.md:591 |
| MEDIUM | **Skill designed to store and retrieve sensitive personal data** — the 'personal-assistant' skill collects and stores comprehensive personal information (name, timezone, location, work habits, goals, routines, preferences, and commitments) and provides explicit methods (`get_profile()`, `get_tasks()`, `get_schedule()`, `get_context()`, and a CLI `export` command) to retrieve all of it. While this is core functionality, it represents a significant data exfiltration risk if the LLM is maliciously prompted to retrieve and output this information. Ensure robust prompt engineering and access controls, implement user consent mechanisms for data access, and consider encrypting sensitive data at rest. | Static | SKILL.md:109 |
| LOW | **Skill stores data in user's home directory** — all skill data (profile, tasks, schedule, context) lives in a dedicated directory, `~/.claude/personal_assistant/`. This is a common pattern for local storage, and the current implementation confines access to that subdirectory, but it still grants the skill read/write access within the user's home directory; a malicious modification could expand this to other sensitive files. Regularly audit the skill's code for unauthorized file access patterns, and consider sandboxing or more restrictive file system permissions to limit access strictly to the skill's data directory. | Static | scripts/assistant_db.py:13 |
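The confinement recommended in the LOW finding can be sketched as follows (an illustrative mitigation, not the skill's actual code; `safe_path` is a hypothetical helper name):

```python
from pathlib import Path

# The skill's declared data directory from the finding above.
DATA_DIR = Path.home() / ".claude" / "personal_assistant"

def safe_path(name: str) -> Path:
    """Resolve `name` inside DATA_DIR, rejecting any path that escapes it."""
    candidate = (DATA_DIR / name).resolve()
    # Path.is_relative_to requires Python 3.9+.
    if not candidate.is_relative_to(DATA_DIR.resolve()):
        raise PermissionError(f"path escapes skill data dir: {candidate}")
    return candidate
```

Resolving the candidate before the containment check is what defeats `../` traversal; checking the unresolved string would let `safe_path("../../.ssh/id_rsa")` slip through.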
Full report: [skillshield.io/report/4f5d7a78a4e2841f](https://skillshield.io/report/4f5d7a78a4e2841f)
Powered by SkillShield