Security Audit
claude-dev-suite/claude-dev-suite:skills/ai-integration/anthropic-python
Source: github.com/claude-dev-suite/claude-dev-suite

Trust Assessment
claude-dev-suite/claude-dev-suite:skills/ai-integration/anthropic-python received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key findings are excessive filesystem permissions declared in the manifest and potential data exfiltration through arbitrary file reading.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 16, 2026 (commit 8c8434ef). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Excessive filesystem permissions declared.** The skill's manifest declares broad filesystem permissions: `Read`, `Grep`, `Glob`, `Write`, and `Edit`. `Write` and `Edit` are rarely necessary for an LLM interaction skill and, combined with other vulnerabilities or a malicious prompt, could allow an attacker to modify or delete arbitrary files on the host system. Even `Read` and `Glob` grant extensive filesystem access. Remediation: restrict `allowed-tools` to the absolute minimum the skill needs. For an Anthropic SDK wrapper, `Write` and `Edit` are highly suspicious; `Read` may be justified if the skill only reads images for vision input, but `Write`/`Edit` are not. | LLM | Manifest |
| HIGH | **Potential data exfiltration through arbitrary file reading.** The `encode_image` function, demonstrated in the 'Vision (Image Input)' section, reads the file at `path` via `Path(path).read_bytes()`, base64-encodes it, and sends the result to the Anthropic API. Given the `Read` permission declared in the manifest, a malicious prompt could instruct the agent to read sensitive files (e.g., `/etc/passwd`, `.env` files, SSH keys) and exfiltrate their contents to the Anthropic API, an external service. Remediation: strictly validate and sanitize any file path passed to `encode_image`; restrict reads to a specific, non-sensitive directory or require explicit user confirmation for files outside a designated safe zone; and consider whether the agent truly needs to read arbitrary files, or whether image paths should be pre-defined or strictly controlled. | LLM | SKILL.md:121 |
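One way to act on the first finding is to tighten the `allowed-tools` declaration in the skill's SKILL.md frontmatter. A minimal sketch, assuming the standard skill frontmatter format; the `description` text is a placeholder, not the skill's actual manifest:

```yaml
---
name: anthropic-python
description: Helpers for calling the Anthropic Python SDK.
# Write and Edit removed; Read retained only for vision image input.
allowed-tools: Read, Grep, Glob
---
```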