Security Audit
claude-dev-suite/claude-dev-suite:skills/api-integration/graphql-codegen
github.com/claude-dev-suite/claude-dev-suite

Trust Assessment
claude-dev-suite/claude-dev-suite:skills/api-integration/graphql-codegen received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. The key findings are: Excessive Permissions (`Bash`, `Write`, `Edit`) leading to command-injection risk (critical); potential credential harvesting via the `getToken()` example (high); and potential data exfiltration via `process.env` access (high).
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 40/100.
Last analyzed on March 16, 2026 (commit 8c8434ef). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive Permissions (Bash, Write, Edit) leading to Command Injection Risk.** The skill declares the highly privileged `Bash`, `Write`, and `Edit` permissions in its manifest, and its `SKILL.md` documentation illustrates `bash` usage (e.g., `npm install`, `npx graphql-codegen`, CI/CD scripts). This combination creates a severe command-injection risk: a malicious prompt could instruct the agent to execute arbitrary shell commands, modify files, or exfiltrate data, potentially leading to system compromise or unauthorized actions. *Recommendation:* strictly re-evaluate and minimize the declared permissions. If `Bash` is absolutely necessary, rigorously sanitize and validate any user-controlled input passed to shell commands, and limit the `Write` and `Edit` scope to specific, non-sensitive directories. | LLM | Manifest:1 |
| HIGH | **Potential Credential Harvesting via `getToken()` example.** The skill's documentation provides an example `fetcher.ts` that includes the header ``Authorization: `Bearer ${getToken()}` ``, demonstrating how sensitive authentication tokens would be handled. Given the skill's declared `Bash` and `Read` permissions, a malicious prompt could instruct the agent to implement `getToken()` insecurely (e.g., reading from an untrusted source) or to log or exfiltrate the retrieved token, leading to credential harvesting. *Recommendation:* avoid showing patterns that handle credentials without strong warnings about secure implementation. If `getToken()` is meant to be implemented by the user, the skill should be designed to prevent exfiltration of its return value. | LLM | SKILL.md:135 |
| HIGH | **Potential Data Exfiltration via `process.env` access.** The skill's documentation shows a `codegen.ts` configuration example that reads an environment variable for authorization: ``Authorization: `Bearer ${process.env.GRAPHQL_TOKEN}` ``. Environment variables often contain sensitive information; with `Read` and `Bash` permissions, a malicious prompt could instruct the agent to read and exfiltrate their values. *Recommendation:* avoid showing patterns that access environment variables without strong warnings about secure handling. The skill should be designed to prevent exfiltration of environment variable values. | LLM | SKILL.md:218 |
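The mitigation suggested for the critical finding can be sketched as a single chokepoint that every shell invocation must pass through. This is an illustrative TypeScript sketch, not code from the skill: the `isAllowedCommand` helper and its allowlist are hypothetical, seeded with the two commands the finding cites from `SKILL.md`.

```typescript
// Hypothetical allowlist guard: permit only the exact commands the skill documents.
const ALLOWED_COMMANDS = new Set<string>([
  "npm install",
  "npx graphql-codegen",
]);

// Reject any command not on the explicit allowlist before it ever reaches Bash.
function isAllowedCommand(command: string): boolean {
  // Normalize whitespace so "npx  graphql-codegen " still matches its allowlist entry.
  const normalized = command.trim().replace(/\s+/g, " ");
  return ALLOWED_COMMANDS.has(normalized);
}
```

An allowlist is deliberately stricter than input sanitization: rather than trying to strip dangerous characters from arbitrary commands, it refuses anything that is not an exact, pre-approved invocation, which also blocks shell-metacharacter tricks such as appended `&&` or `|` clauses.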
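Both high-severity findings concern the bearer token. The following is a hedged sketch of what a hardened `fetcher.ts` variant could look like, under the assumptions that the token lives in the `GRAPHQL_TOKEN` variable named in the finding and that Node 18+ global `fetch` is available; `createFetcher` and its error messages are illustrative, not the skill's shipped code.

```typescript
// Illustrative sketch only: a fetcher that keeps the token out of logs and errors.
function createFetcher(endpoint: string) {
  return async function fetcher<T>(
    query: string,
    variables?: Record<string, unknown>
  ): Promise<T> {
    // Read the token at call time; never store it in wider scope or log it.
    const token = process.env.GRAPHQL_TOKEN;
    if (!token) {
      // Fail fast without echoing any secret material.
      throw new Error("GRAPHQL_TOKEN is not set");
    }
    const res = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ query, variables }),
    });
    if (!res.ok) {
      // Report the status only; response bodies can reflect auth headers.
      throw new Error(`GraphQL request failed with status ${res.status}`);
    }
    return (await res.json()).data as T;
  };
}
```

The design point is that the token's value appears in exactly one place (the `Authorization` header) and in no error path, so even a prompt that coaxes the agent into printing exceptions cannot surface the credential.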
Full report: [skillshield.io/report/c33d4141507d3153](https://skillshield.io/report/c33d4141507d3153)