Security Audit
developer-growth-analysis
github.com/ComposioHQ/awesome-claude-skills
Trust Assessment
developer-growth-analysis received a trust score of 20/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 1 critical, 3 high, 2 medium, and 0 low severity. Key findings include "File read + network send exfiltration," "Sensitive path access: AI agent config," and "Access to local chat history file."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 56/100, making it the weakest area of the skill's security posture.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** (AI agent config/credential file access). *Remediation:* Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | developer-growth-analysis/SKILL.md:61 |
| HIGH | **Sensitive path access: AI agent config.** Access to an AI agent config path was detected: `~/.claude/`. This may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared. | Static | developer-growth-analysis/SKILL.md:61 |
| HIGH | **Access to local chat history file.** The skill explicitly instructs the LLM to read the user's local chat history file at `~/.claude/history.jsonl`. This file may contain highly sensitive information about the user's coding projects, discussions, proprietary code snippets, and potentially credentials. Direct access to local files, even specific ones, is a significant permission that could lead to data leakage if the skill's processing or output is compromised, or if the file path could be manipulated (though here it is hardcoded). *Remediation:* Ensure the LLM execution environment strictly sandboxes file access, allowing only explicitly approved and minimal file operations. Require explicit user consent before accessing local files, or let the user securely upload the history instead of granting the agent direct file system access. | LLM | SKILL.md:68 |
| HIGH | **Exfiltration of sensitive local data to an external service (Slack).** The skill processes potentially sensitive local chat history and then explicitly instructs the LLM to send a "personalized growth report" derived from this analysis to the user's Slack DMs. Although the destination is the user's own DMs, this action transmits data that originated from a local file (`~/.claude/history.jsonl`) to an external third-party service, which constitutes data exfiltration. If the Slack integration is compromised or misconfigured, this sensitive report could be exposed to unauthorized parties. *Remediation:* Implement strict data governance policies. Obtain explicit user consent before sending any data derived from local files to external services. Consider anonymizing or redacting highly sensitive information before transmission, and let the user review the report before it is sent to Slack or download it locally instead. | LLM | SKILL.md:50 |
| MEDIUM | **Broad tool permissions for Slack integration.** The skill instructs the LLM to use `RUBE_MANAGE_CONNECTIONS` to "initiate Slack auth" if not already connected, implying broad permission to establish new connections to external services. Granting an AI agent the ability to initiate authentication flows without explicit, granular user consent for each action could lead to unauthorized access or misconfiguration if the agent's instructions are compromised or misinterpreted. *Remediation:* Restrict the agent's ability to initiate authentication flows; require the user to connect or authorize services manually. If automated connection is necessary, use a highly secure, user-confirmed OAuth flow with minimal scope, requesting only the permissions strictly needed (e.g., `chat:write` to send DMs, not broader administrative permissions). | LLM | SKILL.md:178 |
| MEDIUM | **Potential prompt/command injection via untrusted chat history content.** The skill analyzes chat history (`~/.claude/history.jsonl`), which is untrusted input, to identify patterns and improvement areas. These findings are then used to construct search queries for `RUBE_SEARCH_TOOLS` and the content of the report sent to Slack. If the history contains malicious or specially crafted strings (e.g., "ignore previous instructions", "execute `rm -rf /`", or strings designed to manipulate the Rube tools) and they are incorporated into subsequent prompts or tool arguments without sanitization, this could lead to prompt injection against the LLM or command injection against the underlying Rube tools. The skill description does not mention any sanitization or validation of the chat history content before it is processed or used in tool calls. *Remediation:* Implement robust input validation and sanitization for all data extracted from the chat history, especially before it is used to construct prompts or tool arguments. Properly escape or filter user-generated content so it cannot be interpreted as instructions or commands; the LLM should be robust against prompt injection, but explicit sanitization adds a critical layer of defense. | LLM | SKILL.md:73 |
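The two HIGH findings recommend explicit consent before reading `~/.claude/history.jsonl` and redaction before anything derived from it leaves the machine. A minimal sketch of that pattern is below; the helper names (`require_consent`, `redact_secrets`), the secret-matching patterns, and the `display` field name are illustrative assumptions, not part of the skill or of SkillShield's output.

```python
# Hedged sketch: consent-gated read of a local history file plus redaction of
# credential-like strings before the content is used anywhere else. All names
# and patterns here are illustrative, not taken from the audited skill.
import json
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # shape of an AWS access key ID
]

def require_consent(path: Path) -> bool:
    """Ask the user before touching a sensitive local file."""
    answer = input(f"Allow this skill to read {path}? [y/N] ")
    return answer.strip().lower() == "y"

def redact_secrets(text: str) -> str:
    """Replace credential-like substrings before the text leaves the machine."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def load_history(path: Path) -> list[str]:
    if not require_consent(path):
        raise PermissionError("user declined file access")
    entries = []
    for raw in path.read_text().splitlines():
        try:
            entry = json.loads(raw)  # history.jsonl is one JSON object per line
        except json.JSONDecodeError:
            continue
        entries.append(redact_secrets(str(entry.get("display", ""))))
    return entries
```

Gating the read behind `require_consent` and redacting before any downstream use addresses both the local-file-access finding and the Slack exfiltration finding, since the report sent externally would only ever see redacted text.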
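For the MEDIUM prompt-injection finding, the remediation is to sanitize history-derived strings before they become prompts or tool arguments. A minimal sketch, assuming an allow-list-style filter; the specific phrase list, stripped characters, and the 200-character cap are illustrative defaults rather than values from the skill:

```python
# Hedged sketch: sanitize untrusted chat-history text before it is embedded in
# a follow-up prompt or a RUBE_SEARCH_TOOLS query. Patterns are illustrative.
import re

# Known steering phrases to strip; a real deployment would maintain a longer list.
INJECTION_PHRASES = re.compile(
    r"(?i)\b(ignore (all )?previous instructions|system prompt|you are now)\b"
)

def sanitize_for_tool_query(text: str, max_len: int = 200) -> str:
    text = text.replace("\n", " ").replace("\r", " ")  # collapse multi-line payloads
    text = re.sub(r"[`$;|&<>{}]", "", text)            # drop shell/markup metacharacters
    text = INJECTION_PHRASES.sub("", text)             # remove known steering phrases
    return text[:max_len].strip()
```

This does not make the LLM itself injection-proof, but it prevents the most direct path from a crafted history entry to a shell metacharacter or instruction override inside a tool call.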
[View the full report on SkillShield](https://skillshield.io/report/377098497a83e051)
Powered by SkillShield