Security Audit
railway-environment
github.com/davila7/claude-code-templates

Trust Assessment
railway-environment received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. The findings are Potential Shell Command Injection via CLI Arguments, Sensitive Environment Variables Exposed to LLM, Broad Bash Permissions for Railway CLI, and Potential Injection via GraphQL Query or JSON Variables.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100; all four high-severity findings fall under it.
Last analyzed on February 20, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Shell Command Injection via CLI Arguments.** The skill executes `railway` CLI commands such as `railway environment new <name>`, `railway environment <name>`, and `railway variables --service <service-name>`. If the `<name>` or `<service-name>` arguments are constructed directly from untrusted user input without proper sanitization (e.g., escaping shell metacharacters), a malicious user could inject arbitrary shell commands. **Recommendation:** sanitize and shell-escape all user-provided arguments before execution, using a robust escaping mechanism for shell arguments. | LLM | SKILL.md:48 |
| HIGH | **Sensitive Environment Variables Exposed to LLM.** The skill explicitly uses `railway variables --json` to retrieve rendered (resolved) values of environment variables, including potentially sensitive data such as `DATABASE_URL`, `API_KEY`, and `RAILWAY_*` tokens. While this is an intended function of the skill, it gives the LLM access to these secrets and creates a high risk of data exfiltration if the LLM is prompted to reveal them to the user or an external service, or if its outputs are not carefully controlled. **Recommendation:** implement strict output filtering and access controls for any responses that might contain sensitive variables; redact sensitive values by default and display raw secrets only on explicit, confirmed request. | LLM | SKILL.md:127 |
| HIGH | **Broad Bash Permissions for Railway CLI.** The skill declares `Bash(railway:*)` as an allowed tool, granting it the ability to execute any command available via the `railway` CLI. This broad permission allows potentially destructive actions (e.g., deleting services, projects, or environments) and full control over Railway resources, which could be exploited if the LLM is compromised or given malicious instructions. **Recommendation:** review whether `railway:*` is strictly necessary; if possible, narrow the allowed `railway` commands to those essential for the skill's intended functionality, and add input validation and LLM guardrails against unintended use. | LLM | Manifest |
| HIGH | **Potential Injection via GraphQL Query or JSON Variables.** The skill uses `railway-api.sh` to execute GraphQL queries and mutations, passing the query string and JSON variables as arguments. If the LLM constructs these query strings or variable payloads by directly interpolating untrusted user input (e.g., `commitMessage`, `input` fields, or parts of the query itself), the result could be GraphQL or JSON payload injection, leading to unauthorized data access or modification, or even command execution if `railway-api.sh` or the underlying `railway api` command is vulnerable. The heredoc protects the shell script itself, but not the content *within* the GraphQL query or JSON. **Recommendation:** strictly validate and escape all user input used to build queries or variable payloads for both GraphQL and JSON contexts; avoid direct string interpolation of untrusted input. | LLM | SKILL.md:108 |
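For the shell-injection finding, the simplest mitigation is to keep user-supplied names out of shell strings entirely. A minimal Python sketch, not part of the skill itself; `new_environment` and the name pattern are illustrative assumptions:

```python
import re
import shlex
import subprocess

# Hypothetical allowlist pattern for Railway environment/service names.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]{0,63}$")

def new_environment(name: str) -> str:
    """Create a Railway environment, rejecting names that could carry
    shell metacharacters or flag injection (e.g. a leading '-')."""
    if not NAME_PATTERN.match(name):
        raise ValueError(f"rejected suspicious environment name: {name!r}")
    # Passing a list (not a string) to subprocess.run means no shell is
    # involved, so metacharacters in `name` are never interpreted.
    result = subprocess.run(
        ["railway", "environment", "new", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# If a shell string is unavoidable (e.g. inside a generated script),
# quote each argument explicitly:
safe = shlex.quote("pr-1; rm -rf /")  # wraps the value in single quotes
```

Using an argument list plus an allowlist is defense in depth: even if quoting is bypassed somewhere, a name like `x; rm -rf /` never reaches the CLI.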
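For the secret-exposure finding, a default-redaction filter can sit between `railway variables --json` and anything shown to the user. A sketch, assuming the command emits a flat JSON object of name-to-value pairs; the sensitive-name suffixes below are illustrative, not exhaustive:

```python
import json
import re

# Variable names matching these suffixes are treated as secrets by default.
SENSITIVE = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|URL)$", re.IGNORECASE)

def redact_variables(raw_json: str) -> dict:
    """Mask values whose names look sensitive, so raw secrets never
    reach the model transcript unless explicitly requested."""
    variables = json.loads(raw_json)
    return {
        name: "***REDACTED***" if SENSITIVE.search(name) else value
        for name, value in variables.items()
    }

print(redact_variables('{"DATABASE_URL": "postgres://u:p@h/db", "PORT": "3000"}'))
```

Redact-by-default inverts the risk: an over-aggressive filter hides a harmless value, while a missing filter leaks a credential.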
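The broad `Bash(railway:*)` grant can also be compensated for at runtime with a subcommand allowlist, so destructive verbs are rejected even if the model is steered into emitting them. A hypothetical guardrail; the allowlist contents are assumptions, not the skill's documented requirements:

```python
# Subcommands the skill's workflows plausibly need (assumed for illustration).
ALLOWED_SUBCOMMANDS = {"environment", "variables", "status", "link"}

def check_railway_command(argv: list) -> None:
    """Reject any railway invocation outside the allowlist before it runs,
    e.g. a destructive `railway down`."""
    if len(argv) < 2 or argv[0] != "railway":
        raise PermissionError(f"not a railway command: {argv!r}")
    if argv[1] not in ALLOWED_SUBCOMMANDS:
        raise PermissionError(f"railway subcommand not allowed: {argv[1]!r}")
```

This mirrors the finding's recommendation in code form: narrow permissions at the manifest where possible, and enforce the same boundary again wherever commands are actually dispatched.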
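For the GraphQL/JSON injection finding, untrusted strings should enter the request only through serialized variables, never by splicing them into the query text. A sketch; the mutation and field names are illustrative, not Railway's actual schema:

```python
import json

# A fixed query using a $input placeholder; untrusted data never touches it.
MUTATION = """
mutation Deploy($input: DeployInput!) {
  deploy(input: $input) { id }
}
"""

def build_payload(commit_message: str) -> str:
    """Serialize user input with json.dumps so quotes and braces in the
    input cannot break out of the JSON variables structure."""
    return json.dumps({
        "query": MUTATION,
        "variables": {"input": {"commitMessage": commit_message}},
    })

# A hostile message round-trips as plain data, not as structure:
hostile = 'fix"} , "extra": {"evil'
payload = json.loads(build_payload(hostile))
assert payload["variables"]["input"]["commitMessage"] == hostile
```

The same rule applies to `railway-api.sh`: as the finding notes, the heredoc protects the shell layer, so the remaining job is keeping the GraphQL and JSON layers free of interpolated input.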
Full report: https://skillshield.io/report/7a3ff092990b4987
Powered by SkillShield