Trust Assessment
The `circleci` skill received a trust score of 88/100, placing it in the Mostly Trusted category. The skill passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a potential command injection via unsanitized URL parameters in `curl` commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential command injection via unsanitized URL parameters in `curl` commands | LLM | SKILL.md:12 |

The skill description in `SKILL.md` provides `curl` command examples that include placeholders for user-controlled input such as `{org}`, `{repo}`, and `{workflowId}`. If the LLM interpolates untrusted user input into these commands without sanitization or shell escaping, an attacker can inject arbitrary shell commands: supplying `repo=myrepo; malicious_command` would cause `malicious_command` to execute on the host if the LLM constructs and runs the command directly in a shell. Implementations of this skill should use a robust HTTP client library (e.g., `requests` in Python) to build API calls, which handles URL encoding automatically and avoids the shell entirely. If shell commands like `curl` must be used, every user-provided parameter must be strictly validated and shell-escaped before being interpolated into the command string.
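The remediation described in the finding can be sketched as follows. This is a minimal illustration, not part of the skill itself: the helper names `build_pipeline_url` and `build_curl_command` are hypothetical, and the CircleCI API path shown is only an example of the pattern. The point is that each user-supplied value is percent-encoded before entering a URL and shell-escaped before entering a command string.

```python
import shlex
from urllib.parse import quote


def build_pipeline_url(org: str, repo: str) -> str:
    """Build an API URL with each path segment percent-encoded.

    quote(..., safe="") also encodes "/", so a malicious value like
    "myrepo; rm -rf /" cannot break out of its path segment.
    """
    return (
        "https://circleci.com/api/v2/project/gh/"
        f"{quote(org, safe='')}/{quote(repo, safe='')}/pipeline"
    )


def build_curl_command(url: str, token: str) -> str:
    """If shelling out to curl is unavoidable, shell-escape every
    interpolated value with shlex.quote before building the string."""
    header = shlex.quote("Circle-Token: " + token)
    return f"curl -s -H {header} {shlex.quote(url)}"


# A hostile value is neutralized instead of reaching the shell:
url = build_pipeline_url("my-org", "myrepo; rm -rf /")
print(url)
print(build_curl_command(url, "example-token"))
```

An HTTP client such as `requests` makes the second helper unnecessary, since no shell is involved at all; the `shlex.quote` path is only a fallback for workflows that genuinely must invoke `curl`.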
[Full report](https://skillshield.io/report/848e1fe8a0d9eeb9)
Powered by SkillShield