Trust Assessment
api-dev received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 2 medium, and 1 low severity. Key findings: an example 'kill' command that could terminate arbitrary processes, a 'curl' example whose '$TOKEN' could be exfiltrated if the target URL is malicious, and Python example scripts that allow user-controlled network requests, posing SSRF and data exfiltration risks.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Example 'kill' command could lead to arbitrary process termination.** The skill provides an example of using `kill $(lsof -t -i :3000)` to terminate a process on a specific port. If the LLM's execution environment allows it to execute shell commands and a malicious user can influence the port number, this could terminate arbitrary processes on the host system. While presented as a debugging example, directly executing such a powerful command with potentially untrusted input poses a significant command injection risk. *Mitigation:* the skill author should add a prominent warning about executing system commands with untrusted or unvalidated input; the LLM execution environment should strictly sanitize or whitelist arguments passed to commands like `kill`, and confirm user intent before executing them, especially when the target is user-controlled. | LLM | SKILL.md:242 |
| MEDIUM | **Python example scripts allow user-controlled network requests, posing SSRF and data exfiltration risks.** The `api_test.py` and `mock_server.py` example scripts take a base URL (`BASE`) or port (`PORT`) as command-line arguments (`sys.argv`). If the LLM generates and executes these scripts, and a malicious user can inject an arbitrary URL (e.g., `file:///etc/passwd`, `http://internal-service`, or an external malicious server) into these arguments, this could lead to Server-Side Request Forgery (SSRF), allowing access to internal resources, or data exfiltration to an attacker-controlled server. *Mitigation:* the skill author should add explicit warnings about validating user-provided network endpoints when generating and executing scripts; the LLM execution environment should strictly validate and sanitize all network-related arguments passed to generated scripts, including whitelisting allowed schemes, hosts, and ports, and preventing access to internal network ranges or local files. | LLM | SKILL.md:113 |
| LOW | **Example 'curl' command demonstrates use of '$TOKEN', which could be exfiltrated if the target URL is malicious.** The skill provides `curl` examples that use a `$TOKEN` environment variable in the `Authorization` header. If the LLM's execution environment allows it to execute shell commands and a malicious user can influence the target URL, the `$TOKEN` (or any other sensitive data provided by the user) could be sent to an attacker-controlled server. While the example uses `api.example.com`, the pattern is a risk if the LLM generates similar commands with untrusted URLs. *Mitigation:* the skill author should add a note about the dangers of sending sensitive data to untrusted URLs; the LLM execution environment should validate or whitelist URLs for network requests, especially when sensitive data (like tokens) is involved, and confirm with the user before sending sensitive data to external endpoints. | LLM | SKILL.md:21 |
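The sanitization suggested for the first finding can be sketched as a small Python wrapper (this is an illustrative sketch, not part of the skill; the helper name `kill_process_on_port` is hypothetical). Validating that the port is a plain integer in range, and passing the resulting PIDs to `kill` as separate arguments rather than through shell interpolation, closes the injection path:

```python
import re
import subprocess

def kill_process_on_port(port: str) -> None:
    """Terminate the process listening on `port`, validating input first."""
    # Reject anything that is not a plain integer in the valid port range.
    # This blocks injected payloads like "3000; rm -rf /" outright.
    if not re.fullmatch(r"\d{1,5}", port) or not (1 <= int(port) <= 65535):
        raise ValueError(f"invalid port: {port!r}")
    # Find the PID(s) with lsof, then pass each to kill as a separate
    # argument list entry, so no shell ever interprets untrusted input.
    result = subprocess.run(
        ["lsof", "-t", "-i", f":{port}"], capture_output=True, text=True
    )
    for pid in result.stdout.split():
        subprocess.run(["kill", pid], check=False)
```

Using argument lists with `subprocess.run` (rather than `shell=True`) is what makes the validated port the only attacker-influenced value.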
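For the SSRF finding, the recommended scheme/host validation could look like the following minimal sketch (the function `validate_base_url` is hypothetical, not from the skill's scripts). It rejects non-HTTP schemes such as `file://` and refuses hosts that resolve to private, loopback, or link-local addresses:

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}  # blocks file://, gopher://, etc.

def validate_base_url(url: str) -> str:
    """Reject URLs that use non-HTTP schemes or target internal hosts."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"disallowed scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("URL has no host")
    # Resolve the host and refuse internal address ranges.
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"internal address not allowed: {addr}")
    return url
```

Note that a production implementation should also pin the resolved address for the actual request (resolving twice leaves a DNS-rebinding window); this sketch only shows the validation shape.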
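The token-exfiltration finding suggests whitelisting hosts before attaching credentials. A minimal sketch of that idea (the allowlist and helper name are hypothetical; `api.example.com` is the host used in the skill's own examples):

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only these hosts may receive the token.
TRUSTED_API_HOSTS = {"api.example.com"}

def auth_headers_for(url: str, token: str) -> dict:
    """Build an Authorization header only for explicitly trusted hosts."""
    host = urlparse(url).hostname
    if host not in TRUSTED_API_HOSTS:
        raise ValueError(f"refusing to send token to untrusted host: {host!r}")
    return {"Authorization": f"Bearer {token}"}
```

Gating header construction (rather than the request itself) ensures a generated command can still probe arbitrary URLs for testing, but never with the user's credentials attached.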
[View the full report](https://skillshield.io/report/7e5b4ed5535593a9)
Powered by SkillShield