Trust Assessment
locu received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. The key findings are that the skill documents shell commands for API interaction, and that an API token is exposed to the shell environment in its example commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Skill documents shell commands for API interaction.** The `SKILL.md` file provides `curl` commands as examples for interacting with the Locu API. If the AI agent directly executes these shell commands, or generates code that executes them, there is a risk of command injection if user-provided input is not properly sanitized before being included in the command string. While the provided examples are static, the general approach of using shell commands for API calls can introduce vulnerabilities if not handled with extreme care by the agent's execution environment. Agents should avoid direct shell execution of commands constructed with user input. Instead, use dedicated HTTP client libraries (e.g., `requests` in Python) with proper parameterization to prevent command injection. If shell execution is unavoidable, ensure all user-provided input is strictly validated and sanitized, or passed as arguments to a subprocess call in a way that prevents shell interpretation. | LLM | skills/davidsmorais/locu/SKILL.md:13 |
| MEDIUM | **API token exposed to shell environment in example commands.** The `SKILL.md` demonstrates the use of `$LOCU_API_TOKEN` directly within `curl` shell commands. If the AI agent's execution environment allows arbitrary shell command execution, a malicious prompt could potentially instruct the agent to print or transmit the value of this environment variable, leading to exfiltration of the API token. Agents should be designed to access credentials securely, ideally through a secrets management system or dedicated API, rather than relying on environment variables directly accessible to shell commands. If environment variables are used, the execution environment must strictly limit shell access and prevent any commands that could reveal or transmit sensitive variables. | LLM | skills/davidsmorais/locu/SKILL.md:13 |
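The first finding's mitigation, passing API calls as argument vectors rather than interpolated shell strings, can be sketched in Python. The endpoint, path, and token below are illustrative placeholders, not the real Locu API:

```python
import shlex

def build_curl_args(base_url: str, path: str, token: str) -> list[str]:
    # Build an argv list for subprocess.run(args) WITHOUT shell=True:
    # each element reaches curl verbatim, so shell metacharacters in
    # `path` cannot inject extra commands.
    return ["curl", "-s", "-H", f"Authorization: Bearer {token}",
            f"{base_url}/{path}"]

# Input a malicious prompt might smuggle into a naive shell string:
user_input = "menus; rm -rf ~"
args = build_curl_args("https://api.locu.example", user_input, "TOKEN")
# In an argv list, "; rm -rf ~" is just part of the URL, never a command.
assert args[-1] == "https://api.locu.example/menus; rm -rf ~"

# If a shell string is truly unavoidable, quote every untrusted piece:
safe_cmd = f"curl https://api.locu.example/{shlex.quote(user_input)}"
print(safe_cmd)  # the input is wrapped in single quotes, defusing the ';'
```

The same principle applies when the agent generates code instead of running `curl` directly: an HTTP client library that takes the URL and headers as separate arguments never hands untrusted text to a shell at all.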
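For the second finding, one defensive pattern is to read the credential at call time inside the program, rather than interpolating `$LOCU_API_TOKEN` into a shell string, and to redact it before anything is logged. The variable name comes from the finding; the helper functions and demo token are hypothetical:

```python
import os

def auth_headers() -> dict[str, str]:
    # Fetch the token inside the process at call time, so it never
    # appears in a shell command line, process listing, or history.
    token = os.environ.get("LOCU_API_TOKEN")
    if not token:
        raise RuntimeError("LOCU_API_TOKEN is not set")
    return {"Authorization": f"Bearer {token}"}

def redact(value: str) -> str:
    # Keep only the last 4 characters when logging a secret.
    return "*" * max(len(value) - 4, 0) + value[-4:]

os.environ["LOCU_API_TOKEN"] = "sk-demo-1234"  # demo value only
print(redact(os.environ["LOCU_API_TOKEN"]))    # prints ********1234
```

A secrets manager is still preferable where available; the point of the sketch is that the token stays out of any string a shell might interpret or echo.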
Full report: https://skillshield.io/report/e8de4265f72290c8
Powered by SkillShield