Trust Assessment
aifs received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding (0 critical, 1 high, 0 medium, 0 low severity): Potential Command Injection via Shell Examples.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Shell Examples | LLM | SKILL.md:121 |

**Potential Command Injection via Shell Examples (HIGH)**

The skill documentation provides `bash` examples that use shell variable interpolation and command substitution (`$KEY`, `$(date ...)`, `$(curl ... | jq ...)`) to construct API requests. If the LLM executes these `bash` snippets directly, or constructs similar commands by interpolating user-controlled input (including data retrieved from the AIFS service) into shell commands without proper sanitization, arbitrary command injection becomes possible. In particular, the 'Append to log' example reads content from the AIFS service into a shell variable (`EXISTING`) and then re-interpolates this potentially untrusted content into another `curl` command's data payload. If the content read from AIFS contains shell metacharacters, they could be executed in the shell environment.

Recommended mitigations:

1. **Avoid direct shell execution**: The LLM should use its internal HTTP client capabilities to interact with the AIFS API rather than constructing and executing `curl` commands via a shell. This eliminates the shell injection surface entirely.
2. **Strict input sanitization**: If shell execution is unavoidable, all user-controlled input (including file paths and content read from AIFS) must be rigorously escaped so that shell metacharacters cannot be interpreted as commands.
3. **Parameterize commands**: Use parameterized command execution where arguments are passed separately from the command string, preventing shell interpretation.
4. **Clarify the LLM execution model**: The skill documentation should state explicitly that the snippets are *examples* for human readers, and that the LLM should use its secure internal mechanisms for API interaction.
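Mitigations 2 and 3 above can be sketched in a few lines. This is a minimal illustration, not part of the aifs skill itself: it assumes a Python execution environment, and the `untrusted` payload stands in for content read back from the AIFS service.

```python
import shlex
import subprocess

# Content read back from a remote service (the AIFS 'Append to log' pattern);
# treat it as untrusted -- it may contain shell metacharacters.
untrusted = "build ok $(touch /tmp/pwned) `id`; rm -rf ~"

# Mitigation 3 (parameterized execution): pass arguments as a list so that
# no shell ever parses the untrusted content.
r1 = subprocess.run(
    ["printf", "%s", untrusted], capture_output=True, text=True, check=True
)
assert r1.stdout == untrusted  # passed through literally, nothing executed

# Mitigation 2 (sanitization): if a shell string is unavoidable, quote every
# untrusted operand with shlex.quote before interpolating it.
cmd = "printf %s " + shlex.quote(untrusted)
r2 = subprocess.run(cmd, shell=True, capture_output=True, text=True, check=True)
assert r2.stdout == untrusted  # metacharacters neutralized by quoting
```

The same principle applies to the `curl` examples the finding describes: either drop the shell entirely (mitigation 1) or ensure any value spliced into a command line has been quoted first.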
[Full report](https://skillshield.io/report/57490b189965babf)
Powered by SkillShield