Trust Assessment
mobb-vulnerabilities-fixer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include "Unsanitized user input passed to external tool execution" and "Application of unvalidated patches from external source."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsanitized user input passed to external tool execution.** The skill instructs the LLM to invoke external tools (`scan_and_fix_vulnerabilities`, `fetch_available_fixes`, `check_for_new_available_fixes`) with user-controlled parameters such as `path`, `offset`, `limit`, `maxFiles`, `rescan`, `scanRecentlyChangedFiles`, and `fileFilter`. While the skill mentions "Reject paths with traversal patterns" for the `path` parameter, it provides no explicit sanitization instructions for any of these parameters before they are passed to a shell command. If the LLM constructs a shell command from these parameters without robust escaping or validation, a malicious user could inject arbitrary shell commands, leading to remote code execution. Implement robust input validation and shell escaping for all user-controlled parameters before constructing and executing any shell commands: canonicalize `path` and check it for traversal, and quote/escape all string parameters for the target shell. | LLM | SKILL.md:29 |
| HIGH | **Application of unvalidated patches from external source.** The skill instructs the LLM to "Apply returned fixes exactly as provided" by the Mobb MCP tool, after user consent. While user consent is required, the instruction explicitly states "modify nothing else", implying no further validation or sanitization of the patch content itself by the LLM. If the Mobb MCP tool or its output is compromised, or if a user tricks the LLM into applying a malicious patch, this could lead to arbitrary file modifications, introduction of backdoors, or data exfiltration by altering source code. The method of patch application (e.g., the `patch` command or direct file manipulation) is not specified, but any method that applies unvalidated changes to the filesystem carries significant risk. Implement a mechanism to analyze and validate patch content before application: static analysis of the patch diff, restricting modifications to specific file types or locations, or human review of the full patch content beyond a summary. Ensure the patch application mechanism itself is secure and does not allow command injection. | LLM | SKILL.md:40 |
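The remediation for the CRITICAL finding can be sketched in code. This is a minimal illustration, not the skill's actual implementation: the command name `scan_and_fix_vulnerabilities`, the flag names, and the `workspace_root` parameter are assumptions chosen to mirror the parameters listed in the finding. It canonicalizes the path, rejects traversal, range-checks the numeric parameter, and shell-escapes every string argument.

```python
import os
import shlex

def safe_scan_command(path: str, max_files: int, file_filter: str,
                      workspace_root: str) -> str:
    """Build a scan command from user-controlled parameters.

    Hypothetical helper: the tool name and flags are illustrative.
    Raises ValueError instead of emitting an unsafe command.
    """
    # Canonicalize and confirm the resolved path stays inside the workspace.
    root = os.path.realpath(workspace_root)
    real = os.path.realpath(os.path.join(root, path))
    if os.path.commonpath([real, root]) != root:
        raise ValueError(f"path escapes workspace: {path!r}")
    # Validate numeric parameters rather than interpolating them raw.
    if not 0 < max_files <= 10_000:
        raise ValueError("maxFiles out of range")
    # shlex.quote neutralizes shell metacharacters in string arguments.
    return (f"scan_and_fix_vulnerabilities --path {shlex.quote(real)} "
            f"--maxFiles {max_files} --fileFilter {shlex.quote(file_filter)}")
```

With this shape, an injection attempt such as a `fileFilter` of `*.py; rm -rf /` is emitted as a single quoted argument, and a `path` of `../../etc/passwd` is rejected before any command is built.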
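For the HIGH finding, the suggested "static analysis of the patch diff" step could look like the following sketch. It is an assumption-laden illustration, not Mobb's API: it parses the `+++` headers of a unified diff and rejects patches that touch absolute paths, traversal paths, or file types outside an allow-list, before anything is applied to the filesystem.

```python
ALLOWED_EXTENSIONS = {".py", ".js", ".ts", ".java"}  # assumed source-only policy

def touched_files(diff_text: str) -> list[str]:
    """Extract target file paths from the '+++' headers of a unified diff."""
    files = []
    for line in diff_text.splitlines():
        if line.startswith("+++ "):
            target = line[4:].split("\t")[0]
            if target.startswith("b/"):       # strip the git-style prefix
                target = target[2:]
            files.append(target)
    return files

def validate_patch(diff_text: str) -> None:
    """Reject patches that modify files outside the allowed scope.

    Raises ValueError on absolute paths, traversal segments, or
    disallowed file types; returns None if the patch passes.
    """
    for path in touched_files(diff_text):
        if path.startswith("/") or ".." in path.split("/"):
            raise ValueError(f"suspicious patch target: {path}")
        if not any(path.endswith(ext) for ext in ALLOWED_EXTENSIONS):
            raise ValueError(f"disallowed file type: {path}")
```

A check like this complements, rather than replaces, human review: it catches a patch that quietly redirects its target to a cron file or CI config, while the reviewer assesses the diff's actual content.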