Trust Assessment
little-snitch received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings: the skill requires root privileges for core functionality, it can export sensitive system configuration as root, and there is potential for command injection via `sudo` with unsanitized arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill requires root privileges for core functionality.** The `little-snitch` skill, as described in the documentation, requires `sudo` (root access) for many of its core functions, including activating/deactivating profiles, managing rule groups, logging traffic, and exporting/restoring the data model. Granting an AI agent the ability to execute commands as root poses a severe security risk, as misuse or exploitation could lead to full system compromise. The `SKILL.md` itself warns about the power and potential misuse of the `littlesnitch` command. *Recommendation:* Re-evaluate whether root access is necessary for every operation. If it is unavoidable, strictly validate and sanitize all arguments passed to `littlesnitch` commands to prevent command injection, use a more granular privilege-escalation mechanism if one is available, and restrict the agent's ability to execute arbitrary commands with `sudo`. | LLM | SKILL.md:18 |
| HIGH | **Skill can export sensitive system configuration as root.** The `littlesnitch export-model` command, which requires root privileges, exports the entire Little Snitch configuration data model. This model likely contains sensitive information about network rules, application behaviors, and user preferences. If an AI agent is prompted to run this command and then transmit the resulting `backup.json` file, critical system configuration data could be exfiltrated. *Recommendation:* Strictly control the agent's ability to execute `export-model` and to access or transmit files the skill creates. Handle any exported data securely and keep it inaccessible to unauthorized parties; where possible, redact sensitive information from the exported model or limit the scope of what can be exported. | LLM | SKILL.md:50 |
| HIGH | **Potential for command injection via `sudo` and unsanitized arguments.** The skill executes `littlesnitch` commands, many of which require `sudo`. If the arguments to these commands (e.g., profile names, rule group names, dates) are derived from untrusted user input without proper sanitization, an attacker could inject shell metacharacters or malicious commands. Because these commands run as root, a successful injection could lead to arbitrary code execution and full system compromise. *Recommendation:* Robustly validate and sanitize every argument passed to `littlesnitch` commands, especially those executed with `sudo`; escape or quote user-provided strings so the shell never interprets metacharacters in them, and avoid concatenating user input directly into shell commands. | LLM | SKILL.md:30 |
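The injection-hardening recommendation above can be sketched in Python. This is a minimal illustration, not part of the skill: the allow-list contents and the `build_command` helper are hypothetical, and only `export-model` is a subcommand named in this report. The key idea is to pass arguments as an argv list (so no shell ever parses them) and to gate which subcommands the agent may run at all.

```python
import subprocess

# Hypothetical allow-list; adjust to the subcommands your deployment actually needs.
ALLOWED_SUBCOMMANDS = {"export-model", "restore-model"}


def build_command(subcommand: str, *args: str) -> list[str]:
    """Build an argv list for littlesnitch.

    Passing a list to subprocess.run with shell=False (the default) means
    no shell parses the arguments, so metacharacters such as ; | or $( )
    in user-supplied values stay inert literal strings.
    """
    if subcommand not in ALLOWED_SUBCOMMANDS:
        raise ValueError(f"subcommand not allowed: {subcommand!r}")
    for a in args:
        # Reject option-like values so user input cannot smuggle in flags.
        if a.startswith("-"):
            raise ValueError(f"option-like argument rejected: {a!r}")
    return ["sudo", "littlesnitch", subcommand, *args]


def run_littlesnitch(subcommand: str, *args: str) -> str:
    """Execute a validated littlesnitch command and return its stdout."""
    result = subprocess.run(
        build_command(subcommand, *args),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Even a hostile-looking argument like `backup; rm -rf /` reaches `execve()` as a single literal string rather than being interpreted by a shell, which is why list-form `subprocess.run` is preferable to string concatenation here.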
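The redaction suggestion for exported configuration data can also be sketched. The key names below are placeholders: the Little Snitch data-model schema is not documented in this report, so substitute whichever fields your exported `backup.json` actually contains before relying on this.

```python
# Hypothetical sensitive field names; replace with the real keys
# found in your exported Little Snitch data model.
SENSITIVE_KEYS = {"userComments", "lastConnections", "deviceIdentifiers"}


def redact(node):
    """Recursively drop sensitive keys from a parsed backup.json tree."""
    if isinstance(node, dict):
        return {k: redact(v) for k, v in node.items() if k not in SENSITIVE_KEYS}
    if isinstance(node, list):
        return [redact(v) for v in node]
    return node
```

Applying `redact` to the parsed export before it leaves the machine limits what an agent can exfiltrate even if it is tricked into transmitting the file.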