Trust Assessment
imsg received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings, both critical severity (0 high, 0 medium, 0 low): the skill requires excessive system permissions (Full Disk Access), and `imsg send --file` creates a potential arbitrary file exfiltration vector.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, and both critical findings originate there.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive system permissions (Full Disk Access).** The skill explicitly requires "Full Disk Access for your terminal" and "Automation permission to control Messages.app". Granting Full Disk Access to the terminal process that executes this tool allows the AI agent to read any file on the system, including sensitive user data, configuration files, and credentials. This access is highly excessive for a general-purpose skill and enables broad data exfiltration and system compromise if misused. *Recommendation:* re-evaluate whether Full Disk Access is necessary and, if possible, restrict file access to specific sandboxed directories. For Messages.app automation, enforce strict input validation and require user confirmation for sensitive actions. If broad access is unavoidable, isolate the LLM's execution environment and make user interaction mandatory for any sensitive operation. | LLM | SKILL.md:10 |
| CRITICAL | **Arbitrary file exfiltration via `imsg send --file`.** The tool can send files with `imsg send --file /path/pic.jpg`. Combined with the required Full Disk Access, a maliciously prompted agent could send any file on the user's system (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, sensitive documents) to an external recipient via iMessage/SMS, disclosing sensitive information without authorization. *Recommendation:* strictly validate and sanitize file paths passed to `imsg send --file`; whitelist allowed directories or require explicit user confirmation for files outside a designated safe zone; and if Full Disk Access is truly unavoidable, sandbox the execution environment and make user interaction mandatory for file operations. | LLM | SKILL.md:15 |
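The path-whitelisting mitigation recommended above can be sketched as a thin wrapper that an agent harness calls instead of invoking the CLI directly. This is a minimal sketch, not part of imsg itself: the safe-zone directory, function names, and the omission of recipient arguments are all assumptions; only the `imsg send --file` form shown in the finding is taken from the report.

```python
import subprocess
from pathlib import Path

# Hypothetical safe zone; the name and location are an assumption, not part of imsg.
ALLOWED_DIR = Path.home() / "imsg-outbox"

def validate_outgoing_file(file_path: str, allowed_dir: Path = ALLOWED_DIR) -> Path:
    """Resolve symlinks, then refuse any path outside the allowed directory."""
    resolved = Path(file_path).resolve()
    root = allowed_dir.resolve()
    if not resolved.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"refusing to send {resolved}: outside {root}")
    return resolved

def safe_send_file(file_path: str) -> None:
    """Validate the path, then shell out to imsg. Recipient arguments are
    omitted here; only the `send --file` flags quoted in the finding are used."""
    resolved = validate_outgoing_file(file_path)
    subprocess.run(["imsg", "send", "--file", str(resolved)], check=True)
```

Resolving the path before the containment check matters: without it, a symlink placed inside the safe zone (e.g., pointing at `~/.ssh/id_rsa`) would pass a naive prefix test.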
[View the full report on SkillShield](https://skillshield.io/report/30ac9cb9b1ba0d7d)
Powered by SkillShield