Trust Assessment
linkedin-automator received a trust score of 74/100, placing it in the Caution category. Users should review the security findings below before deploying this skill.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Data Exfiltration via Image Upload and Prompt Injection Risk in Scheduled Content Payload.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Data Exfiltration via Image Upload — The `post.sh` and `schedule.sh` scripts allow specifying an arbitrary local file path for an image to be uploaded to LinkedIn via the `browser` tool. If the `browser` tool has broad filesystem access, a malicious user could instruct the LLM to upload sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) from the host system to LinkedIn or another web service. This capability, while intended for legitimate image uploads, poses a data exfiltration risk if the `browser` tool's permissions are not sufficiently restricted. Remediation: Ensure the `browser` tool is strictly sandboxed and only allowed to access a limited, designated directory for file uploads. Implement explicit user confirmation for file uploads, especially for paths outside a designated 'upload' directory. Validate and sanitize file paths provided by the user/LLM before passing them to the browser tool. | LLM | scripts/post.sh:34 |
| HIGH | Prompt Injection Risk in Scheduled Content Payload — The `schedule.sh` script constructs a JSON payload for the `cron` tool where the user-provided `$CONTENT` is directly embedded into the `text` field of a `systemEvent`. If this `systemEvent`'s `text` is later processed by an LLM (e.g., for logging, summarization, or further action), a malicious `$CONTENT` containing prompt injection instructions could manipulate the downstream LLM's behavior. Remediation: When passing user-controlled content to a `systemEvent` that might be processed by an LLM, ensure the content is properly sanitized or encapsulated to prevent prompt injection. Consider using a dedicated field for 'user_content' that is explicitly marked as untrusted, or implement a robust input sanitization/escaping mechanism before embedding it into LLM-facing prompts. | LLM | scripts/schedule.sh:40 |
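The path-validation remediation for the first finding can be sketched as a small shell helper. This is a minimal illustration, assuming GNU `realpath` is available; `UPLOAD_DIR` and `validate_upload_path` are hypothetical names, not part of the skill's actual scripts.

```shell
# Allow-listed directory for image uploads (hypothetical default).
UPLOAD_DIR="${UPLOAD_DIR:-$HOME/linkedin-uploads}"

# Echo the canonical path if it lies inside UPLOAD_DIR; fail otherwise.
# realpath -m collapses ".." and symlink-free relative components, so
# traversal tricks like "$UPLOAD_DIR/../../etc/passwd" are caught.
validate_upload_path() {
  local resolved
  resolved="$(realpath -m -- "$1")" || return 1
  case "$resolved" in
    "$UPLOAD_DIR"/*) printf '%s\n' "$resolved" ;;
    *)
      echo "refusing upload: '$1' resolves outside $UPLOAD_DIR" >&2
      return 1
      ;;
  esac
}
```

A caller would run `validate_upload_path "$IMAGE_PATH"` before handing the path to the `browser` tool, and additionally prompt the user to confirm the upload.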
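For the second finding, the key fix is to stop string-interpolating `$CONTENT` into the JSON payload. A sketch using `jq` (assuming it is available; `build_payload` is a hypothetical helper, and the field names mirror the finding's description of the cron payload):

```shell
# Build the cron payload with jq so user content is JSON-encoded rather
# than interpolated into the JSON string by the shell.
build_payload() {
  local content="$1" run_at="$2"
  # --arg JSON-escapes the value, so quotes, newlines, and braces in
  # user content cannot break out of the "text" string. The "untrusted"
  # flag marking the field for downstream LLMs is an assumed convention.
  jq -cn --arg text "$content" --arg at "$run_at" \
    '{schedule: $at, systemEvent: {text: $text, untrusted: true}}'
}
```

Even with correct escaping, a downstream LLM still sees the raw text, so the payload should also be labeled untrusted and any consumer instructed not to follow directives embedded in it.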
Embed Code
[SkillShield Report](https://skillshield.io/report/4c14a842c3e64f64)
Powered by SkillShield