Trust Assessment
linkedin-automator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 10 findings: 3 critical, 7 high, 0 medium, and 0 low severity. Key findings include Untrusted content directly embedded in LLM instructions, Untrusted content directly embedded in LLM instructions and cron payload, Untrusted image path embedded in LLM instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings10
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted content directly embedded in LLM instructions The `CONTENT` variable, which comes from untrusted user input, is directly interpolated into the instructions provided to the LLM. This allows an attacker to inject malicious instructions into the LLM's prompt, potentially leading to arbitrary actions within the browser or other tools. Sanitize or escape user-provided content before embedding it into LLM instructions. Ideally, pass content as a structured parameter to the LLM rather than inline text. If inline, instruct the LLM to treat the content as literal text only. | LLM | scripts/post.sh:30 | |
| CRITICAL | Untrusted content directly embedded in LLM instructions and cron payload The `CONTENT` variable, from untrusted user input, is directly interpolated into the instructions for the LLM and also into the `payload.text` field of the cron job JSON. This creates two critical prompt injection vectors, allowing an attacker to manipulate the LLM's behavior directly or via a scheduled task. Sanitize or escape user-provided content before embedding it into LLM instructions or structured payloads. For cron payloads, ensure the LLM interprets the `text` field as literal content, not instructions. | LLM | scripts/schedule.sh:30 | |
| CRITICAL | Untrusted content directly embedded in LLM instructions and cron payload The `CONTENT` variable, from untrusted user input, is directly interpolated into the instructions for the LLM and also into the `payload.text` field of the cron job JSON. This creates two critical prompt injection vectors, allowing an attacker to manipulate the LLM's behavior directly or via a scheduled task. Sanitize or escape user-provided content before embedding it into LLM instructions or structured payloads. For cron payloads, ensure the LLM interprets the `text` field as literal content, not instructions. | LLM | scripts/schedule.sh:42 | |
| HIGH | Untrusted image path embedded in LLM instructions The `IMAGE` variable, which can contain an untrusted file path, is directly interpolated into the instructions for the LLM. An attacker could craft the image path to include malicious instructions for the LLM, potentially leading to arbitrary actions. Sanitize or escape user-provided file paths before embedding them into LLM instructions. Instruct the LLM to treat file paths as literal paths only. | LLM | scripts/post.sh:34 | |
| HIGH | Untrusted image path embedded in LLM instructions The `IMAGE` variable, which can contain an untrusted file path, is directly interpolated into the instructions for the LLM. An attacker could craft the image path to include malicious instructions for the LLM, potentially leading to arbitrary actions. Sanitize or escape user-provided file paths before embedding them into LLM instructions. Instruct the LLM to treat file paths as literal paths only. | LLM | scripts/schedule.sh:32 | |
| HIGH | Untrusted input variable interpolated into LLM instructions The `DAYS` variable, from untrusted user input, is directly interpolated into the descriptive text provided to the LLM. A sophisticated attacker could craft this input to include instructions that manipulate the LLM's behavior. Sanitize or escape user-provided inputs before embedding them into LLM instructions. Instruct the LLM to treat such interpolated values as literal data, not commands or instructions. | LLM | scripts/analytics.sh:9 | |
| HIGH | Untrusted input variable interpolated into LLM instructions The `LIMIT` variable, from untrusted user input, is directly interpolated into the descriptive text provided to the LLM. A sophisticated attacker could craft this input to include instructions that manipulate the LLM's behavior. Sanitize or escape user-provided inputs before embedding them into LLM instructions. Instruct the LLM to treat such interpolated values as literal data, not commands or instructions. | LLM | scripts/engage.sh:9 | |
| HIGH | Untrusted input variable interpolated into LLM instructions The `TOPIC` variable, from untrusted user input, is directly interpolated into the descriptive text provided to the LLM. A sophisticated attacker could craft this input to include instructions that manipulate the LLM's behavior. Sanitize or escape user-provided inputs before embedding them into LLM instructions. Instruct the LLM to treat such interpolated values as literal data, not commands or instructions. | LLM | scripts/engage.sh:15 | |
| HIGH | Untrusted input variable interpolated into LLM instructions The `TOPIC` variable, from untrusted user input, is directly interpolated into the descriptive text provided to the LLM. A sophisticated attacker could craft this input to include instructions that manipulate the LLM's behavior. Sanitize or escape user-provided inputs before embedding them into LLM instructions. Instruct the LLM to treat such interpolated values as literal data, not commands or instructions. | LLM | scripts/ideas.sh:9 | |
| HIGH | Untrusted input variable interpolated into LLM instructions The `TOPIC` variable, from untrusted user input, is directly interpolated into the descriptive text provided to the LLM. A sophisticated attacker could craft this input to include instructions that manipulate the LLM's behavior. Sanitize or escape user-provided inputs before embedding them into LLM instructions. Instruct the LLM to treat such interpolated values as literal data, not commands or instructions. | LLM | scripts/ideas.sh:39 |
Scan History
Embed Code
[](https://skillshield.io/report/d3a953389fc682ba)
Powered by SkillShield