Trust Assessment
post-at received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via unsanitized arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via unsanitized arguments | LLM | SKILL.md:67 |

The skill documentation demonstrates calling the `post-at` CLI tool with arguments, including a `--description` argument that is likely to accept user-provided text. If the LLM constructs the shell command by directly interpolating untrusted user input into these arguments without proper shell escaping, an attacker could inject shell metacharacters (e.g., `;`, `&`, `|`, `$(...)`) to execute arbitrary commands on the host system. This is a common vulnerability pattern when LLMs interact with external tools via shell commands. The LLM orchestrating this skill must ensure all user-provided arguments passed to the `post-at` command are properly shell-escaped before execution. Implement robust input sanitization and shell-escaping mechanisms (e.g., `shlex.quote` in Python or equivalent functions in other languages) to prevent malicious input from being interpreted as shell commands.
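The mitigation described in this finding can be sketched in Python. This is a minimal illustration, not part of the skill itself: the `--description` flag comes from the finding above, but the rest of the `post-at` CLI surface is assumed here.

```python
import shlex
import subprocess


def run_post_at(description: str) -> None:
    """Safest option: pass argv as a list (default shell=False),
    so no shell ever parses the untrusted value at all."""
    subprocess.run(["post-at", "--description", description], check=True)


def build_post_at_command(description: str) -> str:
    """If a single shell string is unavoidable, quote each untrusted
    argument so metacharacters (;, &, |, $(...)) stay literal text."""
    return f"post-at --description {shlex.quote(description)}"


# An injection attempt is neutralized into a quoted literal argument:
build_post_at_command("x; rm -rf /")  # "post-at --description 'x; rm -rf /'"
```

Preferring the list form of `subprocess.run` removes the shell from the picture entirely; `shlex.quote` is the fallback when a shell string must be built.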
Embed Code
[](https://skillshield.io/report/d8e977a4dacca809)
Powered by SkillShield