Trust Assessment
pakat received a trust score of 61/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. Key findings include: hidden network beacons / undisclosed telemetry; potential command injection via unsanitized user input in `curl` commands; and potential command injection via unsanitized user input in `base64` encoding.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Hidden network beacons / undisclosed telemetry.** Evidence: command output is piped through base64 encoding. Remediation: remove undisclosed network calls and telemetry; all outbound communication should be documented and necessary for the skill's stated purpose. BCC injection in email tools is almost always malicious. | Manifest | skills/hadifarnoud/pakat/SKILL.md:72 |
| HIGH | **Potential command injection via unsanitized user input in `curl` commands.** The skill provides `curl` command examples that include placeholders (e.g., `{endpoint}`, `{list_uid}`, form field values). If the host LLM constructs these shell commands by inserting untrusted user input into the placeholders or form field values without sanitization or shell quoting, an attacker could inject arbitrary shell commands, potentially leading to data exfiltration, system compromise, or denial of service. Remediation: rigorously sanitize and properly quote all user-provided inputs used to construct shell commands (e.g., URL-encode URL path segments, shell-quote string arguments) before execution. | LLM | SKILL.md:28 |
| HIGH | **Potential command injection via unsanitized user input in `base64` encoding.** The skill shows how to base64-encode content using `echo` and `base64`. If the host LLM constructs the `echo` argument from untrusted user input without sanitization, an attacker could inject arbitrary shell commands. Remediation: rigorously sanitize and properly quote any user-provided content passed to `echo` for base64 encoding. | LLM | SKILL.md:60 |
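To illustrate the `curl` finding, the sketch below shows the quoting discipline the report calls for. The URL and the `list_uid` value are hypothetical stand-ins for the skill's `{placeholders}`; the point is that untrusted values must reach the child shell as data (positional parameters expanded inside double quotes), never by string interpolation.

```shell
#!/bin/sh
# Hypothetical attacker-controlled value for the {list_uid} placeholder.
list_uid='demo$(touch /tmp/skillshield_pwned)'

# Unsafe pattern (do NOT run): interpolating the value into a command string,
#   sh -c "curl https://api.example.com/lists/$list_uid"
# would execute the embedded $(...) command substitution.

# Safe pattern: pass the untrusted value as a positional parameter. Inside the
# child shell it is expanded within double quotes and treated as literal data.
safe_url=$(sh -c 'printf "https://api.example.com/lists/%s" "$1"' _ "$list_uid")
printf '%s\n' "$safe_url"
# The payload is carried as inert text; /tmp/skillshield_pwned is never created.
```

The same pattern applies when the placeholder feeds a `curl` argument rather than the URL path; in that case the value should additionally be URL-encoded, as the finding notes.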
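For the `base64` finding, a minimal sketch of the safe alternative to the skill's `echo | base64` example. The `content` string is a hypothetical attacker-controlled value; `printf '%s' "$var"` treats it purely as data, whereas rebuilding the command as an unquoted string would execute the embedded substitutions.

```shell
#!/bin/sh
# Hypothetical untrusted content containing shell metacharacters.
content='hello $(id) `whoami`'

# Unsafe pattern (do NOT run): sh -c "echo $content | base64"
# would execute $(id) and `whoami` before encoding anything.

# Safe pattern: quote the expansion and use printf, so the value is
# passed to base64 verbatim via stdin.
encoded=$(printf '%s' "$content" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)   # round-trip check
printf '%s\n' "$decoded"
```

`base64 -d` here is the GNU coreutils/busybox decode flag; BSD/macOS `base64` uses `-D` instead.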
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/f1088be2634656f5)
Powered by SkillShield