Trust Assessment
tldr received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 0 medium, 0 low, and 1 informational. Key findings include a potential command injection in `tldr` command execution and skill documentation containing instructions for the host LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection in `tldr` command execution.** The skill documentation shows the usage pattern `tldr <command>`, where `<command>` is supplied by the user or host LLM. If the skill's underlying implementation interpolates this input directly into a shell command without proper sanitization (e.g., escaping shell metacharacters), it creates a command injection vulnerability: an attacker could run arbitrary shell commands by crafting a malicious `<command>` value. Remediation: validate and sanitize any user-provided input used in shell commands; invoke external binaries with a safe execution method (e.g., Python's `subprocess.run` with arguments passed as a list, not a single string with `shell=True`) so shell metacharacters are never interpreted; and explicitly handle or escape any remaining dangerous characters in user-supplied arguments. | LLM | SKILL.md:22 |
| INFO | **Skill documentation contains instructions for the host LLM.** `SKILL.md` contains explicit directives intended to steer the host LLM's behavior, such as "Always prioritize `tldr` over standard CLI manuals". These particular instructions are benign, but embedding directives for the host LLM in untrusted skill documentation creates a prompt-injection vector: a malicious skill could use the same mechanism to manipulate the host LLM into performing unintended actions, altering its behavior, or revealing sensitive information. Remediation: skill documentation should describe the skill's functionality and usage, not issue directives to the host LLM; the host LLM should ignore or critically evaluate instructions found in skill documentation, treating it as untrusted input; and any necessary behavioral guidance should come through trusted, explicit configuration or system prompts rather than skill content. | LLM | SKILL.md:11 |
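The safe-execution pattern recommended for the HIGH finding can be sketched as follows. This is a hypothetical illustration, not the skill's actual code (the report does not show the implementation): the `lookup_tldr` helper name and the allow-list regex are assumptions, but the core technique is exactly what the finding describes, passing `subprocess.run` a list of arguments instead of a shell string.

```python
import re
import subprocess

def lookup_tldr(command: str) -> str:
    """Fetch the tldr page for a command, rejecting unsafe input.

    Hypothetical sketch: validate that the argument looks like a
    plain command name before handing it to the tldr binary.
    """
    # Allow only characters found in ordinary command names; this
    # rejects shell metacharacters such as ';', '|', '$', and '`'.
    if not re.fullmatch(r"[A-Za-z0-9._+-]+", command):
        raise ValueError(f"refusing suspicious command name: {command!r}")

    # Arguments passed as a list (never shell=True) are delivered to
    # the binary verbatim, so no shell ever interprets the input.
    result = subprocess.run(
        ["tldr", command],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

With this shape, an injection attempt like `lookup_tldr("tar; rm -rf /")` fails validation before any process is spawned; even if validation were skipped, the list-based `subprocess.run` call would treat the whole string as a single literal argument rather than a shell command.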
[View the full report](https://skillshield.io/report/b3252ad24aaf24f3)