Trust Assessment
`pet` received a trust score of 67/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings include Arbitrary Command Execution via `pet exec` and Potential Data Exfiltration/Credential Harvesting via Gist Sync.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary Command Execution via `pet exec`: The `pet` tool, which this skill wraps, stores and executes user-defined command snippets, and `pet exec` runs them directly. If an LLM agent is prompted to create or execute a snippet containing malicious commands, or if an attacker can tamper with `~/.config/pet/snippet.toml`, arbitrary commands can run on the host system. Because this is the tool's intended functionality, the skill is a high-risk vector for command injection unless used with extreme caution. Mitigation: the LLM agent should never create or execute snippets based on untrusted input without rigorous sanitization and validation; consider sandboxing the execution environment for `pet` commands to limit potential damage; and warn users about the inherent risk of executing arbitrary commands. | LLM | SKILL.md:24 |
| MEDIUM | Potential Data Exfiltration/Credential Harvesting via Gist Sync: The `pet sync` command synchronizes snippets with a GitHub Gist. If sensitive information (e.g., API keys, passwords, or confidential data) is stored in snippets, or if the GitHub Gist token configured in `~/.config/pet/config.toml` is compromised, this feature could enable unauthorized data exfiltration or credential harvesting. The skill description does not explain how Gist credentials are stored or managed by `pet`. Mitigation: advise users to avoid storing sensitive information in snippets, especially when `pet sync` is enabled; store the GitHub Gist token securely (e.g., via environment variables or a credential manager) with minimal necessary permissions; and ensure the LLM agent understands the implications of syncing and avoids syncing sensitive data. | LLM | SKILL.md:30 |
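The critical finding's mitigation (sanitizing snippets before execution) could be sketched as a pre-execution guard that the agent calls before anything reaches `pet exec`. This is a minimal illustrative sketch, not part of `pet` or SkillShield; the function names and deny-list patterns are assumptions, and a real deny-list would need far broader coverage.

```python
import re

# Hypothetical deny-list of high-risk shell patterns (illustrative, not exhaustive).
DENY_PATTERNS = [
    r"\brm\s+-rf\b",              # recursive force delete
    r"curl\s+[^|]*\|\s*(ba)?sh",  # pipe a remote script straight into a shell
    r"wget\s+[^|]*\|\s*(ba)?sh",
    r">\s*/dev/sd[a-z]",          # raw writes to block devices
]

def is_snippet_safe(command: str) -> bool:
    """Return False if the snippet matches any known-dangerous pattern."""
    return not any(re.search(p, command) for p in DENY_PATTERNS)

def guarded_exec(command: str) -> None:
    """Refuse flagged snippets; otherwise hand off to pet-style execution."""
    if not is_snippet_safe(command):
        raise PermissionError(f"Blocked high-risk snippet: {command!r}")
    # subprocess.run(["sh", "-c", command])  # real execution would go here
```

Note that deny-lists are bypassable by construction; sandboxed execution (containers, restricted shells) remains the stronger control for the agent side.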
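The medium finding's advice to keep secrets out of synced snippets could likewise be enforced with a pre-sync scan of the snippet store's content. The patterns below are illustrative assumptions; production scanners use much larger rule sets.

```python
import re

# Hypothetical secret signatures; gitleaks-style tools ship hundreds of rules.
SECRET_PATTERNS = {
    "GitHub token": r"gh[pousr]_[A-Za-z0-9]{20,}",
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "Generic assignment": r"(?i)(api[_-]?key|password|secret)\s*[=:]\s*\S+",
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in snippet file content."""
    return [name for name, pat in SECRET_PATTERNS.items() if re.search(pat, text)]

def safe_to_sync(snippet_toml: str) -> bool:
    """True only if no secret-like material appears in the snippet store."""
    return not find_secrets(snippet_toml)
```

An agent wrapper could read `~/.config/pet/snippet.toml`, call `safe_to_sync`, and skip `pet sync` (or warn the user) when the check fails.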