Trust Assessment
meatmarket received a trust score of 76/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 low severity. Key findings include Unsafe deserialization / dynamic eval, Node lockfile missing, Unsanitized user-controlled data in agent-facing output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Unsanitized user-controlled data in agent-facing output The `examples/poll.js` script, which is explicitly suggested for AI agent integration via `stdout` capture, prints user-controlled data directly to the console. Specifically, fields like `row.title`, `row.human_name`, `row.proof_description`, `row.proof_image_url`, and `row.proof_link_url` are embedded in `console.log` statements without sanitization. If an AI agent directly feeds this raw `stdout` into its Large Language Model (LLM) without prior sanitization, a malicious human could craft inputs (e.g., a job title or proof description) containing prompt injection instructions. This could manipulate the agent's LLM, leading to unintended actions, disclosure of sensitive information (data exfiltration), or other security breaches. Modify `examples/poll.js` to output structured JSON (as suggested in the script's integration options) instead of human-readable text. If text output is necessary, ensure all user-controlled strings are rigorously sanitized (e.g., by escaping special characters or removing LLM-specific delimiters) before being printed to `stdout`. Additionally, any AI agent consuming this script's output must implement robust input sanitization before feeding it to its LLM. | LLM | examples/poll.js:68 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/nickjuntilla/meatmarket/examples/poll.js:118 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/nickjuntilla/meatmarket/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/cd5b3a758ac2f11c)
Powered by SkillShield