Trust Assessment
fieldy received a trust score of 19/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 1 critical, 3 high, 1 medium, and 1 low severity. Key findings include Network egress to untrusted endpoints, Sensitive path access: AI agent config, Node lockfile missing.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/mrzilvis/fieldy-ai-webhook/SKILL.md:73 | |
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/mrzilvis/fieldy-ai-webhook/SKILL.md:25 | |
| HIGH | Path Traversal in Log File Creation The `fieldy-webhook.js` script constructs a log file path using `payloadDateStr` from the incoming webhook `inputData`. An attacker can manipulate `payloadDateStr` (e.g., `{"date": "../../../tmp/malicious"}`) to inject path traversal sequences. This allows the script to write log entries to arbitrary locations on the filesystem, potentially overwriting critical files or exfiltrating data by writing to publicly accessible directories. The content written to the file (`text`, `speaker`) is also user-controlled, allowing for arbitrary content injection. Sanitize `payloadDateStr` to ensure `dateFilename` does not contain any path separators or special characters. A robust approach would be to use `path.basename(safeDateObj.toISOString().split("T")[0])` or a strict regex validation for the date format before using it in `path.join`. Ensure the `logDirInside` is strictly controlled and not user-modifiable. | LLM | src/fieldy-webhook.js:66 | |
| HIGH | Prompt Injection via User-Controlled Message The `commandText` variable, derived directly from the user's `transcription` in the incoming webhook, is used as the `message` field when triggering the agent. This allows an attacker to inject arbitrary instructions or malicious prompts into the LLM, potentially leading to unintended actions, data exposure, or manipulation of the agent's behavior. Implement robust input validation and sanitization for `commandText` before it is passed to the LLM. Consider using a separate system prompt to define the agent's behavior and strictly limit the user's input to data, not instructions. Techniques like instruction filtering or using a dedicated LLM guardrail can mitigate this risk. | LLM | src/fieldy-webhook.js:90 | |
| MEDIUM | Broad File System Access The `fieldy-webhook.js` script utilizes the `fs` module for various file system operations, including checking directory existence (`fs.existsSync`), creating directories (`fs.mkdirSync`), and appending to files (`fs.appendFileSync`). While some operations are for legitimate logging, the broad access to the file system, especially when combined with the potential for path traversal (as identified in another finding), poses a significant risk. The `SKILL.md` also suggests placing the script in `/root/clawd/skills/fieldy/scripts`, implying it might run with elevated privileges or in a sensitive system location. Adhere to the principle of least privilege. Restrict the script's execution environment to only have access to the directories and files absolutely necessary for its operation. Avoid running the script with root or highly privileged user accounts. Implement strict input validation for any paths or filenames derived from user input. | LLM | src/fieldy-webhook.js:1 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/mrzilvis/fieldy-ai-webhook/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/bbe9dde548319c0f)
Powered by SkillShield