Trust Assessment
ds160-autofill received a trust score of 44/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 1 critical, 1 high, 4 medium, and 0 low severity. Key findings include unsafe deserialization / dynamic eval, a missing required `name` field, and LLM-generated JavaScript executed in the browser.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | LLM-generated JavaScript executed in browser | LLM | SKILL.md:196 |
| HIGH | Sensitive page HTML sent to LLM | LLM | SKILL.md:195 |
| MEDIUM | Unsafe deserialization / dynamic eval | Manifest | skills/clulessboy/ds160-autofill/scripts/ds160-filler.js:7 |
| MEDIUM | Unsafe deserialization / dynamic eval | Manifest | skills/clulessboy/ds160-autofill/scripts/ds160-filler.js:247 |
| MEDIUM | Missing required field: name | Static | skills/clulessboy/ds160-autofill/SKILL.md:1 |
| MEDIUM | Unpinned external dependency `js-yaml` | LLM | scripts/ds160-filler.js:6 |

**CRITICAL: LLM-generated JavaScript executed in browser** (LLM layer, SKILL.md:196)

The skill explicitly instructs the agent to ask the LLM for an "alternative selector", which can be a "CSS selector or JavaScript", when an element is not found, and then to "Retry with LLM-suggested selector". If the LLM provides malicious JavaScript, it will be executed within the browser context via the `browser_act` tool's `evaluate` capability, enabling command injection, cross-site scripting (XSS), or other browser-based attacks. Recommended mitigations:

1. Strictly validate LLM output: before executing any LLM-generated code, sanitize and validate it so it contains only safe CSS selectors or a limited, pre-approved set of JavaScript operations.
2. Limit the LLM's capability: restrict the LLM to generating CSS selectors, not arbitrary JavaScript. If JavaScript is absolutely necessary, use a sandboxed environment or a predefined set of safe functions.
3. Use a dedicated tool for element location: instead of asking the LLM for raw JavaScript, use a tool that takes a natural-language description and returns a safe selector or performs the action internally.

**HIGH: Sensitive page HTML sent to LLM** (LLM layer, SKILL.md:195)

The skill instructs the agent to send a "Page HTML snippet" to the LLM when an element is not found. This snippet can contain sensitive user data (personally identifiable information, PII) from the DS-160 form. Sending this data to an external LLM service is a data-exfiltration risk, since the LLM provider may log or process it. Recommended mitigations:

1. Redact sensitive information: before sending any page content to the LLM, thoroughly redact all PII and other sensitive data.
2. Limit context: send only the minimum necessary HTML (e.g., the immediate vicinity of the missing element) rather than a full page snapshot.
3. Use local models or secure APIs: if sensitive data must be processed, use a local, on-premise LLM or an LLM API with strong data-privacy guarantees and clearly understood data-retention policies.

**MEDIUM: Unsafe deserialization / dynamic eval** (Manifest layer, skills/clulessboy/ds160-autofill/scripts/ds160-filler.js:7 and :247)

Decryption followed by code execution. Remove obfuscated code-execution patterns: legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions.

**MEDIUM: Missing required field: name** (Static layer, skills/clulessboy/ds160-autofill/SKILL.md:1)

The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter.

**MEDIUM: Unpinned external dependency `js-yaml`** (LLM layer, scripts/ds160-filler.js:6)

The `scripts/ds160-filler.js` file imports the `js-yaml` library. Without a `package.json` or `package-lock.json`, there is no mechanism to ensure that a specific, known-good version of `js-yaml` is used. This exposes the skill to supply-chain attacks if a malicious version of `js-yaml` is published, or if a critical vulnerability lands in a later version that is automatically pulled. Recommended mitigations:

1. Add a `package.json` that specifies exact versions for all dependencies (e.g., `js-yaml: "3.14.1"`).
2. Generate and commit a `package-lock.json` to ensure deterministic dependency installation.
3. Regularly audit dependencies with tools like `npm audit`.
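The missing-`name` finding is fixed by a one-line frontmatter addition. A minimal sketch (the `description` value is a placeholder, not taken from the skill):

```yaml
---
name: ds160-autofill
description: <one-line summary of what the skill does>
---
```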
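And the unpinned-dependency finding calls for a `package.json` with an exact version, as in this sketch (the `3.14.1` pin is the example version the finding itself suggests; verify the current known-good release before pinning):

```json
{
  "name": "ds160-autofill",
  "private": true,
  "dependencies": {
    "js-yaml": "3.14.1"
  }
}
```

Committing the generated `package-lock.json` alongside it makes installs deterministic, and `npm audit` can then flag known-vulnerable versions.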
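The first mitigation for the critical finding, validating LLM output before retrying, could be sketched as below. This is a minimal illustration, not SkillShield's or the skill's actual logic; `isSafeCssSelector` and its allowlist are hypothetical, and deliberately conservative (parentheses, semicolons, and braces are rejected outright, so pseudo-classes like `:nth-child(2)` would need explicit handling).

```javascript
// Conservative allowlist for a plain CSS selector: names, #ids, .classes,
// [attr="value"], combinators. No parentheses, braces, semicolons, slashes,
// or backticks, so decoded/obfuscated JavaScript cannot slip through.
const SAFE_SELECTOR = /^[A-Za-z0-9_\-#.\[\]="':, >+~*\s]+$/;

function isSafeCssSelector(candidate) {
  if (typeof candidate !== "string") return false;
  if (candidate.length === 0 || candidate.length > 200) return false;
  // Reject obvious JavaScript markers that survive the character allowlist.
  if (candidate.includes("=>")) return false;
  if (/\b(function|eval|document|window)\b/.test(candidate)) return false;
  return SAFE_SELECTOR.test(candidate);
}
```

An agent would only "retry with LLM-suggested selector" when this check passes, and otherwise fall back to asking the user.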
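For the high finding, the redaction step could look like the following sketch. The helper name and patterns are illustrative assumptions; a production redactor would need to cover far more cases (selected options, data attributes, inline text) than these two.

```javascript
// Blank out user-entered values before any HTML snippet leaves the page,
// so DS-160 answers (names, passport numbers) are never sent to the LLM.
function redactFormValues(html) {
  return html
    // Strip value="..." / value='...' attributes on inputs.
    .replace(/value\s*=\s*"[^"]*"/gi, 'value="[REDACTED]"')
    .replace(/value\s*=\s*'[^']*'/gi, "value='[REDACTED]'")
    // Strip text typed into <textarea> elements.
    .replace(/(<textarea\b[^>]*>)[\s\S]*?(<\/textarea>)/gi, "$1[REDACTED]$2");
}
```

Combined with the second mitigation (sending only the immediate vicinity of the missing element), this keeps the snippet both small and free of PII.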
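The two medium eval findings flag decode-then-execute patterns. The general fix is to treat decoded payloads strictly as data: a parser such as `JSON.parse` (or `js-yaml`'s `load`) returns plain objects and cannot run code, unlike `eval` on a decoded string. A minimal sketch with an assumed base64-encoded payload:

```javascript
// An example base64-encoded payload, standing in for whatever the
// obfuscated blobs at ds160-filler.js:7 and :247 decode.
const encoded = Buffer.from(JSON.stringify({ surname: "DOE" })).toString("base64");

// Unsafe (what the scanner flags): eval(decoded) executes arbitrary code.
// Safe alternative: decode, then *parse* -- the result is inert data.
const decoded = Buffer.from(encoded, "base64").toString("utf8");
const data = JSON.parse(decoded);
```

If a payload genuinely cannot be expressed as data, that is itself a signal the skill should not ship it.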
Full report: [skillshield.io/report/f84a841858271e37](https://skillshield.io/report/f84a841858271e37)
Powered by SkillShield