Trust Assessment
warden-app received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include an unvalidated user-provided URL used for browser automation, and an instruction for the agent to create and potentially execute unvalidated scripts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unvalidated user-provided URL for browser automation.** The skill instructs the agent to "Open the Warden App URL (user-provided)". Navigating to an arbitrary, unvalidated URL can expose the browser automation environment to malicious websites, enabling phishing, cross-site scripting (XSS), or attempts to exploit browser vulnerabilities, potentially compromising the agent's environment or exfiltrating data. Although the skill is a rubric, this is an explicit instruction to perform a high-risk action with untrusted input. Remediation: implement strict URL validation (e.g., an allowlist of trusted domains, URL sanitization) before navigating, and run browser automation in an environment sandboxed and isolated from the agent's core environment and sensitive data. | LLM | SKILL.md:30 |
| MEDIUM | **Agent instructed to create and potentially execute unvalidated scripts.** The skill instructs the agent to "Create small deterministic scripts only when they reduce errors". Even though the intent is deterministic scripts for tasks like parsing, an LLM generating and executing code influenced by untrusted input risks command injection or arbitrary code execution if the output is not sandboxed and validated; the skill specifies no safeguards. Remediation: require generated scripts to run in a strictly sandboxed environment, undergo security review, and be barred from system calls, local file access, and network requests outside approved channels. Consider a predefined set of safe functions or templates for script generation. | LLM | SKILL.md:63 |
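The allowlist check recommended for the HIGH finding can be sketched as follows. This is a minimal illustration, not part of the skill itself: the domain names and the `is_safe_url` helper are hypothetical, and a real deployment would substitute its own trusted hosts.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted domains -- replace with your own.
ALLOWED_HOSTS = {"warden.example.com", "app.warden.example.com"}

def is_safe_url(url: str) -> bool:
    """Accept only https URLs whose host is on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    # urlparse().hostname strips userinfo (user@host tricks) and the port,
    # and lowercases the host, so a plain set lookup is sufficient here.
    host = parsed.hostname or ""
    return host in ALLOWED_HOSTS
```

The agent would call this gate before handing any user-provided URL to the browser automation layer, refusing to navigate on a `False` result.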
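For the MEDIUM finding, constrained execution of a generated script might look like the sketch below, assuming a Python host. It only limits wall-clock time, inherited environment variables, and shell interpretation; it is not real isolation and should be paired with a container or OS-level sandbox.

```python
import os
import subprocess
import sys
import tempfile

def run_sandboxed(script: str, timeout: float = 5.0) -> str:
    """Run a generated Python script with a timeout, no shell, and an empty env.

    Minimal sketch only: strips inherited environment variables (secrets),
    avoids shell interpretation, and kills runaway scripts, but does NOT
    block file or network access -- add OS-level sandboxing for that.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],     # argv list, no shell => no injection
            capture_output=True,
            text=True,
            timeout=timeout,            # terminate runaway scripts
            env={},                     # no inherited secrets
            cwd=tempfile.gettempdir(),  # keep execution out of the project tree
        )
        return result.stdout
    finally:
        os.unlink(path)
```

A predefined template or safe-function approach, as the finding suggests, further narrows what the generated script can express before it ever reaches this runner.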