Security Audit
sundial-org/awesome-openclaw-skills:skills/anylist
github.com/sundial-org/awesome-openclaw-skills

Trust Assessment
sundial-org/awesome-openclaw-skills:skills/anylist received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings: the skill requires sensitive credentials in environment variables, unsanitized user input creates a potential for command injection, and reliance on the external `anylist-cli` binary introduces supply chain risk.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on March 3, 2026 (commit 6d998e00). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill requires sensitive credentials in environment variables.** The skill's setup instructions explicitly state that `ANYLIST_EMAIL` and `ANYLIST_PASSWORD` should be set as environment variables for non-interactive use. This makes sensitive user credentials accessible to the `anylist` binary and potentially to the AI agent's execution environment. An attacker could craft a prompt instructing the agent to read and exfiltrate these environment variables, or a compromised `anylist` binary could steal them. *Recommendation:* Avoid storing sensitive credentials directly in environment variables accessible to the agent. Consider a secure secrets management system or an OAuth flow that issues short-lived tokens, or ensure the `anylist` binary handles credentials securely without exposing them to the general environment. If environment variables are unavoidable, sandbox the agent's execution environment so it cannot read or exfiltrate arbitrary environment variables. | LLM | SKILL.md:18 |
| HIGH | **Potential for command injection through unsanitized user input.** The skill describes how user input (e.g., item names, list names, categories, quantities) is used to construct shell commands for the `anylist` binary. If the AI agent directly interpolates user-provided strings into these shell commands without sanitization or escaping, a malicious user could inject arbitrary shell commands; for example, an item name like `Milk" && rm -rf /` could lead to execution of `rm -rf /`. The `SKILL.md` provides example commands but does not specify how the agent should handle user input to prevent such injections. *Recommendation:* Strictly sanitize and escape all user-provided input before incorporating it into shell commands. Use a robust shell-escaping library, or pass all arguments as distinct parameters to the `anylist` binary rather than concatenating them into a single shell string. | LLM | SKILL.md:30 |
| HIGH | **Reliance on external `anylist-cli` binary introduces supply chain risk.** The skill requires the `anylist-cli` package to be installed globally via `npm install -g anylist-cli`, so the integrity of that package and its dependencies is critical. A compromise of `anylist-cli` (e.g., malicious code injection, typosquatting, or a compromised maintainer) could lead to arbitrary code execution on the system where the skill runs. The manifest also lists `anylist` as a required binary. *Recommendation:* Pin exact dependency versions, use package integrity checks (e.g., `npm audit`, checksums), regularly audit `anylist-cli` and its dependencies for vulnerabilities, and consider sandboxed execution environments for external binaries. | LLM | SKILL.md:11 |
| MEDIUM | **Skill grants full control over linked AnyList account.** Through the `anylist` CLI, the skill can add, remove, check, and uncheck items, and clear entire lists. While this is the intended functionality, a compromised agent (e.g., via prompt injection) could be instructed to perform destructive actions on the user's shopping lists without explicit user confirmation for each action. *Recommendation:* Require explicit user approval for sensitive or destructive actions such as 'clear all items' or 'remove list', break the skill into more granular permissions where possible, and harden the agent's decision-making against malicious prompts. | LLM | SKILL.md:24 |
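The critical finding's exposure can be narrowed even when the environment variables themselves cannot be eliminated. A minimal sketch in Python, assuming the agent launches the CLI via `subprocess`; the variable names come from the skill, but the allow-list pattern is a general hardening technique, not something the skill itself specifies:

```python
import os
import subprocess

# Allow-list: forward only what the anylist binary plausibly needs,
# instead of letting the child inherit the agent's full environment.
ALLOWED_VARS = ("PATH", "HOME", "ANYLIST_EMAIL", "ANYLIST_PASSWORD")

def minimal_env() -> dict:
    """Build a reduced environment for the child process."""
    return {k: os.environ[k] for k in ALLOWED_VARS if k in os.environ}

def run_anylist(args: list) -> subprocess.CompletedProcess:
    # env=... replaces, rather than extends, the inherited environment,
    # so unrelated secrets in the agent's environment never reach the CLI.
    return subprocess.run(["anylist", *args], env=minimal_env(),
                          capture_output=True, text=True)
```

This does not make the credentials safe on its own, but it limits what a compromised binary or injected prompt can harvest from the child process's environment.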
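For the command-injection finding, the safest pattern is to never build a shell string at all: pass each user-supplied value as its own argv element, and fall back to `shlex.quote` only when a shell string is unavoidable. A sketch, where the `add` subcommand and flag names are illustrative rather than taken from the real CLI:

```python
import shlex

def add_item_argv(list_name: str, item: str) -> list:
    # Each value is one argv element; nothing is parsed by a shell,
    # so quotes, '&&', ';' etc. in user input stay inert data.
    return ["anylist", "add", "--list", list_name, "--item", item]

malicious = 'Milk" && rm -rf /'
argv = add_item_argv("Groceries", malicious)
assert argv[-1] == malicious  # payload is a single, harmless argument

# If a shell string truly cannot be avoided, quote every piece:
shell_cmd = " ".join(shlex.quote(a) for a in argv)
```

Passing `argv` to `subprocess.run` (without `shell=True`) means the example payload from the finding is delivered to the binary as literal text, never interpreted by a shell.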
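The medium finding recommends user confirmation for destructive operations. One way to sketch that gate in Python; the subcommand names here are hypothetical and would need to be mapped to whatever the real CLI calls them:

```python
# Hypothetical names for the destructive operations the audit mentions.
DESTRUCTIVE_SUBCOMMANDS = {"clear", "remove-list"}

def needs_confirmation(argv: list) -> bool:
    return len(argv) > 1 and argv[1] in DESTRUCTIVE_SUBCOMMANDS

def run_guarded(argv: list, confirm) -> bool:
    """Run non-destructive commands freely; destructive ones only
    if the confirm() callback (e.g. a prompt to the user) approves."""
    if needs_confirmation(argv) and not confirm():
        return False  # refused: destructive action without approval
    # subprocess.run(argv)  # actual invocation elided in this sketch
    return True
```

The key design choice is that the gate sits outside the agent's prompt-driven reasoning, so a prompt injection cannot talk the agent out of asking.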