Security Audit
anysiteio/agent-skills:skills/anysite-cli
github.com/anysiteio/agent-skills

Trust Assessment
anysiteio/agent-skills:skills/anysite-cli received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 2 medium, and 1 low severity. Key findings include Potential Command Injection via SQL Query, Potential Data Exfiltration via Webhook Export, and Credential Exposure via CLI Arguments and API Key Retrieval.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 69/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit 5cefedb0). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via SQL Query | LLM | SKILL.md:319 |
| MEDIUM | Potential Data Exfiltration via Webhook Export | LLM | SKILL.md:210 |
| MEDIUM | Credential Exposure via CLI Arguments and API Key Retrieval | LLM | SKILL.md:257 |
| LOW | Excessive Permissions: Cron Scheduling Capability | LLM | SKILL.md:346 |

HIGH: Potential Command Injection via SQL Query (SKILL.md:319)
The skill describes the `anysite dataset query --sql` command, which directly executes SQL queries provided as an argument. If an AI agent constructs this command using untrusted user input for the `--sql` parameter, the result is SQL injection against the underlying DuckDB instance or connected external databases. Malicious SQL could read, modify, or delete data, or potentially execute arbitrary code depending on database configuration and privileges.
Mitigation: When constructing `anysite dataset query --sql` commands, sanitize or parameterize any part of the SQL query derived from untrusted user input. Avoid direct concatenation of user input into SQL strings. If direct user input is necessary, validate it against an allowlist of safe SQL constructs or with a dedicated SQL parsing library.

MEDIUM: Potential Data Exfiltration via Webhook Export (SKILL.md:210)
The `dataset.yaml` configuration supports an `export` type of `webhook`, which sends collected data to an arbitrary URL, including custom headers. While this is a legitimate feature, if an AI agent is prompted to create or modify a `dataset.yaml` based on untrusted user input, a malicious actor could specify an external URL to exfiltrate sensitive data collected by the agent. The skill documents this capability without inherent safeguards against malicious URLs.
Mitigation: Strictly validate and allowlist webhook URLs and headers wherever untrusted user input can influence them. Consider restricting webhook destinations to a predefined set of trusted domains, or requiring explicit user confirmation for new, untrusted destinations. Ensure sensitive data is not included in webhook payloads unless explicitly intended and secured.

MEDIUM: Credential Exposure via CLI Arguments and API Key Retrieval (SKILL.md:257)
The skill describes commands such as `anysite db add ... --password secret`, where sensitive credentials are passed directly as command-line arguments. This is insecure: passwords can be exposed in shell history, process lists (`ps aux`), or logs. Additionally, `anysite config get api_key` retrieves the configured API key; an agent instructed by untrusted input to run this command and transmit its output would enable credential harvesting.
Mitigation: Use environment variables for sensitive credentials (e.g., `--password-env PGPASS`) instead of command-line arguments, as the skill already shows as an alternative. Ensure the agent never outputs or transmits the API key to untrusted channels or users, and implement strict access controls around commands that expose credentials.

LOW: Excessive Permissions: Cron Scheduling Capability (SKILL.md:346)
The `anysite dataset schedule` command generates cron entries for automated task execution, implying the agent can modify the system's cron table. While this is a legitimate feature, the ability to schedule system-level tasks is a significant privilege: combined with a command injection vulnerability, it would allow persistent execution of malicious commands.
Mitigation: Sandbox the agent's execution environment and strictly control and monitor its access to system-level scheduling mechanisms such as cron. Where possible, prevent the agent from writing to cron tables directly, or require explicit human approval for any new scheduled task. Validate all parameters passed to scheduling commands to prevent injection.
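The mitigation for the command-injection finding can be sketched in Python. The `run_dataset_query` helper and its identifier allowlist are illustrative assumptions, not part of the skill; the point is that untrusted input is validated before it is spliced into the SQL string, and the command is run without a shell:

```python
import re
import subprocess

# Hypothetical allowlist: only permit a known-safe query shape where the
# untrusted part is a bare identifier (e.g. a table name).
SAFE_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def run_dataset_query(table: str) -> str:
    """Build an `anysite dataset query --sql` command from untrusted input.

    `table` comes from the user; instead of concatenating it into SQL
    verbatim, reject anything that is not a simple identifier.
    """
    if not SAFE_IDENTIFIER.match(table):
        raise ValueError(f"rejected unsafe identifier: {table!r}")
    sql = f"SELECT * FROM {table} LIMIT 10"
    # Passing argv as a list (no shell=True) also avoids shell injection.
    result = subprocess.run(
        ["anysite", "dataset", "query", "--sql", sql],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

An input such as `users; DROP TABLE users` fails the identifier check and never reaches the CLI.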
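The webhook-export mitigation amounts to a gate in front of any URL written into `dataset.yaml`. A minimal sketch, assuming a hypothetical operator-maintained allowlist (`TRUSTED_WEBHOOK_HOSTS` is not part of the skill):

```python
from urllib.parse import urlparse

# Hypothetical set of destinations the operator has approved in advance.
TRUSTED_WEBHOOK_HOSTS = {"hooks.internal.example.com"}

def validate_webhook_url(url: str) -> str:
    """Accept a webhook export URL only if it is HTTPS and allowlisted."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("webhook export must use https")
    if parsed.hostname not in TRUSTED_WEBHOOK_HOSTS:
        raise ValueError(f"webhook host not allowlisted: {parsed.hostname!r}")
    return url
```

Anything an untrusted prompt smuggles in, such as `https://attacker.example/collect`, is rejected before the configuration is written.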
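The credential-exposure mitigation (the `--password-env` alternative the skill itself documents) keeps the secret out of argv entirely, so it never shows up in shell history or `ps` output. A sketch of how an agent wrapper might do this; the helper names are illustrative:

```python
import os
import subprocess

def build_db_add_command(name: str) -> list[str]:
    # The password is deliberately absent from the argv; the CLI reads it
    # from the environment variable named by --password-env.
    return ["anysite", "db", "add", name, "--password-env", "PGPASS"]

def add_database(name: str, password: str) -> None:
    """Register a database without putting the password on the command line."""
    env = dict(os.environ, PGPASS=password)  # visible only to the child process
    subprocess.run(build_db_add_command(name), env=env, check=True)
```

The same pattern applies in reverse to `anysite config get api_key`: its output should be consumed in-process, never echoed back to an untrusted channel.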
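For the cron finding, one concrete form of "validate all parameters passed to scheduling commands" is rejecting anything that is not a plain five-field cron expression before it reaches the crontab. This is a conservative hypothetical check, not a full cron parser:

```python
import re

# A cron field made only of digits, wildcards, steps, ranges, and lists.
CRON_FIELD = re.compile(r"^[\d*/,\-]+$")

def validate_cron_expr(expr: str) -> str:
    """Reject cron expressions that could smuggle shell metacharacters."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(fields)}")
    for field in fields:
        if not CRON_FIELD.match(field):
            raise ValueError(f"rejected cron field: {field!r}")
    return expr
```

`0 3 * * *` passes, while an injection attempt like `0 3 * * * ; rm -rf /` fails the field-count check and `$(...)` substitutions fail the character check.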
[Full report](https://skillshield.io/report/69d9eab16c9e1faf)