Trust Assessment
todozi received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 0 medium, and 1 low severity. Key findings include: the agent can create or update webhooks pointing to arbitrary URLs, the agent can update arbitrary user preferences, and user-provided text fields may enable downstream prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 53/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 9c1b8e80). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Agent can create/update webhooks to arbitrary URLs.** The `todozi_create_webhook` and `todozi_update_webhook` tools allow the AI agent to specify an arbitrary `url` parameter. An attacker could craft a prompt instructing the agent to create or update a webhook pointing to an attacker-controlled server, enabling exfiltration of data (e.g., `item.created`, `item.completed` events) from the Todozi service to the attacker's endpoint and bypassing typical security controls. This grants the agent excessive control over external communication and poses a significant data exfiltration risk. Remediation: implement strict validation or an allowlist for webhook URLs (see the allowlist sketch after this table), or require explicit human confirmation for any webhook creation or update, especially when the URL is external or not pre-approved. Consider whether the agent truly needs the capability to create or update webhooks to arbitrary destinations. | LLM | scripts/todozi.py:502 |
| HIGH | **Agent can update arbitrary user preferences.** The `todozi_update_user_preferences` tool accepts a `preferences: Dict[str, Any]` parameter, allowing the AI agent to modify any user preference key-value pair. An attacker could manipulate user settings, leading to altered application behavior, denial of service, or indirect data leakage if certain preferences control sensitive outputs or integrations. The broad `Dict[str, Any]` input grants excessive permissions to the agent, making it a high-risk vector for unauthorized configuration changes. Remediation: restrict the `preferences` dictionary to a predefined set of allowed keys and value types, and validate each preference to ensure it is a legitimate and safe setting (see the preference-validation sketch after this table). Consider whether the agent needs to modify all user preferences or only a specific, limited subset. | LLM | scripts/todozi.py:485 |
| LOW | **User-provided text fields may enable downstream prompt injection.** Tools like `todozi_create_task` (via `title`, `description`), `todozi_search` (via `query`), and `todozi_create_note` (via `content`) accept free-form text input from the user or the LLM. If the Todozi service or any other downstream system processes these text fields with another LLM, an attacker could embed malicious instructions in the text, leading to prompt injection in those downstream systems. While this skill itself is an API client and not directly vulnerable, it acts as a conduit for potentially malicious input. Remediation: apply input screening or validation to these text fields, especially if they are known to be processed by other LLMs (see the free-text screening sketch after this table), and educate users about the risks of embedding instructions in free-form text. The ultimate responsibility for handling such input safely lies with the downstream systems that process it. | LLM | scripts/todozi.py:407 |
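For the critical webhook finding, a minimal sketch of the allowlist approach follows. The `validate_webhook_url` helper and `ALLOWED_WEBHOOK_HOSTS` set are illustrative names and are not part of `scripts/todozi.py`; the idea is to reject a webhook destination before `todozi_create_webhook` or `todozi_update_webhook` ever forwards it to the Todozi API.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts approved to receive Todozi webhook events.
# In a real deployment this would come from configuration, not a literal.
ALLOWED_WEBHOOK_HOSTS = {"hooks.example-corp.com", "automation.example-corp.com"}

def validate_webhook_url(url: str) -> str:
    """Reject webhook URLs that are not HTTPS or not on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"Webhook URL must use https, got {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_WEBHOOK_HOSTS:
        raise ValueError(f"Webhook host {parsed.hostname!r} is not on the allowlist")
    return url
```

Calling this check at the top of the webhook tool handlers (or requiring human confirmation when it fails) closes the arbitrary-destination path without removing the feature outright.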
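For the high-severity preferences finding, a comparable sketch restricts `todozi_update_user_preferences` input to a fixed key set with per-key validation. The `ALLOWED_PREFERENCES` mapping below is a hypothetical example; the actual keys and checks would come from todozi's real preference schema.

```python
from typing import Any, Callable, Dict

# Hypothetical allowlist of preference keys the agent may modify, mapped to
# the validator each value must pass. Assumed keys, not todozi's real schema.
ALLOWED_PREFERENCES: Dict[str, Callable[[Any], bool]] = {
    "theme": lambda v: v in ("light", "dark"),
    "timezone": lambda v: isinstance(v, str) and len(v) < 64,
    "notifications_enabled": lambda v: isinstance(v, bool),
}

def validate_preferences(preferences: Dict[str, Any]) -> Dict[str, Any]:
    """Check a preferences dict before it is forwarded to the Todozi API."""
    unknown = set(preferences) - set(ALLOWED_PREFERENCES)
    if unknown:
        raise ValueError(f"Preference keys not permitted for the agent: {sorted(unknown)}")
    for key, value in preferences.items():
        if not ALLOWED_PREFERENCES[key](value):
            raise ValueError(f"Invalid value for preference {key!r}: {value!r}")
    return preferences
```

This turns the open-ended `Dict[str, Any]` surface into a small, explicitly enumerated one, which also makes it easier to audit which settings the agent can actually touch.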
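For the low-severity prompt-injection finding, full sanitization is not achievable at this layer, so the sketch below only length-caps free-text fields and flags instruction-like patterns for review; downstream systems must still treat the text as untrusted data, as the finding notes. The helper name, length cap, and patterns are all illustrative assumptions.

```python
import logging
import re

logger = logging.getLogger(__name__)

MAX_TEXT_LENGTH = 2000  # assumed cap; tune to Todozi's actual field limits

# Heuristic patterns that often indicate embedded instructions. This is a
# coarse signal for logging and review, not a reliable prompt-injection filter.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def screen_free_text(field_name: str, text: str) -> str:
    """Length-cap and flag user text before it is sent to the Todozi API."""
    if len(text) > MAX_TEXT_LENGTH:
        raise ValueError(f"{field_name} exceeds {MAX_TEXT_LENGTH} characters")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(text):
            # Flag rather than block: downstream consumers remain responsible
            # for treating this text as data, not instructions.
            logger.warning("Possible embedded instructions in %s", field_name)
            break
    return text
```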