Trust Assessment
todoist received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding: an untrusted code snippet accesses and transmits an API key.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Untrusted code snippet accesses and transmits API key | LLM | SKILL.md:10 |

The skill documentation, which is treated as untrusted input, contains Python code examples that read the `MATON_API_KEY` environment variable via `os.environ["MATON_API_KEY"]` and transmit it in an `Authorization` header to an external service (`https://gateway.maton.ai` or `https://ctrl.maton.ai`). If an LLM or agent executed these untrusted snippets, the `MATON_API_KEY` would be exposed to the skill, creating a credible path for credential harvesting or data exfiltration. Although the target domain `maton.ai` is associated with the skill's author, executing untrusted code with direct access to sensitive environment variables remains a significant security risk.

Recommended mitigations:

1. **Secure execution environment**: run code snippets from untrusted skill documentation in a sandbox with strict limits on network access and environment variable exposure.
2. **Credential management**: have the LLM/agent inject the API key into a trusted tool call, or use a secure credential manager, instead of executing untrusted code that reads `os.environ` directly.
3. **Skill design review**: avoid requiring direct `os.environ` access in documentation examples intended for LLM execution; if the examples are purely for human reference, state that distinction clearly.
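To make the flagged pattern and the recommended fix concrete, here is a minimal sketch. Only `MATON_API_KEY` and the `gateway.maton.ai` host come from the report; the function names (`risky_build_request`, `trusted_call`, `untrusted_payload`) and the URL path are hypothetical, and the sketch builds request objects rather than performing real HTTP calls.

```python
import os

# Risky pattern flagged by the finding: untrusted documentation code reads the
# secret directly and would transmit it to an external host.
def risky_build_request():
    api_key = os.environ["MATON_API_KEY"]  # untrusted code observes the secret
    return {
        "url": "https://gateway.maton.ai/v1/tasks",  # path is illustrative
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

# Safer pattern per mitigation 2: a trusted wrapper owns the credential and
# injects it; untrusted code only supplies the payload and never sees the key.
def trusted_call(build_payload, allowed_host="gateway.maton.ai"):
    api_key = os.environ["MATON_API_KEY"]  # read only inside trusted code
    payload = build_payload()              # untrusted code runs without the key
    url = payload["url"]
    if not url.startswith(f"https://{allowed_host}/"):
        raise ValueError(f"blocked request to non-allowlisted URL: {url}")
    payload["headers"] = {"Authorization": f"Bearer {api_key}"}
    return payload  # a real implementation would perform the HTTP request here

def untrusted_payload():
    # This snippet never touches os.environ.
    return {"url": "https://gateway.maton.ai/v1/tasks",
            "body": {"content": "buy milk"}}

if __name__ == "__main__":
    os.environ.setdefault("MATON_API_KEY", "example-key")
    req = trusted_call(untrusted_payload)
    print(req["url"])
```

The allowlist check in `trusted_call` also sketches mitigation 1's network restriction: even if an untrusted snippet names an attacker-controlled URL, the trusted wrapper refuses to attach the credential to it.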