Trust Assessment
canvas-lms received a trust score of 26/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution, remote code execution via curl/wget piped to a shell, and potential command injection through unsanitized URL parameters in `curl` examples.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Findings were raised in the Manifest, Static, and LLM layers.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary command execution: remote code is downloaded and piped to an interpreter. Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/pranavkarthik10/canvas-lms/SKILL.md:96 |
| CRITICAL | Remote code execution (curl/wget piped to shell): a pattern was detected that downloads and immediately executes remote code. This is a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/pranavkarthik10/canvas-lms/SKILL.md:96 |
| HIGH | Potential command injection via unsanitized URL parameters in `curl` examples: the skill documentation provides `curl` commands with placeholders such as `{course_id}` and `{id}` in the URL path. If an AI agent substitutes untrusted user input into these placeholders without sanitization (e.g., URL encoding or strict input validation), a malicious user could inject arbitrary shell commands, leading to remote code execution on the host where the command runs. Because `CANVAS_TOKEN` is supplied as an environment variable in these commands, a successful injection could also read and exfiltrate that credential, granting unauthorized access to the Canvas LMS API. Agents implementing this skill must strictly validate and/or URL-encode all user-provided or dynamically generated inputs used in `curl` URLs (especially values replacing `{course_id}` or `{id}`), or use a dedicated HTTP client library (e.g., Python `requests`) instead of shell `curl` to avoid shell injection entirely. | LLM | SKILL.md:50 |
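The HIGH finding's remediation suggests validating ID placeholders and replacing shell `curl` with an HTTP client library. A minimal sketch of that pattern in Python, using only the standard library; the base URL and the `get_assignment` helper are hypothetical, while `CANVAS_TOKEN` and the numeric `{course_id}`/`{id}` placeholders come from the finding itself:

```python
import json
import os
import re
import urllib.request

# Hypothetical Canvas host; a real deployment would use its own instance.
CANVAS_BASE = "https://canvas.example.edu/api/v1"

def validate_canvas_id(value: str) -> str:
    """Canvas object IDs are numeric; reject anything else outright."""
    if not re.fullmatch(r"\d+", value):
        raise ValueError(f"invalid Canvas ID: {value!r}")
    return value

def get_assignment(course_id: str, assignment_id: str) -> dict:
    # Strict allow-list validation of untrusted path parameters closes
    # the injection vector the finding describes.
    course_id = validate_canvas_id(course_id)
    assignment_id = validate_canvas_id(assignment_id)
    url = f"{CANVAS_BASE}/courses/{course_id}/assignments/{assignment_id}"
    # Sending the token as a header keeps it out of shell command lines
    # and process listings, unlike interpolating it into a curl invocation.
    req = urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {os.environ['CANVAS_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Because no shell is involved, an input like `1; rm -rf /` is rejected by validation rather than ever reaching an interpreter.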
Embed Code
[SkillShield report for canvas-lms](https://skillshield.io/report/439361c90d528428)