Trust Assessment
rescuetime received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is potential command injection via unsanitized shell command generation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via unsanitized shell command generation.** The skill documentation provides `curl` command examples that include shell command substitution (e.g., `$(date +%Y-%m-%d)`). If the AI agent generates and executes similar shell commands based on untrusted user input for parameters like `restrict_begin`, `restrict_end`, or `restrict_thing` without proper sanitization, it could lead to arbitrary command execution. This pattern is a common source of command injection vulnerabilities in LLM-powered agents. When generating code to interact with the RescueTime API, ensure that all parameters derived from user input are strictly validated and sanitized before being incorporated into shell commands. Prefer a dedicated HTTP client library in a language like Python or JavaScript, which offers better parameter handling and avoids direct shell execution for API calls. If shell execution is unavoidable, use parameterized commands or escape user input thoroughly. | LLM | SKILL.md:51 |
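The mitigation recommended in the finding, validating user-supplied filters and passing them to an HTTP client instead of a shell, can be sketched in Python. This is an illustrative sketch, not part of the skill: the helper name, the validation patterns, and the endpoint URL in the usage comment are assumptions.

```python
import re

# Strict ISO date (YYYY-MM-DD); rejects shell metacharacters such as $(...)
_DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")
# Conservative allow-list for the free-text filter (an assumption; tighten as needed)
_THING_RE = re.compile(r"^[\w .\-]+$")

def build_rescuetime_params(api_key, restrict_begin, restrict_end, restrict_thing=None):
    """Validate user-derived filters and return a query-parameter dict.

    Handing this dict to an HTTP client (e.g. requests.get(..., params=...))
    lets the library do the URL encoding, so no shell is ever involved and
    substitutions like $(...) are rejected up front.
    """
    for label, value in (("restrict_begin", restrict_begin),
                         ("restrict_end", restrict_end)):
        if not _DATE_RE.match(value):
            raise ValueError(f"{label} is not a YYYY-MM-DD date: {value!r}")
    params = {
        "key": api_key,
        "format": "json",
        "restrict_begin": restrict_begin,
        "restrict_end": restrict_end,
    }
    if restrict_thing is not None:
        if not _THING_RE.match(restrict_thing):
            raise ValueError(f"restrict_thing contains disallowed characters: {restrict_thing!r}")
        params["restrict_thing"] = restrict_thing
    return params

# Usage sketch (endpoint URL is an assumption; see RescueTime's API docs):
# import requests
# resp = requests.get("https://www.rescuetime.com/anapi/data",
#                     params=build_rescuetime_params(API_KEY, "2026-02-01", "2026-02-13"))
```

The key design point is that validation happens before any I/O, and the raw strings are never concatenated into a command line.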