Trust Assessment
morning-briefing received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: an unpinned npm package dependency, potential command injection via CLI arguments, and sensitive data access with potential exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned npm package dependency.** The skill's manifest specifies an npm package (`@openclaw-tools/morning-briefing`) for installation without a specific version, so future installations could pull a different, potentially malicious or vulnerable, version of the package, enabling supply-chain compromise: an attacker could publish a malicious update that would then be installed automatically. *Remediation:* pin the package to a specific, known-good version (e.g. `"package": "@openclaw-tools/morning-briefing@1.2.3"`) to ensure deterministic, secure installations. | LLM | SKILL.md:1 |
| HIGH | **Potential command injection via CLI arguments.** The skill executes the `briefing` CLI with arguments that can be influenced by user input (e.g. `--location "New York"`, `briefing activate <license-key>`). If the LLM constructs these commands without robust sanitization of user-provided strings, a malicious user could inject arbitrary shell commands (e.g. `briefing --location "New York; rm -rf /"`). Because the `briefing` tool also accesses sensitive local data (Calendar, Reminders) and runs with user permissions, a successful injection has high impact. *Remediation:* strictly validate and sanitize all user input passed to the `briefing` CLI so that shell metacharacters and command separators are never interpreted as executable code, and use a safe command-execution API that properly escapes or isolates arguments. | LLM | SKILL.md:15 |
| MEDIUM | **Sensitive data access and potential exfiltration.** The `briefing` tool accesses sensitive local user data such as calendar events, reminders, and location. The skill's documentation suggests that the output of `briefing` (available in JSON format) is relayed to or processed by the LLM, as in the cron-job example `"Run \`briefing\` and relay the output to me."`. Sensitive personal information could therefore be exposed to the LLM's context or inadvertently exfiltrated if the LLM is compromised or misconfigured, especially when the output is not filtered. *Remediation:* implement strict data-handling policies for the tool's output: relay only necessary information, redact sensitive fields, and avoid exposing raw personal data to the LLM context, for example via a tool wrapper that filters output before it reaches the LLM. | LLM | SKILL.md:19 |
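The command-injection remediation can be sketched as below. This is a minimal illustration, not part of the skill itself: `run_tool` is a hypothetical wrapper name, and real use assumes the `briefing` binary is on `PATH`. The key point is passing arguments as an argv list rather than a shell string, so metacharacters in user input are never interpreted by a shell.

```python
import subprocess

def run_tool(tool: str, *args: str) -> str:
    """Run a CLI tool, passing user input as discrete argv entries.

    Because no shell is involved (shell=True is NOT used), a value like
    'New York; rm -rf /' arrives at the program as one literal argument;
    the ';' is never treated as a command separator.
    """
    result = subprocess.run(
        [tool, *args],       # argv list: each element is exactly one argument
        capture_output=True,
        text=True,
        check=True,          # raise if the tool exits non-zero
    )
    return result.stdout

# Intended use (assumes the briefing CLI is installed):
#   run_tool("briefing", "--location", user_location)
```

If the skill must accept free-form location strings, this list-based invocation is safer than string interpolation into a shell command, and it needs no manual escaping.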
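The exfiltration remediation (a tool wrapper that filters output before it reaches the LLM) could look like the following sketch. The field names in `ALLOWED_FIELDS` are assumptions for illustration; the actual keys emitted by `briefing`'s JSON output may differ.

```python
import json

# Hypothetical allow-list; the real field names produced by the
# briefing tool's JSON output may differ.
ALLOWED_FIELDS = {"summary", "weather", "event_count"}

def filter_briefing_output(raw_json: str) -> dict:
    """Keep only allow-listed fields from the briefing JSON output.

    Anything not explicitly allowed (calendar event details, reminder
    text, precise location) is dropped before the data reaches the
    LLM context.
    """
    data = json.loads(raw_json)
    return {k: v for k, v in data.items() if k in ALLOWED_FIELDS}
```

An allow-list (rather than a deny-list) fails safe: newly added output fields stay hidden from the LLM until someone deliberately approves them.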
Embed Code
[](https://skillshield.io/report/626b0cb612c32fb2)