Trust Assessment
system-health-reporter received a trust score of 63/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include "Missing required field: name", "Collection of User Login Activity (PII)", and "Collection of Sensitive Process Arguments".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Collection of Sensitive Process Arguments.** The skill executes `ps aux`, which lists all running processes along with their full command-line arguments. These arguments can contain highly sensitive information such as API keys, passwords, environment variables, or file paths. Although the skill advises "Don't include full process arguments in reports (may contain secrets)", the raw data is collected by the agent. This poses a significant risk of accidental exposure or exfiltration if the agent fails to properly sanitize or redact the output before processing, storing, or presenting it. *Remediation:* Modify the `ps` commands to explicitly exclude or redact sensitive columns (e.g., the full `COMMAND` column), or implement robust, explicit sanitization of `ps aux` output to remove sensitive data before any further processing or reporting. For example, use `ps -eo pid,user,%cpu,%mem,vsz,rss,tty,stat,start,time,comm` to avoid capturing full command arguments. | LLM | SKILL.md:27 |
| HIGH | **Direct Shell Command Execution.** The skill explicitly instructs the AI agent to execute a series of shell commands directly on the host system (e.g., `uname`, `uptime`, `ps`, `df`, `systemctl`). While these commands are described as read-only and diagnostic, the capability to perform direct shell execution is a significant security risk. If the AI agent's execution environment is not strictly sandboxed and isolated, or if there is any unforeseen way for untrusted input to influence command execution (even if not directly into these hardcoded commands), this could lead to broader system compromise, unintended information disclosure, or resource exhaustion. This represents a direct instance of shell execution as defined under COMMAND INJECTION threats. *Remediation:* Ensure the AI agent executes these commands within a highly restricted, ephemeral, and isolated sandbox environment with minimal privileges. Implement strict allow-listing for commands and arguments. Consider using a more controlled API for system diagnostics instead of raw shell commands if available, or parse command outputs carefully to prevent misinterpretation and ensure only intended data is extracted. | LLM | SKILL.md:20 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. *Remediation:* Add a `name` field to the SKILL.md frontmatter. | Static | skills/sa9saq/system-health-reporter/SKILL.md:1 |
| MEDIUM | **Collection of User Login Activity (PII).** The skill executes the `last` and `who` commands, which collect and may expose usernames and login activity. The skill's "Security" section explicitly warns about this ("Login activity (`last`, `who`) may reveal usernames — consider audience before sharing."). This constitutes a data exfiltration risk if the generated report is shared with an unauthorized audience or stored insecurely, as it involves the collection of personally identifiable information (PII). *Remediation:* Redact or anonymize usernames and login times from the output of `last` and `who` before including them in reports, or make their inclusion optional and off by default. Implement strict access controls on generated reports to prevent unauthorized disclosure of PII. | LLM | SKILL.md:30 |
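For the missing `name` finding, the fix is a one-line frontmatter addition. A sketch of what the corrected SKILL.md header could look like; the `description` value here is illustrative, not taken from the skill:

```yaml
---
name: system-health-reporter
description: Generates a read-only system health report.
---
```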
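The `ps` sanitization suggested in the first finding can also be applied after the fact. Below is a minimal Python sketch that keeps only the executable name from the `COMMAND` column of `ps aux` output, assuming the procps layout (eleven fixed columns before the command); `redact_ps_line` is a hypothetical helper name, not part of the skill:

```python
# Hypothetical sketch: redact the COMMAND column of `ps aux` output,
# keeping only the executable name so argument-embedded secrets
# (API keys, passwords, paths) are never carried into a report.
def redact_ps_line(line: str) -> str:
    # procps `ps aux` layout: USER PID %CPU %MEM VSZ RSS TTY STAT
    # START TIME COMMAND -> 10 fixed fields, then the command string.
    fields = line.split(None, 10)
    if len(fields) < 11:
        return line  # header or malformed line: leave untouched
    # Keep only the first word of the command (the executable),
    # dropping any arguments.
    fields[10] = fields[10].split()[0]
    return " ".join(fields)

sample = "root 1 0.0 0.1 16860 1200 ? Ss 10:00 0:00 myapp --api-key=SECRET"
print(redact_ps_line(sample))
# -> root 1 0.0 0.1 16860 1200 ? Ss 10:00 0:00 myapp
```

Preferring `ps -eo ...,comm`, as the finding recommends, avoids collecting the arguments at all; the redaction above is a defense-in-depth fallback for output that has already been captured.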
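For the shell-execution finding, one way to implement the recommended allow-listing is a wrapper that maps symbolic names to fixed argument vectors and refuses everything else. A minimal sketch; the `ALLOWED` table and `run_diagnostic` name are illustrative, not part of the skill:

```python
import subprocess

# Hypothetical allow-list: each entry is a fixed argv, so untrusted
# input can never alter the command or its arguments.
ALLOWED = {
    "kernel": ["uname", "-a"],
    "uptime": ["uptime"],
    "disk": ["df", "-h"],
}

def run_diagnostic(name: str) -> str:
    if name not in ALLOWED:
        raise ValueError(f"command not allow-listed: {name}")
    # shell=False (the default for a list argv) prevents shell
    # interpretation; the argv itself is hardcoded above.
    result = subprocess.run(ALLOWED[name], capture_output=True, text=True)
    return result.stdout

# run_diagnostic("uptime")      # permitted
# run_diagnostic("rm -rf /")    # raises ValueError
```

Because the caller selects a key rather than supplying a command string, there is no path for injected text to reach the shell.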
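For the login-activity finding, usernames can be pseudonymized before they reach a report, preserving the ability to correlate repeat logins without exposing PII. A sketch assuming `who`-style output with the username in the first column; `pseudonymize_user` is a hypothetical helper name:

```python
import hashlib

# Hypothetical sketch: replace the username (first column of `who`
# or `last` output) with a short stable hash, so the same user maps
# to the same token across lines without revealing the real name.
def pseudonymize_user(line: str) -> str:
    parts = line.split(None, 1)
    if not parts:
        return line  # blank line: leave untouched
    digest = hashlib.sha256(parts[0].encode()).hexdigest()[:8]
    rest = " " + parts[1] if len(parts) > 1 else ""
    return f"user-{digest}{rest}"

print(pseudonymize_user("alice pts/0 2026-02-12 09:14 (10.0.0.5)"))
```

As the finding notes, making login data opt-in and off by default is the stronger control; pseudonymization helps when the data must be included.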
[View the full report on SkillShield](https://skillshield.io/report/05e58aa6ded688fc)
Powered by SkillShield