Trust Assessment
remember-all-prompts-daily received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 2 critical, 3 high, 1 medium, and 0 low severity (plus 1 informational). Key findings include arbitrary command execution, a dangerous `subprocess.run()` call, and re-ingestion of archived user prompts that creates a self-inflicted prompt injection risk.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/syedateebulislam/remember-all-prompts-daily/scripts/check_token_usage.py:17` |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/syedateebulislam/remember-all-prompts-daily/scripts/check_token_usage.py:40` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call to `subprocess.run()` detected in function `get_token_usage`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/syedateebulislam/remember-all-prompts-daily/scripts/check_token_usage.py:17` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call to `subprocess.run()` detected in function `trigger_export`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/syedateebulislam/remember-all-prompts-daily/scripts/check_token_usage.py:40` |
| HIGH | **Re-ingestion of archived user prompts creates self-inflicted prompt injection risk.** The `ingest_prompts.py` script reads previously archived user prompts and responses from `~/.clawd/memory/remember-all-prompts-daily.md` and formats them for re-ingestion as a 'past conversation summary' in a new LLM session. If a user (or an attacker who gained control of the user's session) previously entered a malicious prompt, that prompt would be re-introduced into the LLM's context, potentially manipulating its behavior or extracting information. While this is a core design feature for continuity, it inherently carries prompt injection risk because the LLM processes user-controlled content from a previous interaction. Remediation: sanitize or strictly parse archived content before re-ingestion to neutralize potential malicious instructions; alternatively, clearly warn users about the implications of re-ingesting untrusted content, even from their own past sessions. Consider a 'safe mode' for ingestion that includes only factual summaries rather than raw prompts. | LLM | `scripts/ingest_prompts.py:49` |
| MEDIUM | **External CLI command execution via `subprocess.run`.** The `check_token_usage.py` script executes the external command `clawdbot session-status --json` via `subprocess.run`. The arguments are hardcoded and appear safe in this context, but any use of `subprocess.run` to invoke an external binary introduces a dependency on the security of that binary and its environment: a compromised `clawdbot` CLI or a manipulated `PATH` could lead to arbitrary command execution. Remediation: if possible, use a direct Python API for `clawdbot` session status instead of shelling out to the CLI. If CLI execution is necessary, use the full path to the `clawdbot` executable to prevent `PATH` manipulation, and validate the output rigorously. | LLM | `scripts/check_token_usage.py:14` |
| INFO | **Storage of full session history (prompts and responses) on local filesystem.** The skill stores the complete history of user prompts and LLM responses in plaintext files (`~/.clawd/memory/remember-all-prompts-daily.md` and `~/.clawd/memory/.session-ingest.md`). While this is the intended functionality for conversation continuity and not exfiltration to an external service, sensitive conversation content is persistently stored and could be accessed by an attacker if the local system is compromised. Remediation: clearly inform users about the local storage of their conversation history, including location and format; consider encrypting the archived files, especially if highly sensitive data might be processed; provide options for users to clear or manage their archived history. | LLM | `scripts/export_prompts.py:39` |
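The remediation advice for the `subprocess` findings (static arguments, a resolved executable path, rigorous output validation) can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `clawdbot session-status --json` command comes from the report, while the function name and validation logic here are assumptions.

```python
import json
import shutil
import subprocess

def get_session_status(cli: str = "clawdbot") -> dict:
    """Hypothetical hardened wrapper for `clawdbot session-status --json`."""
    # Resolve the executable once up front. Ideally pin a known absolute
    # path instead; shutil.which still consults PATH, so it only ensures
    # the same vetted binary is used for this invocation.
    exe = shutil.which(cli)
    if exe is None:
        raise FileNotFoundError(f"{cli} CLI not found on PATH")
    # Static, hardcoded argument list: no shell=True, no user-supplied input.
    result = subprocess.run(
        [exe, "session-status", "--json"],
        capture_output=True, text=True, timeout=30, check=True,
    )
    # Validate the output rigorously before trusting it downstream.
    status = json.loads(result.stdout)
    if not isinstance(status, dict):
        raise ValueError("unexpected session-status payload")
    return status
```

A direct Python API, where available, would remove the external-binary dependency entirely, as the report suggests.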
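For the re-ingestion finding, the suggested sanitization step could look roughly like this. Everything here is a hypothetical sketch: the pattern list, function names, and delimiter format are assumptions, and pattern-based filtering is only a partial defense against prompt injection.

```python
import re

# Assumed patterns for obvious instruction-injection markers; a real
# deployment would need a broader, maintained list.
INSTRUCTION_PATTERNS = [
    re.compile(r"(?im)^\s*(system|assistant|user)\s*:"),  # role-prefix spoofing
    re.compile(r"(?i)ignore (all )?(previous|prior) instructions"),
]

def sanitize_archived_prompt(text: str) -> str:
    """Neutralize obvious instruction markers in archived content."""
    for pattern in INSTRUCTION_PATTERNS:
        text = pattern.sub("[redacted]", text)
    return text

def wrap_for_ingestion(text: str) -> str:
    # Delimit the archive so downstream prompts can instruct the model to
    # treat the block as untrusted historical data, not directives.
    return (
        '<archived-session untrusted="true">\n'
        + sanitize_archived_prompt(text)
        + "\n</archived-session>"
    )
```

A stricter 'safe mode', as the report proposes, would skip raw prompts altogether and ingest only generated factual summaries.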
Full report: https://skillshield.io/report/763a469e92edd9a7