Trust Assessment
input-guard received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 24 findings: 9 critical, 9 high, 5 medium, 0 low, and 1 informational. Key findings include network egress to untrusted endpoints, unsafe environment variable passthrough, and arbitrary command execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, making it the primary area of concern.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (24)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints** (Python requests POST/PUT to URL). Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:179 |
| CRITICAL | **Network egress to untrusted endpoints** (Python requests POST/PUT to URL). Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:204 |
| CRITICAL | **Arbitrary command execution** (Python shell execution: os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/dgriffin831/input-guard/evals/run.py:67 |
| CRITICAL | **Arbitrary command execution** (Python shell execution: os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:155 |
| CRITICAL | **Arbitrary command execution** (Python shell execution: os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/dgriffin831/input-guard/scripts/scan.py:68 |
| CRITICAL | **Credential harvesting** (reading well-known credential environment variables). Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:144 |
| CRITICAL | **Credential harvesting** (reading well-known credential environment variables). Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:149 |
| CRITICAL | **Credential harvesting** (reading well-known credential environment variables). Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:297 |
| CRITICAL | **Credential harvesting** (reading well-known credential environment variables). Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:299 |
| HIGH | **Unsafe environment variable passthrough** (access to well-known credential environment variables). Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:144 |
| HIGH | **Unsafe environment variable passthrough** (access to well-known credential environment variables). Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:149 |
| HIGH | **Unsafe environment variable passthrough** (access to well-known credential environment variables). Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:297 |
| HIGH | **Unsafe environment variable passthrough** (access to well-known credential environment variables). Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:299 |
| HIGH | **Dangerous call: subprocess.run()** detected in function 'run_scan'; this can execute arbitrary code. Avoid dangerous functions such as exec/eval/os.system and use safer alternatives. | Static | skills/dgriffin831/input-guard/evals/run.py:67 |
| HIGH | **Dangerous call: subprocess.run()** detected in function '_detect_provider'; this can execute arbitrary code. Avoid dangerous functions such as exec/eval/os.system and use safer alternatives. | Static | skills/dgriffin831/input-guard/scripts/llm_scanner.py:155 |
| HIGH | **Dangerous call: subprocess.run()** detected in function 'send_alert'; this can execute arbitrary code. Avoid dangerous functions such as exec/eval/os.system and use safer alternatives. | Static | skills/dgriffin831/input-guard/scripts/scan.py:68 |
| HIGH | **Prompt Injection via Compromised Taxonomy Data.** The `llm_scanner.py` script constructs its internal LLM prompt from taxonomy data, loaded either from `taxonomy.json` (shipped with the skill) or refreshed from the MoltThreats API (`api.promptintel.novahunting.ai`) when `PROMPTINTEL_API_KEY` is set. If either source is compromised, an attacker could inject malicious instructions into the taxonomy data; these would then reach the internal LLM as part of its system prompt, potentially causing the scanner to misclassify threats, ignore actual injections, or act maliciously itself, effectively prompt-injecting the scanner's own LLM. Remediation: implement cryptographic signing or integrity checks for `taxonomy.json` and data fetched from the MoltThreats API, ensure the API endpoint is secured and trusted, and consider sandboxing the LLM calls or adding a secondary validation layer for the taxonomy data. | LLM | scripts/llm_scanner.py:40 |
| HIGH | **Arbitrary Code Execution via Configurable External Script Path.** The `report-to-molthreats.sh` script executes an external Python script (`molthreats.py`) at a path determined by the `OPENCLAW_WORKSPACE` or `MOLTHREATS_SCRIPT` environment variables. An attacker who controls these variables could redirect execution to an arbitrary malicious Python script, leading to arbitrary code execution with the permissions of the executing user. Remediation: restrict the `MOLTHREATS_SCRIPT` path to a trusted, non-writable directory; if configuration is necessary, use a whitelist of allowed paths or resolve the path securely; consider a hash check on `molthreats.py` before execution to verify its integrity. | LLM | scripts/report-to-molthreats.sh:60 |
| MEDIUM | **Unsafe deserialization / dynamic eval** (decryption followed by code execution). Remove obfuscated code-execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/dgriffin831/input-guard/scripts/llm_scanner.py:16 |
| MEDIUM | **Suspicious import: requests.** This module provides network or low-level system access. Verify this import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/dgriffin831/input-guard/scripts/get_taxonomy.py:15 |
| MEDIUM | **Suspicious import: requests.** This module provides network or low-level system access. Verify this import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/dgriffin831/input-guard/scripts/llm_scanner.py:25 |
| MEDIUM | **Sensitive environment variable access: $HOME** detected in shell context. Verify this access is necessary and the value is not exfiltrated. | Static | skills/dgriffin831/input-guard/scripts/report-to-molthreats.sh:30 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `requests>=2.28.0` is not pinned to an exact version. Pin Python dependencies with `==<exact version>`. | Dependencies | skills/dgriffin831/input-guard/requirements.txt:2 |
| INFO | **Untrusted Input Transmitted to Third-Party LLM Services.** When enabled, `llm_scanner.py` sends the full untrusted input text to external LLM providers (OpenAI, Anthropic, or the OpenClaw gateway) for semantic analysis. This is the intended functionality of an LLM-powered scanner, but any sensitive or private information in the untrusted input is transmitted outside the local environment, with privacy, compliance, and data-residency implications. Remediation: document this data flow clearly, advise users to consider the privacy implications before enabling LLM scanning (especially when inputs may contain PII or confidential data), and provide options to disable LLM scanning or use local/on-premise LLMs to keep sensitive data in a controlled environment. | LLM | scripts/llm_scanner.py:200 |
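Several findings above flag `subprocess.run()` calls as dangerous and recommend static commands built without user input. As a point of reference, here is a minimal sketch of the pattern the remediation text describes: a fixed argument list, no `shell=True`, and a timeout. The helper name and command are illustrative, not taken from the skill's code, and `sys.executable` stands in for the pinned absolute interpreter path the remediation recommends.

```python
import sys
import subprocess

def run_scanner(target_path: str) -> str:
    """Invoke a helper process with a static argv list.

    No shell is spawned and no string formatting mixes user input into
    the command itself: target_path is only ever passed as a discrete
    argument, so it cannot inject extra commands or flags logic.
    """
    cmd = [
        sys.executable,  # in production, pin an absolute interpreter path
        "-c",
        "import sys; print('scanned', sys.argv[1])",
        target_path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True,
                            check=True, timeout=30)
    return result.stdout.strip()
```

Because the command structure is fixed, a hostile value such as `"; rm -rf /"` arrives in the child process as an inert string argument rather than shell syntax.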
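The environment-variable findings recommend passing only required, non-sensitive variables rather than the whole environment. A sketch of that allowlist approach, assuming hypothetical variable names (`INPUT_GUARD_MODEL`, `INPUT_GUARD_TIMEOUT`) since the skill's actual configuration keys are not shown in this report:

```python
import os

# Explicit allowlist: the only variables the child process may see.
# These names are illustrative, not the skill's real configuration keys.
ALLOWED_ENV = ("INPUT_GUARD_MODEL", "INPUT_GUARD_TIMEOUT")

def build_child_env() -> dict:
    """Build a minimal subprocess environment.

    Instead of inheriting os.environ wholesale (which leaks credential
    variables such as cloud provider keys), copy only allowlisted
    variables plus PATH.
    """
    env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin")}
    for name in ALLOWED_ENV:
        if name in os.environ:
            env[name] = os.environ[name]
    return env
```

The resulting dict can be passed as the `env=` argument of `subprocess.run`, so credential variables present in the parent environment never reach the child.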
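The taxonomy prompt-injection finding recommends integrity checks on `taxonomy.json` before its contents reach the scanner's LLM prompt. One simple form of that check is pinning a SHA-256 digest; the sketch below assumes the expected digest is distributed through some trusted channel (e.g. alongside a signed release), which this report does not specify.

```python
import hashlib
import json
from pathlib import Path

def load_taxonomy(path: Path, expected_sha256: str) -> dict:
    """Load taxonomy data only if its digest matches a pinned value.

    Tampered or substituted taxonomy content fails the check and never
    reaches the prompt-construction step.
    """
    raw = path.read_bytes()
    digest = hashlib.sha256(raw).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"taxonomy integrity check failed: got {digest}")
    return json.loads(raw)
```

A hash pin only helps if the expected digest cannot be modified by the same attacker who modifies the file; a detached signature (e.g. verified with a public key baked into the skill) would be the stronger variant the finding also mentions.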
Embed Code
[](https://skillshield.io/report/451f69d62c8a6b16)
Powered by SkillShield