Trust Assessment
langcache received a trust score of 79/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include sensitive environment variable access (`$HOME`) and JSON injection via unescaped user input in the Bash script's `curl` data payload.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | JSON injection via unescaped user input in the Bash script's `curl` data payload | LLM | scripts/langcache.sh:170 |
| MEDIUM | Sensitive environment variable access: `$HOME` | Static | skills/manvinder01/openclaw-langcache/scripts/langcache.sh:22 |

HIGH: JSON injection via unescaped user input (scripts/langcache.sh:170)

The `scripts/langcache.sh` Bash script constructs JSON payloads for `curl` requests by directly concatenating user-provided strings (e.g., `prompt`, `response`, `entry_id`, and attribute `key`/`value`). If these strings contain double quotes (`"`) or backslashes (`\`), they can break the JSON structure or inject arbitrary JSON fields. This vulnerability could lead to:

1. **API Manipulation**: Injecting malicious fields into the LangCache API request, potentially altering its intended behavior.
2. **Prompt Injection**: If the LangCache API processes the injected JSON and passes it to an underlying LLM, an attacker could manipulate the LLM's instructions or context.
3. **Data Exfiltration**: If the API supports fields that could redirect data (e.g., `callback_url`, `log_endpoint`), an attacker could inject these to exfiltrate sensitive information.

For example, if a user-controlled prompt is `What is semantic caching?", "malicious_field": "exfiltrate_data`, the resulting payload is still syntactically valid JSON but now contains the injected field. All user-provided strings (`prompt`, `response`, `entry_id`, attribute `key`, attribute `value`) must be JSON-escaped before being embedded in the `data` payload. A robust solution in Bash is to escape each string with `jq -R .`, e.g. `escaped_prompt=$(jq -R . <<< "$prompt")`; note that the result already includes its surrounding double quotes, so it replaces the quoted string in the JSON template. This should be applied consistently to every user-controlled input that forms part of a JSON payload.

MEDIUM: Sensitive environment variable access: `$HOME` (skills/manvinder01/openclaw-langcache/scripts/langcache.sh:22)

Access to the sensitive environment variable `$HOME` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated.
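The escaping fix recommended above can be sketched as a small standalone script. This is a minimal sketch, not the actual code in `langcache.sh`: the variable names and the single-field payload are illustrative assumptions, and it requires `jq` to be installed.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative user input containing the injection from the example above.
prompt='What is semantic caching?", "malicious_field": "exfiltrate_data'

# UNSAFE (the pattern flagged in the finding): naive interpolation lets the
# embedded quote close the JSON string and smuggle in a second field, while
# the payload as a whole stays syntactically valid JSON.
unsafe_payload="{\"prompt\": \"$prompt\"}"

# SAFE, option 1: jq -R . emits the input as a quoted, escaped JSON string
# literal, so the template contains no hand-written quotes around it.
escaped_prompt=$(jq -R . <<< "$prompt")
safe_payload="{\"prompt\": $escaped_prompt}"

# SAFE, option 2: let jq build the whole document; --arg escapes every value.
safe_payload2=$(jq -n --arg prompt "$prompt" '{prompt: $prompt}')

echo "$safe_payload2"
```

Either safe variant yields a payload with exactly one `prompt` key whose value round-trips to the original input, whereas the unsafe payload silently gains a `malicious_field` key.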
Full report: https://skillshield.io/report/2f9812b573db8ca3