Trust Assessment
openguardrails-for-openclaw received a trust score of 28/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, and 1 medium severity. Key findings include a file-read-plus-network-send exfiltration path, a race condition in a blocking hook, a hardcoded API key, and an unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration.** SSH key/config file access. Remediation: remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | `skills/thomaslwang/openguardrails/test-injection.ts:45` |
| CRITICAL | **Asynchronous blocking in synchronous hook leads to a race condition.** The `tool_result_persist` hook is synchronous, but the `runGuardAgent` call inside it is asynchronous: the analysis runs in a `.then()` callback, so the hook returns before the analysis completes and the `blocked` flag is set, even when `config.blockOnRisk` is enabled. The agent may therefore process malicious content before the detection mechanism can block it, rendering the blocking feature ineffective for initial processing. Remediation: either wait for the analysis result before the hook returns (generally discouraged for long-running operations such as LLM calls, since it blocks the main thread), or use an asynchronous hook that supports blocking if OpenClaw provides one. If synchronous blocking is not feasible, consider post-action remediation (e.g. retracting agent actions when injection is detected later) or clearly document this limitation to users. | LLM | `index.ts:142` |
| HIGH | **Hardcoded API key.** The OpenAI API key for the OpenGuardrails service is hardcoded directly in the source code, so it can be exposed through version control, build artifacts, or deployed environments, allowing unauthorized access to the OpenGuardrails API. Remediation: remove the hardcoded key; store API keys in environment variables, a secure configuration management system, or a secrets vault, and read them at runtime via `process.env.OG_API_KEY` or a similar secure method. | LLM | `agent/config.ts:16` |
| MEDIUM | **Unpinned npm dependency version.** The `openai` dependency is not pinned to an exact version (`^4.0.0`). Remediation: pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | `skills/thomaslwang/openguardrails/package.json` |
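For the unpinned-dependency finding, pinning means removing the caret range so npm resolves exactly one version. A sketch of the corrected `package.json` fragment (the surrounding fields are omitted):

```json
{
  "dependencies": {
    "openai": "4.0.0"
  }
}
```

Running `npm install openai@4.0.0 --save-exact` writes the exact version automatically, and committing the lockfile further constrains transitive dependencies.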
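The race-condition finding can be illustrated with a minimal sketch. The names `runGuardAgent` and `blocked` follow the finding's description; the hook signatures are simplified stand-ins, not OpenClaw's actual API:

```typescript
interface GuardResult {
  injectionDetected: boolean;
}

// Stand-in for the asynchronous LLM-based analysis.
async function runGuardAgent(content: string): Promise<GuardResult> {
  return { injectionDetected: content.includes("IGNORE PREVIOUS") };
}

// Broken pattern: the hook returns synchronously, but the verdict
// only arrives later in a .then() callback, so `blocked` is still
// false at the moment the caller reads it.
function persistHookBroken(content: string): { blocked: boolean } {
  const result = { blocked: false };
  runGuardAgent(content).then((r) => {
    if (r.injectionDetected) {
      result.blocked = true; // too late: the hook already returned
    }
  });
  return result; // always { blocked: false } at return time
}

// Fixed pattern: an asynchronous hook can await the verdict before
// the content is persisted, so blocking actually takes effect.
async function persistHookAsync(
  content: string
): Promise<{ blocked: boolean }> {
  const r = await runGuardAgent(content);
  return { blocked: r.injectionDetected };
}
```

The broken variant always reports `blocked: false` at return time regardless of the content, which is exactly the window the finding describes; the async variant only works if the host exposes an awaitable hook.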
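For the hardcoded-key finding, the remediation is to read the key from the environment at runtime. A minimal sketch, assuming the `OG_API_KEY` variable name suggested in the finding (adapt to your secret store):

```typescript
// Read the OpenGuardrails API key from the environment instead of
// hardcoding it in source. Failing fast when the key is missing
// avoids silently running with broken authentication.
function getApiKey(): string {
  const key = process.env.OG_API_KEY;
  if (!key) {
    throw new Error("OG_API_KEY is not set; refusing to start");
  }
  return key;
}
```

Keys stored this way never enter version control or build artifacts; in production, a secrets vault that injects the variable at deploy time is preferable to a committed `.env` file.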
Embed Code
[SkillShield report](https://skillshield.io/report/b14b66a29f66ab00)
Powered by SkillShield