Trust Assessment
model-usage received a trust score of 64/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Suspicious import: requests, Potential data exfiltration: file read + network send, Accesses internal agent OAuth token and uses it with internal Google API.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential data exfiltration: file read + network send Function 'get_quota' reads files and sends data over the network. This may indicate data exfiltration. Review this function to ensure file contents are not being sent to external servers. | Static | skills/ls18166407597-design/ag-model-usage/scripts/model_usage.py:25 | |
| HIGH | Accesses internal agent OAuth token and uses it with internal Google API The skill reads an OAuth access token from the agent's internal credential store located at `~/.openclaw/agents/main/agent/auth-profiles.json`. It then uses this token to authenticate requests to `https://daily-cloudcode-pa.sandbox.googleapis.com/v1internal:fetchAvailableModels`. This endpoint is an internal Google API (`v1internal` and `sandbox.googleapis.com`) not intended for general public skill consumption. The `SKILL.md` explicitly states the skill 'simulates the behavior of the official IDE client' to query quotas. This pattern of accessing internal agent credentials and using them with internal, undocumented APIs poses a significant risk. It could lead to credential misuse, unintended side effects, or expose the agent to vulnerabilities if the OAuth token has broader permissions than strictly necessary for this specific query (Excessive Permissions). 1. **Avoid direct access to internal agent credential stores:** Skills should ideally use officially provided, scoped APIs for authentication rather than directly reading internal files. If direct access is unavoidable, ensure the file is strictly permissioned and the skill's execution environment is sandboxed. 2. **Use public, documented APIs:** Relying on internal (`v1internal`) or sandbox (`sandbox.googleapis.com`) APIs is brittle and risky. These APIs can change without notice and may not have the same security guarantees as public APIs. Prefer stable, public APIs with clear documentation. 3. **Least Privilege Principle:** Ensure that any OAuth token used by a skill has the absolute minimum scope required for its function. If this skill *must* use an internal token, its scope should be strictly limited to quota queries and verified by the platform. 4. **Platform-level mitigation:** The OpenClaw platform should provide secure mechanisms for skills to access necessary credentials without exposing internal files or requiring direct access to potentially over-privileged tokens. | LLM | scripts/model_usage.py:20 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/ls18166407597-design/ag-model-usage/scripts/model_usage.py:1 |
Scan History
Embed Code
[](https://skillshield.io/report/bd7e4a5a4b3ad079)
Powered by SkillShield