
Security Audit

lmstudio-subagents
AI Skill · github.com/openclaw/skills · Commit 13146e6a3d46
Trust score: 10/100 (CRITICAL) · Scanned 2 months ago

Critical: 13 (immediate action required)
High: 2 (priority fixes suggested)
Medium: 2 (best practices review)
Low: 0 (acknowledged / tracked)

Trust Assessment

lmstudio-subagents received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.

SkillShield's automated analysis identified 18 findings: 13 critical, 2 high, 2 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, unsanitized user input in `exec command:` directives, and arbitrary file write via the `--log` option in `lmstudio-api.mjs`.
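The arbitrary-file-write finding suggests the `--log` value reaches the filesystem unchecked, so a crafted value like `../../etc/passwd` could write outside the intended directory. As an illustration only (the helper `resolveLogPath` and the `logs/` directory are assumptions, not code from `lmstudio-api.mjs`), a typical mitigation canonicalizes the path and rejects anything that escapes a fixed log directory:

```javascript
// Illustrative mitigation sketch for the arbitrary-file-write finding:
// confine a user-supplied --log value to a fixed log directory.
// resolveLogPath and LOG_DIR are hypothetical, not from lmstudio-api.mjs.
import path from "node:path";

const LOG_DIR = path.resolve("logs");

function resolveLogPath(userValue) {
  // Canonicalize relative to LOG_DIR, then verify the result stays inside it.
  // This rejects traversal ("../../etc/passwd") and absolute paths ("/etc/passwd").
  const resolved = path.resolve(LOG_DIR, userValue);
  if (!resolved.startsWith(LOG_DIR + path.sep)) {
    throw new Error(`--log path escapes ${LOG_DIR}: ${userValue}`);
  }
  return resolved;
}
```

Note that a symlink already present inside `logs/` could still redirect the write, so a fuller fix would also resolve the real path (e.g. `fs.realpathSync`) before opening the file.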

The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100.

Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.

Layer Breakdown

Manifest Analysis: 0%
Static Code Analysis: 100%
Dependency Graph: 100%
LLM Behavioral Safety: 63%

Behavioral Risk Signals

Network Access: 17 findings
Filesystem Write: 1 finding
Shell Execution: 2 findings
Dynamic Code: 1 finding
Excessive Permissions: 2 findings

Security Findings (18)

Severity | Finding | Layer | Location

Scan History

Embed Code

[![SkillShield](https://skillshield.io/api/v1/badge/83c97d2d1bb8c8fb.svg)](https://skillshield.io/report/83c97d2d1bb8c8fb)
