Trust Assessment
mlx-brain received a trust score of 77/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 4 findings: 0 critical, 0 high, 3 medium, and 1 low severity. Key findings include Missing required field: name, Sensitive environment variable access: $HOME, User input directly passed to LLM without sanitization.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/kjaylee/mlx-brain/SKILL.md:1 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/kjaylee/mlx-brain/SKILL.md:5 | |
| MEDIUM | User input directly passed to LLM without sanitization The 'prompt' variable, derived directly from user input (either command-line arguments or JSON from stdin), is passed without any explicit sanitization or guardrails to the 'mlx_lm.generate' function. This allows an attacker to craft malicious prompts that could manipulate the underlying LLM's behavior, leading to unintended outputs, information disclosure (if the LLM has access to sensitive context), or other undesirable actions. While prompt injection is an inherent risk with LLMs, the skill does not implement any specific mitigation strategies to protect against it. Implement prompt engineering techniques (e.g., system prompts, few-shot examples) to guide the LLM's behavior. Consider adding input validation or sanitization if specific types of malicious input are known. If the LLM has access to tools or sensitive data, implement robust output filtering and access controls. | LLM | run.py:30 | |
| LOW | Unpinned 'mlx_lm' dependency The 'run.py' script imports the 'mlx_lm' library without specifying a version. This means that if a new, potentially malicious or incompatible version of 'mlx_lm' is published and installed in the 'mlx-env' environment, the skill could be compromised or break. This introduces a supply chain risk as the skill's behavior is dependent on an unversioned external package. Pin the 'mlx_lm' dependency to a specific, known-good version (e.g., 'mlx_lm==x.y.z') in the environment's dependency management configuration (e.g., 'requirements.txt' or 'pyproject.toml') to ensure consistent and secure execution. | LLM | run.py:7 |
Scan History
Embed Code
[](https://skillshield.io/report/524e6aad82380081)
Powered by SkillShield