Security Audit

evaluating-code-models

github.com/davila7/claude-code-templates
AI Skill · Commit 458b11867eae
Trust Score: 59/100 · CAUTION
Scanned 12 days ago
1 Critical · Immediate action required
0 High · Priority fixes suggested
1 Medium · Best practices review
2 Low · Acknowledged / Tracked

Trust Assessment

evaluating-code-models received a trust score of 59/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.

SkillShield's automated analysis identified 4 findings: 1 critical, 0 high, 1 medium, and 2 low severity. Key findings include Network egress to untrusted endpoints, Covert behavior / concealment directives, and Direct execution of untrusted code on host system via `--allow_code_execution` and `--trust_remote_code`.
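For context on why the code-execution finding is rated critical, the sketch below illustrates the two mechanisms named above. It assumes a Hugging Face style code-evaluation flow; the model identifier and the generated "completion" are placeholders, not details taken from the scanned skill.

```python
# Minimal sketch of the risk behind trust_remote_code and an
# --allow_code_execution style evaluation loop. The model id and the
# completion below are hypothetical placeholders.
from transformers import AutoModelForCausalLM

MODEL_ID = "example-org/example-code-model"  # hypothetical, untrusted third-party repo

# trust_remote_code=True makes transformers download and import custom
# modeling .py files shipped inside the model repository, so arbitrary
# Python from that repo runs on the host at load time.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

# An --allow_code_execution style harness then exec()s model-generated
# completions against unit tests. Without sandboxing, a completion runs
# with the full privileges of the evaluating process.
completion = "import os\nprint(os.listdir('.'))"  # stands in for model output
exec(completion)  # the step a sandbox (container / restricted subprocess) should isolate
```

When these two settings cannot be avoided, the usual mitigations are running the evaluation inside a disposable container and pinning the exact model revision being loaded.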

The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 68/100, indicating areas for improvement.

Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.

Layer Breakdown

Manifest Analysis: 91%
Static Code Analysis: 100%
Dependency Graph: 100%
LLM Behavioral Safety: 68%

Behavioral Risk Signals

Network Access: 2 findings
Filesystem Write: 1 finding
Shell Execution: 1 finding
Dynamic Code: 2 findings

Security Findings (4)

Severity · Finding · Layer · Location

Scan History

Embed Code

[![SkillShield](https://skillshield.io/api/v1/badge/b5626b3cf1a9aff2.svg)](https://skillshield.io/report/b5626b3cf1a9aff2)
SkillShield Badge
