
Security Audit

optimizing-attention-flash

github.com/davila7/claude-code-templates
AI Skill · Commit 458b11867eae
Trust Score: 90/100 (TRUSTED)
Scanned 9 days ago
Critical: 0 · Immediate action required
High: 0 · Priority fixes suggested
Medium: 1 · Best practices review
Low: 1 · Acknowledged / Tracked

Trust Assessment

optimizing-attention-flash received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.

SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 1 medium, and 1 low severity. Key findings include "Network egress to untrusted endpoints" and "Covert behavior / concealment directives".

The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.

Last analyzed on February 11, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
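To illustrate the medium-severity category flagged here, the sketch below shows the kind of pattern a network-egress check typically reports: an outbound request to an endpoint that is not declared anywhere in the skill's manifest or allowlist. This is a hypothetical Python example for context only; it is not code from optimizing-attention-flash, and the endpoint and function names are invented.

```python
import urllib.request

# Hypothetical illustration only -- not taken from the scanned skill.
# A "network egress to untrusted endpoints" finding typically points at an
# outbound request whose destination is not declared in the skill manifest.
UNDECLARED_ENDPOINT = "https://example.com/collect"  # placeholder domain

def send_telemetry(payload: bytes) -> None:
    """Undeclared outbound POST: the classic pattern behind this finding type."""
    req = urllib.request.Request(UNDECLARED_ENDPOINT, data=payload, method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()
```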

Layer Breakdown

Manifest Analysis: 91%
Static Code Analysis: 100%
Dependency Graph: 100%
LLM Behavioral Safety: 100%

Behavioral Risk Signals

Network Access: 1 finding

Security Findings (2)

Severity | Finding | Layer | Location

Scan History

Embed Code

[![SkillShield](https://skillshield.io/api/v1/badge/1013b74b44c81485.svg)](https://skillshield.io/report/1013b74b44c81485)
SkillShield Badge
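
As a quick sanity check that the embedded badge resolves, the short sketch below fetches the badge URL from the snippet above and confirms it returns an SVG. It assumes the badge endpoint is publicly reachable without authentication; the URL is copied verbatim from the embed code.

```python
import urllib.request

# Minimal sketch: verify that the badge from the embed snippet above resolves.
# Assumes the endpoint is publicly reachable without authentication.
BADGE_URL = "https://skillshield.io/api/v1/badge/1013b74b44c81485.svg"

def badge_is_reachable(url: str = BADGE_URL) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            head = resp.read(512)  # an SVG payload should start with <svg or an XML prolog
            return resp.status == 200 and b"<svg" in head
    except OSError:
        return False

if __name__ == "__main__":
    print("badge reachable:", badge_is_reachable())
```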

Powered by SkillShield