Security Audit

doc-coauthoring

github.com/davila7/claude-code-templates
AI Skill · Commit 458b11867eae

Trust Score: 55/100 (CAUTION)
Scanned 12 days ago
- 1 Critical: Immediate action required
- 2 High: Priority fixes suggested
- 2 Medium: Best practices review
- 1 Low: Acknowledged / Tracked

Trust Assessment

doc-coauthoring received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.

SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 2 medium, and 1 low severity. Key findings include network egress to untrusted endpoints, covert behavior / concealment directives, and a skill definition treated as untrusted input (prompt injection).
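As a quick sanity check, the severity counts above can be tallied from the individual findings. A minimal sketch (the severity list below is transcribed from this report, not pulled from any SkillShield API):

```python
from collections import Counter

# Severities of the 6 findings reported for doc-coauthoring
finding_severities = [
    "critical",
    "high", "high",
    "medium", "medium",
    "low",
]

counts = Counter(finding_severities)
summary = ", ".join(
    f"{counts[s]} {s}" for s in ("critical", "high", "medium", "low")
)
print(f"{sum(counts.values())} findings: {summary}")
# prints: 6 findings: 1 critical, 2 high, 2 medium, 1 low
```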

The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.

Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.

Layer Breakdown

- Manifest Analysis: 91%
- Static Code Analysis: 100%
- Dependency Graph: 100%
- LLM Behavioral Safety: 33%

Behavioral Risk Signals

- Network Access: 2 findings
- Filesystem Write: 2 findings
- Shell Execution: 2 findings
- Dynamic Code: 1 finding
- Excessive Permissions: 2 findings

Security Findings (6)

Severity | Finding | Layer | Location

Scan History

Embed Code

[![SkillShield](https://skillshield.io/api/v1/badge/460322f8b4a4f0ca.svg)](https://skillshield.io/report/460322f8b4a4f0ca)
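The embed snippet follows a fixed URL pattern, so it can be generated for any report ID. A small sketch of that pattern (`build_badge_markdown` is a hypothetical helper for illustration, not part of SkillShield):

```python
def build_badge_markdown(report_id: str) -> str:
    """Build the markdown badge snippet for a given SkillShield report ID."""
    badge_url = f"https://skillshield.io/api/v1/badge/{report_id}.svg"
    report_url = f"https://skillshield.io/report/{report_id}"
    # Markdown: a clickable image linking the badge SVG to the full report
    return f"[![SkillShield]({badge_url})]({report_url})"

print(build_badge_markdown("460322f8b4a4f0ca"))
```

Pasting the resulting line into a README renders the live badge and links readers to the full report.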

Powered by SkillShield