Security Audit
mtls-configuration
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
mtls-configuration received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: one high severity, and no critical, medium, or low severity issues. The key finding is prompt injection via a file access instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Prompt injection via file access instruction | LLM | SKILL.md:17 |

The skill's instructions include a direct command for the host LLM to "open" a local file (`resources/implementation-playbook.md`). If the host LLM is designed to interpret and execute such commands from untrusted skill content, a malicious actor could manipulate this instruction, or similar ones, to make the LLM open or read arbitrary files on the system. This could lead to unauthorized data access or further system compromise.

**Remediation:** Avoid direct instructions to the host LLM to perform actions like "open file" within untrusted skill content. Instead, provide file paths as references for the LLM to *consider* or *suggest* to the user, without issuing a direct command. If file access is necessary, route it through a sandboxed, permission-controlled API.
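The recommended mitigation can be sketched as a small permission-controlled read helper. This is a minimal illustration, not SkillShield's or the skill's actual API: the `safe_read` function and its parameters are hypothetical. It assumes two controls the finding suggests: an explicit allowlist of resource paths, and a sandbox-root check that rejects path traversal.

```python
from pathlib import Path

def safe_read(sandbox_root: Path, allowed: set[str], relative_path: str) -> str:
    """Return the text of a skill resource, but only if the path is on an
    explicit allowlist and resolves inside the sandbox root."""
    if relative_path not in allowed:
        raise PermissionError(f"not allowlisted: {relative_path}")
    root = sandbox_root.resolve()
    target = (root / relative_path).resolve()
    # resolve() collapses any ../ segments, so a traversal attempt ends up
    # outside the sandbox root and is rejected here.
    if not target.is_relative_to(root):
        raise PermissionError(f"escapes sandbox: {relative_path}")
    return target.read_text(encoding="utf-8")
```

With this shape, a skill's `SKILL.md` only ever names a path; the host decides whether and how to read it, and a modified instruction pointing at `/etc/passwd` or a `../` path fails both checks.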
[View the full report](https://skillshield.io/report/a58b7e83668c3e47)
Powered by SkillShield