Security Audit
prompt-engineering-patterns
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
prompt-engineering-patterns received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Prompt Template Formatting Allows LLM Prompt Injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Template Formatting Allows LLM Prompt Injection The `PromptOptimizer` class constructs prompts using `prompt_template.format(**test_case.input)`. If `prompt_template` or the values within `test_case.input` are derived from untrusted sources (e.g., user-provided inputs to the skill), a malicious user could craft inputs to inject harmful instructions or data into the LLM via the `llm_client`. This provides a direct vector for manipulating the LLM's behavior, potentially leading to unintended actions, data leakage, or denial of service against the connected LLM. Implement robust input validation and sanitization for both `prompt_template` and `test_case.input` to prevent malicious instruction injection. If the `llm_client` interacts with the host LLM, ensure the host LLM has its own strong safety and content moderation layers. Consider sandboxing the execution environment for prompt optimization if user-provided templates are allowed. | LLM | scripts/optimize-prompt.py:60 |
Scan History
Embed Code
[](https://skillshield.io/report/0eabf369c1dd0158)
Powered by SkillShield