Security Audit
machine-learning-ops-ml-pipeline
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
machine-learning-ops-ml-pipeline received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Prompt Injection via unvalidated user input and agent outputs.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via unvalidated user input and agent outputs | LLM | SKILL.md:5 |

The skill directly interpolates the `$ARGUMENTS` placeholder (representing user input) into the main skill prompt and several sub-agent prompts without apparent sanitization. For example, on line 5 of SKILL.md, `$ARGUMENTS` is used directly. Furthermore, the outputs of previous sub-agents (`{phaseX.agent.output}`) are interpolated into subsequent sub-agent prompts (e.g., lines 59, 80, and 105). This creates a significant prompt injection vulnerability: a malicious user can craft `$ARGUMENTS` to inject new instructions, override existing ones, or manipulate the behavior of the host LLM and downstream sub-agents. A successful injection could lead to arbitrary code generation, data exposure, or unintended actions.

Recommended mitigation: implement robust input sanitization and validation for `$ARGUMENTS` before interpolation. Consider using a templating engine that escapes user input by default, or pass user input to the LLM as a clearly delimited variable rather than interpolating it directly into the prompt string. For agent outputs, ensure responses are validated and sanitized before being used as input to subsequent agents, especially when those outputs are derived from or influenced by untrusted user input. One approach is to parse outputs into structured data and use only specific, validated fields.
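The mitigations described above can be sketched as follows. This is a minimal illustration, not SkillShield's or the skill's actual code: the function names, the blocklist patterns, the delimiter scheme, and the allowed output fields are all hypothetical assumptions chosen for the example.

```python
import json
import re

# Hypothetical length cap and injection-phrase blocklist; a real deployment
# would tune these (or use a dedicated classifier) rather than rely on a
# short regex.
MAX_LEN = 500
INJECTION_PATTERNS = re.compile(
    r"(ignore (all )?previous instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize_arguments(raw: str) -> str:
    """Reject or neutralize obviously malicious user input before it
    is interpolated into any prompt."""
    text = raw.strip()[:MAX_LEN]
    if INJECTION_PATTERNS.search(text):
        raise ValueError("possible prompt-injection attempt in $ARGUMENTS")
    # Escape fence characters so the input cannot break out of the
    # delimiter block the prompt wraps it in.
    return text.replace("```", "'''")

def build_prompt(template: str, user_input: str) -> str:
    """Interpolate sanitized input inside an explicit delimiter block,
    instead of splicing $ARGUMENTS directly into the instruction text."""
    safe = sanitize_arguments(user_input)
    return template.replace("$ARGUMENTS", f"<user_input>\n{safe}\n</user_input>")

def parse_agent_output(raw_output: str) -> dict:
    """Parse a sub-agent's response into structured data and forward
    only specific, validated fields to the next agent."""
    data = json.loads(raw_output)
    allowed = {"summary", "status"}  # hypothetical field whitelist
    return {key: str(data[key]) for key in allowed if key in data}
```

A caveat worth noting: blocklist filtering alone is easy to bypass, so the structural measures (delimiting user input and whitelisting structured fields from agent outputs) carry most of the protective weight in this sketch.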
[Full report](https://skillshield.io/report/f1591e4051ca1052)
Powered by SkillShield