Security Audit
content-creator
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
content-creator received a trust score of 79/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Both are potential command-injection risks in example shell commands (`grep` and `cp`) documented in SKILL.md.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential command injection via `grep` example in SKILL.md. The `SKILL.md` documentation, which is treated as untrusted input, includes the example command `grep -f references/brand_guidelines.md content.txt`. If the host LLM interprets this example as an instruction to execute, substituting `content.txt` with user-provided input, a malicious user could inject arbitrary shell commands, e.g. by supplying `"; rm -rf /"`. Avoid documenting direct shell commands that take user-controlled input. If shell commands are necessary, wrap them in a safe execution environment or a dedicated tool function that sanitizes inputs; in documentation, use placeholders that clearly mark user input, or provide Python-based alternatives. | LLM | SKILL.md:200 |
| HIGH | Potential command injection via `cp` example in SKILL.md. The same documentation includes the example command `cp assets/content_calendar_template.md this_month_calendar.md`. If the host LLM executes this example with `this_month_calendar.md` substituted by user-provided input, a malicious user could inject arbitrary shell commands, e.g. by supplying `$(malicious_command)` or `"; rm -rf /"`. The same mitigations apply: avoid shell commands with user-controlled input, sanitize via a dedicated tool function, or provide Python-based alternatives. | LLM | SKILL.md:203 |
[Full report on SkillShield](https://skillshield.io/report/218c502eaad47bc9)
Powered by SkillShield