Security Audit
snyk/agent-scan:tests/skills/algorithmic-art
github.com/snyk/agent-scan

Trust Assessment
snyk/agent-scan:tests/skills/algorithmic-art received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 0 medium, and 1 low severity. The sole finding is Excessive LLM Output Manipulation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 1, 2026 (commit 30a672c5). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| LOW | Excessive LLM Output Manipulation. The skill explicitly instructs the host LLM to repeat specific phrases and to emphasize certain points "REPEATEDLY" and "multiple times". While not a malicious prompt injection attempting to bypass safety, this strongly manipulates the LLM's output style and content, forcing an artificial rhetorical pattern rather than allowing natural language generation driven by the core task. It falls under the definition of "Instructions that manipulate the host LLM". Remediation: rephrase the instructions to guide the LLM toward the desired content and tone without dictating repetition or exact phrasing, allowing natural, varied expressions of the core message. | LLM | SKILL.md:50 |
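The flagged SKILL.md contents are not reproduced in this report, so the following is only a hypothetical illustration of the pattern described above and of the recommended rephrasing; the instruction text and the quoted phrase are invented for the example:

```markdown
<!-- Hypothetical flagged pattern: dictates repetition and exact phrasing -->
Emphasize that the output is generative art. Repeat the phrase
"every run is unique" REPEATEDLY, and mention it multiple times.

<!-- Hypothetical rephrasing: guides content and tone, lets the LLM
     choose its own wording instead of forcing a rhetorical pattern -->
Make clear throughout the response that the output is generative
and varies between runs, using natural, varied phrasing.
```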
[View the full report](https://skillshield.io/report/4e97dd43441dffd1)
Powered by SkillShield