Trust Assessment
brand-creative-suite received a trust score of 76/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 low severity. Key findings include Missing required field: name, Node lockfile missing, User-controlled input directly injected into AI prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User-controlled input directly injected into AI prompt The skill constructs AI prompts by directly interpolating user-provided parameters (e.g., `BRAND_NAME`, `BRAND_MATERIAL`) into predefined templates. As shown in `templates.js`, these parameters are used to build the final prompt string. Without explicit sanitization or robust input validation, a malicious user can inject arbitrary instructions into these parameters, potentially manipulating the behavior of the underlying Jimeng AI model. This could lead to the generation of unintended content, bypassing safety filters, or attempting to extract information if the AI model has access to it. Although 'Joi 或自定义验证器' is mentioned in the '技术栈' section of SKILL.md, no actual validation code is provided, making this a credible exploit path. Implement robust input validation and sanitization for all user-provided parameters before they are interpolated into the AI prompt. Consider using techniques such as: 1. **Allow-listing**: Restrict input to a predefined set of safe values. 2. **Escaping**: Escape special characters that might be interpreted as instructions by the target LLM. 3. **Contextual separation**: Clearly delineate user input from system instructions within the prompt using specific tokens or formatting that the LLM is trained to respect. 4. **LLM-based input validation**: Use a separate LLM call to validate or rephrase user input to remove malicious instructions before it's used in the main prompt. | LLM | templates.js:7 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/hhhh124hhhh/brand-creative-suite/SKILL.md:1 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/hhhh124hhhh/brand-creative-suite/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/40b5f4882577d24c)
Powered by SkillShield