Trust Assessment
social-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, User-controlled file content directly injected into LLM prompt, Sensitive file content sent to external LLM API.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | User-controlled file content directly injected into LLM prompt The skill directly concatenates the entire content of a user-provided file into the LLM's user message without robust sanitization or clear delineation. An attacker can craft a malicious file containing instructions (e.g., "ignore previous instructions", "reveal system prompt", "summarize sensitive data") to manipulate the LLM's behavior. This could lead to overriding system instructions, extracting internal information, causing the LLM to generate harmful or unintended output, or exfiltrating sensitive data present in the input file by instructing the LLM to include it in the generated social media post. Implement robust input sanitization and clear delineation of user input within the prompt. Consider using structured input (e.g., XML tags, JSON) to explicitly mark user-provided content, allowing the LLM to distinguish it from system instructions. For example: `Turn this into a ${p} post:\n\n<user_content>${content}</user_content>`. | LLM | src/index.ts:30 | |
| HIGH | Sensitive file content sent to external LLM API The skill's core functionality involves reading the entire content of a user-specified file (`fs.readFileSync`) and transmitting it to the OpenAI API. While this is intended behavior for the skill, it poses a significant data exfiltration risk if the user is prompted or tricked into providing files containing sensitive information (e.g., API keys, personal identifiable information, proprietary code, `.env` files). This data will be processed and potentially stored by the third-party LLM provider (OpenAI). Clearly inform users about the data privacy implications of processing sensitive files with this skill and the third-party LLM. Advise against using the skill with highly confidential or regulated data. Consider implementing features for content redaction or local processing for sensitive data if feasible. | LLM | src/index.ts:37 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/social-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/2ab61b88c62a66a9)
Powered by SkillShield