Trust Assessment
ai-social-media-content received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 3 critical, 1 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, remote code execution via a curl/wget pipe to shell, and unsafe remote script execution via `curl | sh`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, making it the weakest layer in this scan.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code download piped to an interpreter. Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/okaris/ai-social-media-content/SKILL.md:9 |
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code, a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/okaris/ai-social-media-content/SKILL.md:9 |
| CRITICAL | **Unsafe remote script execution via `curl \| sh`.** The skill's quick start guide instructs users to execute a remote script directly via `curl -fsSL https://cli.inference.sh \| sh`. If the remote script is compromised or malicious, this allows arbitrary code execution on the user's system; with no version pinning or integrity check, it is a significant supply chain risk. Avoid piping remote scripts directly to `sh`: download a specific version, verify its hash, and only then execute it, or use a trusted package manager. If `infsh` is a trusted tool, provide a more secure, version-controlled installation method. | LLM | SKILL.md:10 |
| HIGH | **Potential data exfiltration via user-controlled URLs in `infsh` commands.** The skill demonstrates `infsh app run` commands that accept `image_url` and `audio_url` parameters (e.g., for `bytedance/omnihuman-1-5` and `twitter/post-tweet`). Given the broad `Bash(infsh *)` permission, a user-supplied `file:///` URL pointing at sensitive local data could be read and transmitted to the external AI service if `infsh` supports local file reads; the skill does not appear to validate or sanitize these URLs. Validate and sanitize all user-provided URLs so that only legitimate external `http(s)://` URLs are accepted, never local `file:///` paths. If local file access is intended, keep it explicitly controlled, sandboxed, and subject to per-access user consent. | LLM | SKILL.md:80 |
| MEDIUM | **Potential command injection via unquoted shell variable in filename construction.** The skill uses a `for` loop to build filenames like `content_${topic// /_}.json`. If `topic` contained shell metacharacters (e.g., `"; rm -rf /; echo "`), it could inject commands during filename construction or execution of the `infsh` command, especially given the `Bash(*)` permission. `TOPICS` is hardcoded in the example, but the pattern becomes vulnerable as soon as user input is introduced without escaping. Always quote and escape user-controlled variables when constructing shell commands or filenames; use a dedicated sanitizer or generate unique, safe filenames, and JSON-escape user input before inserting it into JSON prompts. | LLM | SKILL.md:173 |
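The `curl | sh` remediation above can be sketched as a "download, verify, then execute" helper. This is a minimal illustration, not part of the skill or the `infsh` CLI: `verify_then_run` is a hypothetical function, and the checksum must come from a trusted channel (release notes, a signed manifest) rather than the same server that hosts the script.

```shell
#!/bin/sh
# Sketch of replacing `curl ... | sh` with download + integrity check + run.
# `verify_then_run` and the checksum source are assumptions for illustration.
set -eu

# verify_then_run FILE EXPECTED_SHA256
# Executes FILE with sh only if its SHA-256 digest matches EXPECTED_SHA256.
verify_then_run() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" != "$expected" ]; then
        echo "checksum mismatch for $file; refusing to execute" >&2
        return 1
    fi
    sh "$file"
}

# Usage sketch (URL from the skill's quick start; hash is a placeholder
# the vendor would have to publish out of band):
#   curl -fsSL https://cli.inference.sh -o install.sh
#   verify_then_run install.sh "<published-sha256>"
```

Pinning a specific release artifact instead of a mutable URL would further reduce the supply-chain window.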
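The URL-exfiltration finding calls for scheme allow-listing before any user-supplied URL reaches `infsh app run`. A minimal sketch, assuming a wrapper in the skill's own shell examples (`check_url` is a hypothetical helper, not an `infsh` feature):

```shell
#!/bin/sh
# Sketch: accept only http/https URLs, rejecting file:// and other schemes
# before they are interpolated into an infsh command.
set -eu

# check_url URL -> exit 0 for http(s) URLs, non-zero otherwise
check_url() {
    case $1 in
        http://*|https://*) return 0 ;;
        *) echo "rejected URL: $1 (only http/https allowed)" >&2; return 1 ;;
    esac
}

# Usage sketch with a parameter from the skill's examples:
#   check_url "$image_url" && infsh app run bytedance/omnihuman-1-5 ...
```

An allow-list of schemes (rather than a deny-list of `file:///`) is the safer default, since it also blocks schemes like `ftp://` or `data:` that were never intended.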
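For the filename-construction finding, the remediation ("use a dedicated function to sanitize input") can be sketched as a character allow-list instead of the single-substitution `${topic// /_}` pattern. `safe_name` is a hypothetical helper for illustration, not code from the skill:

```shell
#!/bin/sh
# Sketch: map anything outside [A-Za-z0-9._-] to '_' so shell
# metacharacters in a user-influenced topic cannot survive into filenames.
set -eu

# safe_name STRING -> prints a filename-safe version of STRING
safe_name() {
    printf '%s' "$1" | tr -c 'A-Za-z0-9._-' '_'
}

# A topic containing shell metacharacters becomes inert:
topic='latest "AI"; rm -rf / news'
out="content_$(safe_name "$topic").json"
echo "$out"
```

Note that `"$topic"` stays double-quoted throughout, so even before sanitization the metacharacters are never interpreted by the shell; the allow-list then keeps them out of the resulting filename as well.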
Scan History
Embed Code
[SkillShield report badge](https://skillshield.io/report/cf93b2a7f2116438)
Powered by SkillShield