Trust Assessment
agent-selfie received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 0 critical, 1 high, 3 medium, and 1 low severity. Key findings include "Arbitrary File Read via Personality Argument" (high), "Suspicious import: urllib.request", and "Node lockfile missing".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read via Personality Argument.** The `load_personality` function in `scripts/selfie.py` allows the `--personality` command-line argument to specify a file path, whose content the script then reads via `path.read_text()`. This lets an attacker read any file the script has permission to access (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, or other sensitive configuration files). Although the script attempts to parse the file content as JSON, reading the file at all constitutes a data-exfiltration risk. *Remediation:* restrict `--personality` to inline JSON strings, or enforce strict validation so file paths stay within a designated, non-sensitive, sandboxed directory. If reading arbitrary files is a necessary feature, document the security implications clearly and run the skill with the minimum file-system permissions required. | LLM | scripts/selfie.py:101 |
| MEDIUM | **Suspicious import: urllib.request.** Import of `urllib.request` detected. This module provides network access, and network or low-level system modules in skill code may indicate data exfiltration. Verify that this import is necessary. | Static | skills/iisweetheartii/agent-selfie/scripts/selfie.py:12 |
| MEDIUM | **Prompt Injection via User-Controlled Personality Fields.** The `build_prompt` function interpolates the user-controlled `style` and `vibe` fields from the personality configuration directly into the prompt string sent to the Gemini API. A malicious user could embed adversarial instructions in these fields, potentially manipulating the LLM's behavior, producing unintended image generation, or extracting information from the LLM's context. *Remediation:* sanitize or validate the `style` and `vibe` fields to remove or escape potentially harmful characters or phrases before they reach the prompt. Consider a templating approach that strictly separates user input from prompt structure, or apply LLM-specific prompt-hardening techniques. | LLM | scripts/selfie.py:119 |
| MEDIUM | **Cross-Site Scripting (XSS) in Generated HTML Gallery.** The `write_gallery` function inserts the `prompt` string into a `<figcaption>` element of the generated `gallery.html` without HTML escaping. Because `prompt` includes the user-controlled `style` and `vibe` fields, a malicious user can inject HTML tags or JavaScript that execute when the gallery is opened in a browser. *Remediation:* escape all user-controlled content, specifically `it["prompt"]`, before inserting it into the HTML output, e.g. with `html.escape` from Python's `html` module, which converts `<`, `>`, `&`, `"`, and `'` into their HTML entities. | LLM | scripts/selfie.py:170 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). *Remediation:* commit a lockfile for deterministic dependency resolution. | Dependencies | skills/iisweetheartii/agent-selfie/package.json |
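The high-severity finding's remediation (constraining `--personality` paths to a sandboxed directory) can be sketched as follows. This is a minimal illustration, not the skill's actual code: `safe_read_personality` and the `ALLOWED_DIR` location are hypothetical names, and the check assumes Python 3.9+ for `Path.is_relative_to`.

```python
from pathlib import Path

# Hypothetical sandbox directory for personality files.
ALLOWED_DIR = Path("personalities").resolve()

def safe_read_personality(arg: str) -> str:
    """Read a personality file only if it resolves inside ALLOWED_DIR."""
    path = (ALLOWED_DIR / arg).resolve()
    # resolve() collapses "..", so traversal like "../../etc/passwd"
    # (and absolute paths) ends up outside the sandbox and is rejected.
    if not path.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"personality path escapes sandbox: {arg}")
    return path.read_text()
```

Resolving *before* comparing is the key step: a naive string-prefix check on the raw argument would miss `..` segments and symlinks.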
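For the prompt-injection finding, one conservative mitigation is an allowlist validator on the `style` and `vibe` fields before they are interpolated into the prompt. A minimal sketch, assuming a hypothetical `validate_field` helper and an illustrative character allowlist (the actual policy would depend on what legitimate personality values look like):

```python
import re

# Hypothetical allowlist: short strings of letters, digits, spaces,
# commas, periods, and hyphens; rejects braces, quotes, newlines, etc.
SAFE_FIELD = re.compile(r"[A-Za-z0-9 ,.\-]{1,80}")

def validate_field(name: str, value: str) -> str:
    """Reject personality field values containing disallowed characters."""
    if not SAFE_FIELD.fullmatch(value):
        raise ValueError(f"{name} contains disallowed characters")
    return value
```

An allowlist is preferable to a blocklist here: enumerating every adversarial phrase is impractical, while constraining the input to a small known-safe alphabet is easy to reason about.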
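The XSS remediation is a one-line fix with the standard library's `html.escape`. A minimal sketch (`render_caption` is a hypothetical stand-in for the caption-building step in `write_gallery`):

```python
import html

def render_caption(prompt: str) -> str:
    # html.escape converts <, >, &, " and ' into HTML entities,
    # so user-controlled text cannot break out of the element.
    return f"<figcaption>{html.escape(prompt)}</figcaption>"
```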
Scan History
Embed Code
[SkillShield trust report](https://skillshield.io/report/502640aff31c0987)
Powered by SkillShield