Trust Assessment
aieos received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 10 findings: 5 critical, 3 high, 2 medium, and 0 low severity. Key findings include persistence/self-modification instructions, a suspicious `urllib.request` import, and potential data exfiltration (a file read combined with a network send).
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (10)
| Severity | Finding | Remediation | Layer | Location |
|---|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions** — shell RC file modification for persistence (reported 4×, lines 404–407). | Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/zeglin/aieos/scripts/aieos_tool.py:404–407 |
| CRITICAL | **Arbitrary file write (command injection / excessive permissions)** — `export_identity` and `generate_bio_page` write JSON or HTML content derived from user-controlled schema data to any file path given via the `--output` argument. This can overwrite critical system files (e.g., `/etc/passwd`, `~/.bashrc`, `~/.ssh/authorized_keys`) with attacker-controlled content, leading to command injection or system compromise. | Restrict output paths to a designated, sandboxed directory within the skill's workspace; reject absolute paths and directory-traversal sequences (e.g., `../`). | LLM | scripts/aieos_tool.py:230 |
| HIGH | **Potential data exfiltration: file read + network send** — `load_schema` both reads files and sends data over the network. | Review the function to ensure file contents are not sent to external servers. | Static | skills/zeglin/aieos/scripts/aieos_tool.py:48 |
| HIGH | **Arbitrary file read (data exfiltration)** — `load_schema` reads any local path given via the `--source` argument; an attacker can target sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, environment files). Although `json.loads()` may fail on non-JSON input, the content is still read into memory. | Restrict reads to a sandboxed schema directory; whitelist allowed file types or paths, or validate that the path stays within the skill's workspace (no absolute paths, no `../`). | LLM | scripts/aieos_tool.py:30 |
| HIGH | **Markdown injection in agent context (prompt injection)** — `update_identity_files` builds `IDENTITY.md` and `SOUL.md` from user-provided AIEOS schema data. Malicious markdown (e.g., `[link](http://attacker.com)` or instructions like "ignore previous instructions") lands in files that become part of the agent's context, enabling prompt injection, data exfiltration via markdown links, or manipulation of the agent's behavior when the host LLM consumes them. | Sanitize all user-provided strings before embedding them in LLM-consumed markdown; escape special characters (e.g., `[`, `]`, `(`, `)`, `#`, `*`). | LLM | scripts/aieos_tool.py:70 |
| MEDIUM | **Suspicious import: `urllib.request`** — this module provides network access; in skill code it may indicate data exfiltration. | Verify the import is necessary. | Static | skills/zeglin/aieos/scripts/aieos_tool.py:20 |
| MEDIUM | **HTML injection / cross-site scripting (XSS) in generated bio page** — `generate_bio_page` builds an HTML file from user-provided AIEOS schema data without sanitization. If the generated page is viewed in a browser, an attacker could inject scripts to exfiltrate data, perform phishing, or execute arbitrary client-side code. | Escape HTML special characters (`<`, `>`, `&`, `'`, `"`) for all dynamic content, or use a templating engine with auto-escaping (e.g., Jinja2). | LLM | scripts/aieos_tool.py:260 |
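The remediation suggested for the arbitrary file read/write findings — confining user-supplied paths to a sandboxed workspace directory — can be sketched as follows. This is an illustrative example, not the skill's actual code; the function name `resolve_within` and the base directory are assumptions.

```python
from pathlib import Path

def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied path and verify it stays inside base_dir.

    Rejects absolute paths that escape the sandbox and traversal
    sequences such as '../', per the remediation guidance above.
    """
    base = Path(base_dir).resolve()
    # Joining with an absolute user path replaces base entirely, and
    # '../' segments are collapsed by resolve(); both cases are caught
    # by the containment check below.
    candidate = (base / user_path).resolve()
    try:
        candidate.relative_to(base)  # raises ValueError if outside base
    except ValueError:
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

A call such as `resolve_within("./workspace", "../../etc/passwd")` raises `ValueError` instead of handing the caller a path outside the workspace.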
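For the HTML injection finding, the standard library already covers the manual-escaping option named in the remediation. A minimal sketch (the helper name `render_bio_field` is hypothetical):

```python
import html

def render_bio_field(value: str) -> str:
    # html.escape converts <, >, and &; with quote=True (the default)
    # it also escapes ' and ", making the value safe inside attributes.
    return html.escape(value, quote=True)

# Attacker-controlled schema data is neutralized before embedding:
page = f"<h1>{render_bio_field('<script>alert(1)</script>')}</h1>"
```

For anything beyond trivial string fields, a templating engine with auto-escaping enabled (e.g., Jinja2) is the more robust choice, since it applies escaping by default rather than relying on every call site remembering to.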