Trust Assessment
prop-extractor received a trust score of 81/100, placing it in the Mostly Trusted category. The skill passed most security checks, with a small number of issues noted below.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include a Direct Shell Command Execution Instruction and potential Cross-Site Scripting (XSS) in generated HTML output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Direct Shell Command Execution Instruction.** The skill explicitly instructs the agent to execute a shell command `open {文件路径}` to display a generated HTML file. This is a direct command injection vulnerability within the skill's instructions. If the host LLM follows this instruction from the untrusted skill content, it could lead to arbitrary command execution on the host system, especially if the `{文件路径}` variable can be influenced by untrusted input. Executing external commands directly from skill instructions poses a significant security risk. Avoid direct shell command execution instructions within skill definitions. If displaying a file is necessary, the skill should return the file content or a path to the file, allowing the agent's secure environment or the user to handle the display. Ensure all variables used in file paths are strictly sanitized and validated to prevent path traversal or command injection. | LLM | SKILL.md:280 |
| MEDIUM | **Potential Cross-Site Scripting (XSS) in Generated HTML Output.** The skill generates an HTML file using placeholders such as `{{PROJECT_NAME}}`, `{{GALLERY_SECTIONS}}`, and `{{PROP_DATA_JSON}}`. If these placeholders are populated directly from untrusted user input without proper HTML escaping, it could lead to Cross-Site Scripting (XSS) or HTML injection in the generated file. A malicious user could inject `<script>` tags or other HTML elements, which would execute when the generated HTML is viewed, potentially leading to data theft, defacement, or other client-side attacks. This represents a form of prompt injection where untrusted input manipulates the LLM's output generation to produce malicious content. Ensure all user-provided or untrusted data inserted into the HTML template is properly HTML-escaped before insertion. For JSON data, ensure it is correctly JSON-encoded and then embedded securely within a `<script type='application/json'>` block or similar, rather than directly into executable JavaScript code. | LLM | SKILL.md:260 |
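The HIGH finding's remediation (validate file paths rather than interpolating them into a shell command) can be sketched as follows. This is a minimal illustration, not code from the skill itself: the `safe_display_path` helper and the `ALLOWED_DIR` output directory are hypothetical names chosen for the example.

```python
from pathlib import Path

# Hypothetical directory where the skill writes its generated HTML.
ALLOWED_DIR = Path("/tmp/prop-extractor-output").resolve()

def safe_display_path(user_path: str) -> Path:
    """Resolve a candidate path and reject anything outside the allowed
    directory, instead of interpolating it into a shell command such as
    `open {path}`."""
    # Path("/a") / "/etc/passwd" yields Path("/etc/passwd"), so absolute
    # inputs are also caught by the containment check below.
    candidate = (ALLOWED_DIR / user_path).resolve()
    # is_relative_to (Python 3.9+) blocks traversal like "../../etc/passwd".
    if not candidate.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return candidate
```

The agent can then return the validated path (or the file's contents) to the user instead of executing a display command on their behalf.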
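The MEDIUM finding's remediation (HTML-escape untrusted text; embed data as inert JSON rather than executable JavaScript) can be sketched like this. The `render_report` function and its template are hypothetical stand-ins for the skill's `{{PROJECT_NAME}}` and `{{PROP_DATA_JSON}}` placeholders.

```python
import html
import json

# Simplified stand-in for the skill's HTML template.
TEMPLATE = (
    "<h1>{project_name}</h1>\n"
    '<script type="application/json" id="prop-data">{prop_data_json}</script>'
)

def render_report(project_name: str, prop_data: dict) -> str:
    """Escape untrusted text and embed structured data as inert JSON."""
    return TEMPLATE.format(
        # html.escape neutralises <script> tags and other markup in text.
        project_name=html.escape(project_name),
        # JSON-encode the data, then escape "</" so a "</script>" inside a
        # string value cannot terminate the JSON block early.
        prop_data_json=json.dumps(prop_data).replace("</", "<\\/"),
    )
```

Because the JSON lands in a `type="application/json"` block, the browser never executes it; page scripts would read it back with `JSON.parse` on the element's text content.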
Full report: [skillshield.io/report/bedbbb63a4f873e0](https://skillshield.io/report/bedbbb63a4f873e0)