Trust Assessment
The crm skill received a trust score of 65/100, placing it in the Caution category: it carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 9 findings: 0 critical, 9 high, 0 medium, and 0 low severity. Key findings include YAML injection via unsanitized user input in generated frontmatter, markdown injection via unsanitized user input in generated output, and arbitrary file writes leading to data exfiltration.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100; all nine findings come from this layer.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (9)
The nine findings correspond to four distinct issues; each issue is listed once below with every affected location.

**HIGH: YAML Injection via unsanitized user input in generated frontmatter**
Layer: LLM · Locations: `scripts/crm-add.py:79`, `scripts/crm-import.py:200`

The `crm-add.py` and `crm-import.py` scripts construct YAML frontmatter for new contact files using f-strings, directly embedding user-provided values (e.g., `name`, `company`, `role`, `tags`). If a malicious user (via a compromised LLM) provides input containing YAML control characters (newlines, colons, or reserved keywords), arbitrary YAML can be injected into the frontmatter. This could alter the intended schema, introduce malicious fields, or trigger unexpected behavior in downstream YAML parsers. Although `yaml.safe_load` is used for reading, the generated structure itself can be compromised.

Remediation: when generating YAML, serialize user-provided string values with a YAML library (e.g., `yaml.dump`) rather than direct f-string interpolation, and route every user-controlled field through that serialization.
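As a minimal sketch of this remediation, assuming PyYAML and Python 3.9+; the helper name and exact field set are illustrative, not taken from the skill's code:

```python
import yaml

def build_frontmatter(name: str, company: str, role: str, tags: list[str]) -> str:
    """Serialize user-controlled contact fields into YAML frontmatter.

    yaml.dump() quotes or escapes values containing colons, newlines,
    or other YAML syntax, so a hostile value cannot smuggle a new key
    into the mapping the way f-string interpolation would.
    """
    fields = {"name": name, "company": company, "role": role, "tags": tags}
    body = yaml.dump(fields, default_flow_style=False, sort_keys=False)
    return f"---\n{body}---\n"

# A hostile value is emitted as one escaped scalar, not a new mapping key:
print(build_frontmatter("Alice\nadmin: true", "Acme", "CTO", ["vip"]))
```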
**HIGH: Markdown Injection via unsanitized user input in generated output/content**
Layer: LLM · Locations: `scripts/crm-index.py:100`, `scripts/crm-query.py:300`, `scripts/crm-remind.py:200`, `scripts/crm-update.py:130`

Several scripts directly embed user-provided contact details (e.g., `name`, `company`, `role`, `interaction`, `note`, `message`) into generated markdown files or console output using f-strings. If a malicious user (via a compromised LLM) provides input containing markdown formatting (e.g., `[link](javascript:alert(1))` or `**bold**`), that markdown is rendered when the output is displayed by a markdown viewer or the LLM itself. This could lead to visual defacement, phishing attempts, or cross-site scripting (XSS) if the rendering environment supports it.

Remediation: when embedding user-provided strings into markdown, escape markdown special characters (e.g., `*`, `_`, `[`, `]`, `(`, `)`, `#`, `>`), for example by replacing `[` with `\[`.
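A hedged sketch of the suggested escaping; the helper name is illustrative, and the exact character set to escape is a judgment call beyond the characters listed in the finding:

```python
import re

# Metacharacters listed in the finding, plus backslash, backtick, and "!"
# (image syntax) so the escapes themselves cannot be undone or turned
# into code spans.
_MD_SPECIALS = re.compile(r"([\\`*_\[\]()#>!])")

def escape_md(value: str) -> str:
    """Backslash-escape markdown metacharacters in a user-supplied string."""
    return _MD_SPECIALS.sub(r"\\\1", value)

# "[link](javascript:alert(1))" now renders as literal text, not a link:
print(f"- Contact: {escape_md('[link](javascript:alert(1))')}")
```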
**HIGH: Arbitrary file write leading to data exfiltration**
Layer: LLM · Locations: `scripts/crm-export.py:80`, `scripts/crm-index.py:220`

The `crm-export.py` and `crm-index.py` scripts allow users to specify an arbitrary output file path for exported data (`--csv`, `--vcard`, and `--markdown` for `crm-export.py`; `--output` for `crm-index.py`). A compromised LLM could instruct these scripts to write sensitive CRM data to a location outside the skill's intended data directory, such as a web-accessible directory (`/var/www/html/`) or a network share, facilitating data exfiltration.

Remediation: restrict output paths to a designated, secure directory within the skill's sandbox; reject paths containing directory traversal sequences (`..`) and absolute paths outside the allowed scope (see the sketch below).
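A minimal sketch of the shared remediation for this finding and the arbitrary-file-read finding below, assuming Python 3.9+ for `Path.is_relative_to`; the `DATA_ROOT` location and helper name are illustrative assumptions:

```python
from pathlib import Path

# Illustrative data root; the skill's real directory may differ.
DATA_ROOT = Path("~/.crm/data").expanduser().resolve()

def confine(user_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything outside DATA_ROOT.

    resolve() collapses ".." segments and symlinks before the containment
    check, and joining an absolute user path replaces DATA_ROOT entirely,
    so both "../../etc/passwd" and "/var/www/html/out.csv" are rejected.
    """
    candidate = (DATA_ROOT / user_path).resolve()
    if not candidate.is_relative_to(DATA_ROOT):
        raise ValueError(f"path escapes the CRM data directory: {user_path}")
    return candidate

confine("exports/contacts.csv")    # OK: stays under DATA_ROOT
# confine("../../etc/passwd")      # would raise ValueError
```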
**HIGH: Arbitrary file read leading to data exfiltration**
Layer: LLM · Location: `scripts/crm-import.py:100`

The `crm-import.py` script reads content from a user-specified file path (e.g., `contacts.csv`, `contacts.vcf`). A compromised LLM could instruct it to "import" sensitive system files (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`). Although the script attempts to parse such files as contact data, it would still read their content and could store parts of it within the CRM's markdown files, making the sensitive data accessible to subsequent exfiltration or viewing through other CRM functions.

Remediation: restrict input file paths to a designated, secure directory within the skill's sandbox; the same path-confinement check sketched above applies to import paths.
Embed Code
[](https://skillshield.io/report/f824f7851540892b)