Security Audit
mrdulasolutions/exchekskills:exchek-docx
github.com/mrdulasolutions/exchekskills

Trust Assessment
mrdulasolutions/exchekskills:exchek-docx received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 low severity. Key findings include a potential command injection via an unsanitized file path, an unpinned npm dependency version, and a missing Node lockfile.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 18, 2026 (commit c49adb39). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Unsanitized File Path.** The skill instructs the agent to execute a `node` command with a user-provided file path (`<full-path-to-report.md>`). If the agent inserts untrusted, unsanitized input directly into this command, a malicious user could craft a path containing shell metacharacters (e.g., `;`, `&`, `\|`, `$()`) to execute arbitrary commands on the host system. *Remediation:* validate and sanitize all user-provided file paths before using them in shell commands; use an execution environment that escapes or quotes arguments so the shell never interprets them, or, where possible, use a programmatic API for file operations instead of shelling out. | LLM | SKILL.md:59 |
| HIGH | **Potential Data Exfiltration via Arbitrary File Read.** The `report-to-docx.mjs` script reads the markdown file at `process.argv[2]`, a user-provided path. A malicious user could point it at a sensitive file (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `/app/secrets.env`); the script would read its content, write it into a `.docx` in the same directory, and `SKILL.md` then instructs the agent to make that `.docx` available to the user, creating a clear data-exfiltration path. *Remediation:* restrict accessible paths to an allowed, sandboxed directory (e.g., a temporary user-specific directory); reject directory traversal sequences (e.g., `../`); never read arbitrary filesystem paths derived from untrusted input. | LLM | scripts/report-to-docx.mjs:13 |
| MEDIUM | **Unpinned npm dependency version.** The `docx` dependency is not pinned to an exact version (`^9.6.1`). *Remediation:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | exchek-docx/scripts/package.json |
| LOW | **Node lockfile missing.** `package.json` is present, but no lockfile (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`) was found. *Remediation:* commit a lockfile for deterministic dependency resolution. | Dependencies | exchek-docx/scripts/package.json |
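The MEDIUM and LOW findings can be addressed together. A hypothetical `scripts/package.json` with the `docx` version pinned exactly (the caret removed) might look like this; the `name` field is illustrative:

```json
{
  "name": "exchek-docx-scripts",
  "private": true,
  "dependencies": {
    "docx": "9.6.1"
  }
}
```

Running `npm install` against this manifest generates a `package-lock.json`; committing that lockfile gives deterministic dependency resolution, which resolves the LOW finding.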
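The CRITICAL finding's remediation can be sketched in Node: `execFile` takes arguments as an array and spawns the child without a shell, so metacharacters in the path are never interpreted. This is a minimal illustration, not the skill's actual code; the `convertReport` helper name is hypothetical.

```javascript
// Mitigation sketch: invoke the conversion script via execFile's argv
// array instead of interpolating the user path into a shell string.
import { execFileSync } from "node:child_process";

// Hypothetical wrapper; reportPath is untrusted user input.
function convertReport(reportPath) {
  // No shell is involved: `;`, `|`, `$()` in reportPath stay literal.
  return execFileSync(
    process.execPath,
    ["scripts/report-to-docx.mjs", reportPath],
    { encoding: "utf8" }
  );
}

// Demonstration that argv entries survive unmodified:
// `node -p process.argv[1]` simply echoes its first argument.
const hostile = "report.md; rm -rf /tmp/x $(whoami)";
const echoed = execFileSync(
  process.execPath,
  ["-p", "process.argv[1]", hostile],
  { encoding: "utf8" }
).trim();
```

By contrast, `exec("node scripts/report-to-docx.mjs " + reportPath)` would hand the whole string to a shell, which is exactly the injection vector the finding describes.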
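For the HIGH finding, the usual mitigation is to resolve the user-supplied path against an allowed base directory and reject anything that escapes it. A minimal sketch, assuming a Node environment; the function name and directory layout are illustrative (symlinks would additionally require an `fs.realpath` check):

```javascript
// Confine an untrusted input path to an allowed base directory.
import path from "node:path";

function resolveSafeInputPath(baseDir, userPath) {
  const base = path.resolve(baseDir);
  const resolved = path.resolve(base, userPath);
  // Reject traversal out of the base directory (e.g. via "../").
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error(`Path escapes allowed directory: ${userPath}`);
  }
  return resolved;
}
```

In `report-to-docx.mjs`, such a check would sit in front of the file read, e.g. `readFile(resolveSafeInputPath(allowedDir, process.argv[2]))`, so that `/etc/passwd` or `../../.ssh/id_rsa` is rejected before any content is touched.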
Full report: https://skillshield.io/report/8f0c844db73f1dae