Trust Assessment
excel-automation received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key findings are "Potential Command Injection via User-Controlled VBA Macro Execution" and "Excessive Permissions and Arbitrary File Operations".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User-Controlled VBA Macro Execution.** The skill explicitly states it will "generate xlwings code and execute it," and provides examples of running VBA macros via `wb.macro('MacroName')()`. If the macro name or its arguments are derived from untrusted user input, a malicious user could instruct the LLM to execute a harmful VBA macro. VBA macros can execute system commands, manipulate files, and make network requests, so this amounts to arbitrary command execution on the host system. The risk follows directly from the `code_execution` tool combined with the power of the `xlwings` library. Mitigation: strictly validate and sanitize any user-provided input used for macro names or arguments; prefer an allow-list of permitted macro names and restrict the LLM's ability to generate arbitrary macro names from user input; where possible, execute macros in a sandboxed environment or with reduced permissions. | LLM | SKILL.md:160 |
| HIGH | **Excessive Permissions and Arbitrary File Operations.** The skill declares the `computer`, `code_execution`, and `file_operations` tools in its manifest. Its examples demonstrate extensive file-system interaction, including opening arbitrary Excel files (`xw.Book('path/to/file.xlsx')`, `app.books.open(str(file))`) and saving files to user-specified paths (`summary_wb.save(output_path)`). If file paths or names are influenced by untrusted user input, a malicious user could instruct the LLM to read, overwrite, or delete sensitive files on the system, or to open malicious Excel files; the combination of these broad permissions poses a significant risk. Mitigation: strictly validate and sanitize all user-provided file paths and names; restrict file operations to specific allow-listed directories or file types; avoid letting the LLM construct arbitrary file paths from untrusted input; consider sandboxing file operations. | LLM | SKILL.md:20 |
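The mitigations recommended for both findings can be sketched as small validation helpers placed between user input and the `xlwings` calls. This is a minimal illustration, not part of the skill itself: the allow-list contents (`ALLOWED_MACROS`, `ALLOWED_ROOT`) and helper names are hypothetical, and the actual policy would depend on the deployment.

```python
from pathlib import Path

# Hypothetical policy -- the skill defines no allow-lists of its own.
ALLOWED_MACROS = {"RefreshPivots", "FormatReport"}
ALLOWED_ROOT = Path("/data/workbooks").resolve()
ALLOWED_SUFFIXES = {".xlsx", ".xlsm"}


def validate_macro_name(name: str) -> str:
    """Reject any macro name not on the explicit allow-list."""
    if name not in ALLOWED_MACROS:
        raise ValueError(f"macro {name!r} is not allow-listed")
    return name


def validate_workbook_path(user_path: str) -> Path:
    """Resolve a user-supplied path and confine it to ALLOWED_ROOT."""
    p = (ALLOWED_ROOT / user_path).resolve()
    # Resolving first defeats '../' traversal; then require containment.
    if ALLOWED_ROOT != p and ALLOWED_ROOT not in p.parents:
        raise ValueError(f"path {p} escapes {ALLOWED_ROOT}")
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError("only Excel workbooks are allowed")
    return p


# Usage with xlwings (not executed here):
# import xlwings as xw
# wb = xw.Book(str(validate_workbook_path(user_supplied_path)))
# wb.macro(validate_macro_name(user_supplied_macro))()
```

Validating before the call, rather than sanitizing after, means a rejected input never reaches Excel at all; it does not remove the need for sandboxing, since an allow-listed macro could still misbehave.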
[Full report](https://skillshield.io/report/5c060cbb228f9719)
Powered by SkillShield