Trust Assessment
case-study-writing received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Excessive Bash Permissions for 'infsh' commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Excessive Bash permissions for `infsh` commands | LLM | SKILL.md:2 |

The skill declares `Bash(infsh *)` as an allowed tool, granting the LLM permission to execute any command beginning with `infsh`. This wildcard permission is overly broad and significantly increases the attack surface. For example, the skill itself demonstrates `infsh app run infsh/python-executor`, which executes arbitrary Python code; if a malicious user prompt tricked the LLM into generating and executing arbitrary Python code via this tool, the result would be critical command injection.

Recommendation: Restrict Bash permissions to only the specific `infsh` commands and arguments absolutely necessary for the skill's functionality, and avoid wildcard permissions like `infsh *`. If `infsh/python-executor` is truly required, implement strict input validation and sanitization to prevent arbitrary code execution, or consider whether a less powerful tool can achieve the same outcome.
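The input-validation remediation above can be sketched as a command allowlist checked before any `infsh` invocation. This is an illustrative sketch, not part of the skill: the `ALLOWED_PREFIXES` entries are hypothetical subcommands, and the exact `infsh` commands a real skill needs would differ.

```python
import shlex

# Hypothetical allowlist of infsh subcommand prefixes the skill actually needs.
# Note: the broad "infsh app run ..." (e.g. python-executor) is deliberately absent.
ALLOWED_PREFIXES = [
    ("infsh", "app", "list"),
    ("infsh", "app", "info"),
]

def is_allowed(command: str) -> bool:
    """Return True only if the command starts with an allowlisted infsh prefix."""
    parts = tuple(shlex.split(command))
    return any(
        len(parts) >= len(prefix) and parts[: len(prefix)] == prefix
        for prefix in ALLOWED_PREFIXES
    )
```

With this gate, `is_allowed("infsh app list")` passes while `is_allowed("infsh app run infsh/python-executor")` and arbitrary shell commands are rejected, which is the narrowing the finding recommends over `Bash(infsh *)`.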
Embed Code
[SkillShield trust report badge](https://skillshield.io/report/0e748cc23d90b981)