Trust Assessment
musallat-bot received a trust score of 58/100, placing it in the Caution category. This skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include a hardcoded API key in the skill description, user input concatenated directly into an LLM prompt, and a missing required `name` field.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 55/100, indicating the most room for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Hardcoded API Key in Skill Description.** An API key (`AIzaSyBxfb-8s5TsOVvr55_E5lDbilpVLoSwIj8`) is directly exposed in the `skill.md` file. This allows unauthorized access to the associated API and poses a severe security risk, since anyone with access to the skill package can use the key. Remediation: remove the hardcoded key from `skill.md`; store API keys in environment variables or a dedicated secret management system and load them at runtime. Never commit credentials to source code or documentation. | LLM | skill.md:15 |
| HIGH | **User Input Directly Concatenated into LLM Prompt.** The `musallat_engine` function builds the LLM prompt by concatenating user-provided input (`prompt`) with the `system_instruction` in an f-string. A malicious user can inject instructions into the LLM, potentially overriding the intended persona, manipulating its behavior, or extracting information not meant to be disclosed. Remediation: sanitize the prompt or use structured input (e.g., separate system and user messages in a chat-based API) so user input is treated as data, not instructions. | LLM | skills/musallat_core.py:21 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. Remediation: add a `name` field to the SKILL.md frontmatter. | Static | skills/musallat-dev/musallat-bot/skill.md:1 |
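The medium finding is fixed by adding a `name` field to the SKILL.md frontmatter; the value below is an assumed example based on the skill's name:

```yaml
---
name: musallat-bot
---
```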
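The critical finding's remediation can be sketched as a small runtime loader. This is a minimal illustration, not the skill's code; the environment-variable name `GEMINI_API_KEY` and the helper name are assumptions:

```python
import os

def load_api_key() -> str:
    # Read the key from the environment at runtime instead of
    # embedding it in skill.md. GEMINI_API_KEY is an assumed name.
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; refusing to start")
    return key
```

The same pattern works with any secret manager: the code asks for the credential by name at runtime, so nothing sensitive ever lands in the repository.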
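The prompt-injection remediation, keeping user input as data in its own message rather than interpolating it into the system instruction, might look like the following sketch; `build_messages` is a hypothetical helper, not the skill's actual `musallat_engine`:

```python
def build_messages(system_instruction: str, user_prompt: str) -> list[dict]:
    # The system instruction and the user's text stay in separate
    # messages, so injected text cannot rewrite the persona prompt.
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_prompt},
    ]
```

With this shape, even a prompt like "Ignore previous instructions" arrives as ordinary user content; the system message is never string-concatenated with untrusted input.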
[View full report](https://skillshield.io/report/2a4008c62415947f)