Trust Assessment
python-backend received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include hardcoded database credentials in example code (critical), the skill declaring the 'Bash' permission, which enables arbitrary command execution (high), and deserialization of untrusted data via `pickle.load()` (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Hardcoded database credentials in example code.** The example connection string `postgresql+asyncpg://user:pass@localhost/db` embeds a username and password directly in the code, where they are easily discoverable and exploitable. Even in an example, this sets a poor security precedent. Recommendation: load credentials from environment variables, a secrets manager (e.g. HashiCorp Vault, AWS Secrets Manager), or a dedicated secrets file excluded from version control, and update the example to reflect secure credential handling. | LLM | SKILL.md:56 |
| HIGH | **Skill declares 'Bash' permission, enabling arbitrary command execution.** The skill's manifest explicitly grants the 'Bash' permission, allowing the AI agent to execute arbitrary shell commands. This significantly increases the attack surface for command injection, data exfiltration, and system compromise if the agent is maliciously prompted. Recommendation: remove the permission if it is not strictly necessary for the skill's intended functionality; if it is required, strictly validate and sanitize any user-provided input passed to shell commands. | LLM | SKILL.md:1 |
| HIGH | **Deserialization of untrusted data via `pickle.load()`.** The skill loads a machine learning model using `pickle.load()`. Unpickling data from an untrusted or compromised source can execute arbitrary code, enabling command injection or data exfiltration via a crafted `model.pkl`. Recommendation: prefer safer formats such as ONNX, PMML, JSON, or Protocol Buffers with strict schema validation; if `pickle` must be used, ensure the source of `model.pkl` is absolutely trusted and integrity-checked. | LLM | SKILL.md:181 |
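The credential finding's recommendation can be sketched in a few lines: read the connection string from the environment instead of hardcoding it. This is a minimal illustration, not the skill's actual code; the `DATABASE_URL` variable name and `get_database_url` helper are assumptions.

```python
import os

def get_database_url() -> str:
    # Hypothetical helper: DATABASE_URL is an assumed variable name,
    # not taken from the skill's manifest or code.
    url = os.environ.get("DATABASE_URL")
    if url is None:
        # Fail loudly rather than falling back to a hardcoded default,
        # which would reintroduce the original vulnerability.
        raise RuntimeError("DATABASE_URL is not set")
    return url
```

In a deployed setting the same value would typically come from a secrets manager rather than a plain environment variable, but the principle is identical: the credential never appears in source or version control.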
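For the `pickle.load()` finding, one of the mitigations named above is integrity-checking the model file before unpickling. A minimal sketch, assuming you can pin a SHA-256 digest of the trusted artifact ahead of time (the `load_trusted_model` helper is hypothetical):

```python
import hashlib
import pickle

def load_trusted_model(path: str, expected_sha256: str):
    """Unpickle a model only if the file matches a pinned SHA-256 digest.

    Note: pickle still executes arbitrary code on load; this check only
    detects tampering with a file whose trusted digest is already known.
    """
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"model file digest mismatch: {digest}")
    return pickle.loads(data)
```

Where the model format allows it, switching to a schema-validated format such as ONNX or JSON removes the code-execution risk entirely, which is why the finding recommends that first.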
Powered by SkillShield