Security Audit
temporal-python-testing
github.com/sickn33/antigravity-awesome-skills
Trust Assessment
temporal-python-testing received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The single high-severity finding is an untrusted instruction to load local files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Untrusted instruction to load local files | LLM | SKILL.md:20 |

The skill's `SKILL.md` contains direct instructions for the host LLM to "open" or "load" specific `.md` files from the `resources/` directory. Because the entire `SKILL.md` content is treated as untrusted input, these instructions represent a prompt injection attempt. The currently referenced files are internal and appear benign, but the pattern demonstrates that the skill can instruct the LLM to perform file system operations based on untrusted input. If the underlying file access tool is not strictly sandboxed to predefined, safe paths, a malicious actor could manipulate these instructions to read arbitrary files (data exfiltration) or, if the tool permits path traversal or command execution, to run commands (command injection).

Recommendation: implement strict sandboxing for any file access tools used by the LLM, ensuring they can only access whitelisted paths or are explicitly invoked by trusted code. Avoid embedding direct instructions for file operations within untrusted skill content. Instead, define required resources and their loading mechanisms explicitly in a trusted manifest or skill definition, allowing the agent's runtime to handle them securely.
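To make the recommended mitigation concrete, here is a minimal sketch of how an agent runtime might gate file reads requested by untrusted skill content behind a path whitelist. The names (`ALLOWED_ROOTS`, `load_skill_resource`) and the resources directory path are illustrative assumptions, not part of this skill or of SkillShield's tooling.

```python
import os

# Hypothetical whitelist: only the skill's own resources directory may be read.
# The path below is an assumption about the repository layout, not a constraint
# published by the skill or by SkillShield.
ALLOWED_ROOTS = (
    os.path.realpath("skills/temporal-python-testing/resources"),
)


def load_skill_resource(requested_path: str) -> str:
    """Read a file requested by untrusted skill content, but only if the
    resolved path stays inside one of the whitelisted roots."""
    resolved = os.path.realpath(requested_path)  # collapses ".." and symlinks
    inside_whitelist = any(
        resolved == root or resolved.startswith(root + os.sep)
        for root in ALLOWED_ROOTS
    )
    if not inside_whitelist:
        raise PermissionError(f"access outside whitelisted paths: {requested_path!r}")
    with open(resolved, "r", encoding="utf-8") as fh:
        return fh.read()


# A traversal attempt such as "resources/../../../../etc/passwd" resolves to a
# path outside ALLOWED_ROOTS and is rejected before any file is opened.
```

Because `os.path.realpath` resolves `..` segments and symlinks before the check, traversal attempts are caught at resolution time; the same guard can be applied to any file-access tool the LLM is permitted to invoke.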
[View the full SkillShield report](https://skillshield.io/report/97fc783e2399af4b)
Powered by SkillShield