Security Audit
azure-storage-file-share-py
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
azure-storage-file-share-py received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. The key findings are potential shell command execution via installation instructions, and potential arbitrary local file access leading to data exfiltration or tampering.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Potential shell command execution via installation instructions.** The skill documentation includes a `pip install` command. If an LLM agent interprets this as an instruction to execute in a shell environment, it could lead to command injection. Although this is an installation step, executing arbitrary shell commands from untrusted skill content is a security risk. Recommendation: the agent should not directly execute `pip install` commands found in skill documentation, and agent execution environments should be sandboxed so that shell commands embedded in documentation are never run automatically. If dependencies are needed, declare them in a `requirements.txt` or similar, to be installed by a trusted system (see the dependency-check sketch below the table). | LLM | SKILL.md:8 |
| MEDIUM | **Potential for arbitrary local file access and data exfiltration/tampering.** The skill documentation provides examples for uploading files from, and downloading files to, the local filesystem using `open()`. If an LLM agent generates code based on these examples with file paths derived from untrusted input, it could read arbitrary local files and upload them to Azure Storage (data exfiltration), or write to arbitrary local files (data tampering/corruption). Recommendation: when generating code that interacts with the local filesystem, strictly validate file paths and constrain them to authorized directories, or obtain explicit user consent for file operations; never allow untrusted input to directly specify file paths, and implement robust sandboxing for filesystem access (see the path-validation sketch below the table). | LLM | SKILL.md:107 |
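For the first finding, one mitigation is to have the agent verify that the dependency is already present rather than shelling out to `pip`. The sketch below is a minimal illustration, not part of the skill: the `ensure_dependency` helper name is hypothetical, and only standard-library calls (`importlib.metadata`) are used.

```python
# Minimal sketch: check that azure-storage-file-share is already installed
# instead of executing `pip install` from skill documentation.
# ensure_dependency is a hypothetical helper name for illustration.
from importlib.metadata import PackageNotFoundError, version

REQUIRED_PACKAGE = "azure-storage-file-share"

def ensure_dependency() -> None:
    """Fail closed if the dependency is missing; never shell out to pip."""
    try:
        installed = version(REQUIRED_PACKAGE)
    except PackageNotFoundError:
        raise RuntimeError(
            f"{REQUIRED_PACKAGE} is not installed. Declare it in "
            "requirements.txt and install it via a trusted build step, "
            "not from the agent."
        )
    print(f"{REQUIRED_PACKAGE} {installed} is available")

if __name__ == "__main__":
    ensure_dependency()
```

The design point is to keep installation in a trusted pipeline (for example, `pip install -r requirements.txt` run by CI or the user), so the agent only ever reads the skill documentation and never executes shell commands found in it.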
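For the second finding, the usual mitigation is to resolve every user-supplied path against a fixed sandbox root and reject anything that escapes it. In the sketch below, `ALLOWED_ROOT`, `safe_local_path`, and `upload_within_sandbox` are hypothetical names chosen for illustration; `ShareFileClient.from_connection_string` and `upload_file` are real azure-storage-file-share APIs.

```python
# Minimal sketch: constrain local file access to a sandbox directory before
# uploading to an Azure file share. ALLOWED_ROOT is an assumed sandbox root;
# requires Python 3.9+ for Path.is_relative_to().
from pathlib import Path

from azure.storage.fileshare import ShareFileClient

ALLOWED_ROOT = Path("/srv/agent-workspace").resolve()  # assumed sandbox root

def safe_local_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside ALLOWED_ROOT."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes sandbox: {user_path!r}")
    return candidate

def upload_within_sandbox(conn_str: str, share: str,
                          remote_name: str, user_path: str) -> None:
    """Upload a sandbox-constrained local file to an Azure file share."""
    local = safe_local_path(user_path)
    client = ShareFileClient.from_connection_string(
        conn_str, share_name=share, file_path=remote_name
    )
    with local.open("rb") as source:
        client.upload_file(source)
```

With this guard in place, a call such as `upload_within_sandbox(conn, "myshare", "report.txt", "../../etc/passwd")` raises `PermissionError` instead of exfiltrating a system file, because the resolved path falls outside the sandbox root.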
Powered by SkillShield