Security Audit
hugging-face-jobs
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
hugging-face-jobs received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding (0 critical, 1 high, 0 medium, 0 low): Potential Data Exfiltration via Local File Read.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Data Exfiltration via Local File Read | LLM | SKILL.md:258 |

**Potential Data Exfiltration via Local File Read.** The skill documentation provides an example of reading a local file with `pathlib.Path().read_text()` to obtain script content for a Hugging Face job. While the example uses a fixed path (`hf-jobs/scripts/foo.py`), the pattern demonstrates a capability that, if generalized by the host LLM to accept user-controlled file paths, could lead to arbitrary file reads from the agent's local filesystem. The contents of such files could then be exfiltrated to the remote Hugging Face job environment.

**Remediation.** Instruct the host LLM to strictly avoid reading local files based on user input. If local files must be used, enforce a strict allowlist of file paths, or ensure that the `script` parameter of `hf_jobs()` accepts only inline string literals or URLs, never content derived from arbitrary local file reads. The skill documentation should also explicitly warn against using user-controlled paths for local file reads. A sketch of the flagged pattern and the recommended allowlist mitigation follows.
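To make the finding concrete, here is a minimal Python sketch of the flagged read pattern next to the allowlist gate the remediation recommends. The `hf_jobs()` call signature, the `load_script()` helper, and the `ALLOWED_SCRIPT_DIR` location are illustrative assumptions; only the `pathlib.Path(...).read_text()` pattern appears in the audited skill.

```python
from pathlib import Path

# Hypothetical allowlist root: only scripts under this directory may be read.
ALLOWED_SCRIPT_DIR = Path("hf-jobs/scripts").resolve()

def load_script(path_str: str) -> str:
    """Read a local script only if it resolves inside the allowlisted directory.

    Resolving before the check blocks traversal (e.g. '../../etc/passwd')
    that a user-controlled path could otherwise smuggle past a prefix test.
    Requires Python 3.9+ for Path.is_relative_to().
    """
    path = Path(path_str).resolve()
    if not path.is_relative_to(ALLOWED_SCRIPT_DIR):
        raise PermissionError(f"refusing to read script outside {ALLOWED_SCRIPT_DIR}")
    return path.read_text()

# Flagged pattern: an unrestricted local read whose contents then leave the
# machine inside the remote job payload (hf_jobs signature is assumed):
#   script = Path(user_supplied_path).read_text()
#   hf_jobs(script=script, ...)
#
# Hardened pattern: the same submission, but the read is gated:
#   script = load_script("hf-jobs/scripts/foo.py")
#   hf_jobs(script=script, ...)
```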
[View the full report on SkillShield](https://skillshield.io/report/244e185fc60ced9b)