Trust Assessment
hf-mcp received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Arbitrary Code/Command Execution via `hf_jobs` and Potential Data Exfiltration and Credential Harvesting via `hf_jobs`, both detailed below.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, and static_code_analysis. The llm_behavioral_safety layer scored lowest at 55/100, marking it as the primary area for improvement.
Last analyzed on February 11, 2026 (commit 3f4f55d6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code/Command Execution via `hf_jobs`.** The `hf_jobs` tool, as demonstrated in the skill examples, allows execution of arbitrary Python scripts (`operation="uv"`) and shell commands (`operation="run"`). A malicious user could craft a prompt that induces the LLM to generate a call to `hf_jobs` with attacker-controlled code or commands, leading to full compromise of the environment where the job runs, including data exfiltration, credential harvesting, or denial of service. **Remediation:** Strictly validate and sanitize any user-provided content passed to the `script` or `command` arguments of `hf_jobs`; sandbox the job execution environment to limit access to system resources and sensitive data; harden the LLM against prompt-injection attempts aimed at generating malicious code for these functions; and, where possible, restrict the commands or scripts that can be executed or require explicit user confirmation before execution (a guard of this shape is sketched below the table). | Unknown | SKILL.md:79 |
| HIGH | **Potential Data Exfiltration and Credential Harvesting via `hf_jobs`.** As a direct consequence of the arbitrary code/command execution vulnerability in `hf_jobs`, an attacker could inject code that exfiltrates sensitive data from the execution environment. The example explicitly passes `HF_TOKEN` as a secret to the job (`secrets: {"HF_TOKEN": "$HF_TOKEN"}`); an injected script or command could read environment variables, local files (e.g., `/etc/passwd`, SSH keys), or the passed `HF_TOKEN`, and transmit them to an attacker-controlled server. **Remediation:** In addition to the command-injection remediations, isolate the `hf_jobs` execution environment with minimal permissions; avoid passing credentials such as `HF_TOKEN` to jobs unless absolutely necessary and under robust security controls; implement network egress filtering to block unauthorized data transmission from job environments (see the egress-allowlist sketch below the table); and regularly audit and monitor job execution for suspicious activity. | Unknown | SKILL.md:90 |
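The critical finding's remediation calls for input validation and an explicit user-confirmation step before `hf_jobs` executes anything. The Python sketch below shows one shape such a guard could take. It is illustrative only: `guarded_execute`, `submit_job`, `confirm_with_user`, and the denylist patterns are hypothetical names invented for this sketch, not part of hf-mcp or any Hugging Face API.

```python
import re

# Hypothetical guard around an hf_jobs-style execution tool.
# A minimal sketch of the "validate + confirm before executing"
# remediation; none of these names come from hf-mcp itself.

SUSPICIOUS_PATTERNS = [
    r"os\.environ",           # reading environment variables (e.g. HF_TOKEN)
    r"open\(['\"]/etc/",      # reading system files such as /etc/passwd
    r"\.ssh/",                # touching SSH keys
    r"requests\.(post|put)",  # outbound transmission from a Python script
    r"curl\s+\S*http",        # shell-level exfiltration
]

def looks_suspicious(payload: str) -> bool:
    """Cheap denylist screen over the script/command text."""
    return any(re.search(p, payload) for p in SUSPICIOUS_PATTERNS)

def confirm_with_user(payload: str) -> bool:
    """Stand-in for an explicit human-in-the-loop confirmation step."""
    print("About to execute the following job payload:\n", payload)
    return input("Proceed? [y/N] ").strip().lower() == "y"

def submit_job(operation: str, payload: str) -> None:
    """Stand-in for the actual hf_jobs invocation."""
    raise NotImplementedError

def guarded_execute(operation: str, payload: str) -> None:
    # Only the two documented operation types are allowed through.
    if operation not in {"uv", "run"}:
        raise ValueError(f"unsupported operation: {operation!r}")
    if looks_suspicious(payload):
        raise PermissionError("payload matched a denylisted pattern")
    if not confirm_with_user(payload):
        raise PermissionError("user declined execution")
    submit_job(operation, payload)
```

A denylist like this is trivially bypassable and is defense in depth at best; sandboxing the job's execution environment, as the finding recommends, remains the stronger control.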
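The high-severity finding recommends network egress filtering so that injected code cannot transmit harvested data. In practice this belongs at the network layer (firewall rules or a proxy allowlist), but the minimal sketch below conveys the idea in Python; `check_egress` and the contents of `ALLOWED_HOSTS` are assumptions made for illustration, not hf-mcp functionality.

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist for a job environment: outbound
# requests are permitted only to known Hugging Face hosts.
ALLOWED_HOSTS = {"huggingface.co", "hf.co"}  # assumed allowlist

def check_egress(url: str) -> None:
    """Raise if a URL targets a host outside the allowlist."""
    host = urlparse(url).hostname or ""
    if host in ALLOWED_HOSTS or host.endswith(".huggingface.co"):
        return
    raise PermissionError(f"egress to {host!r} is not allowlisted")

check_egress("https://huggingface.co/api/models")  # passes silently
try:
    check_egress("https://attacker.example/collect")
except PermissionError as exc:
    print(exc)  # egress to 'attacker.example' is not allowlisted
```

An in-process check like this only constrains cooperative code; since the whole point of the finding is that the executed code may be attacker-controlled, enforcement must ultimately live outside the job process.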
Embed Code
```markdown
[](https://skillshield.io/report/07722b224ca6b389)
```
Powered by SkillShield