Trust Assessment
The lybic cloud-computer skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 0 medium, and 1 low severity. Key findings include Arbitrary Code/Command Execution in Sandbox, Potential Data Exfiltration from Sandbox, and an unpinned `lybic` dependency.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 53/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code/Command Execution in Sandbox.** The skill explicitly allows the execution of arbitrary code and shell commands within the Lybic sandbox via the `client.sandbox.execute_process` method. If the LLM is prompted by a malicious user to execute untrusted code or commands, this can lead to full compromise of the sandbox environment, including data manipulation, resource abuse, and further attacks. The `executable`, `args`, and `stdinBase64` parameters are direct vectors for injection. **Recommendation:** Implement strict input validation and sanitization for any user-provided input that is passed to the `executable`, `args`, or `stdinBase64` parameters of `execute_process`. Consider a whitelist of allowed executables and arguments, or a secure sandboxed execution environment that prevents arbitrary system calls. For LLM agents, this means carefully designing prompts and tool definitions to limit the LLM's ability to construct malicious commands from user input. | LLM | SKILL.md:34 |
| HIGH | **Potential Data Exfiltration from Sandbox.** The skill provides capabilities to read files within the sandbox (via `execute_process` with commands like `cat`) and to create public HTTP port mappings (`client.sandbox.create_http_port_mapping`). A malicious actor could combine these features to exfiltrate sensitive data from the sandbox environment by reading files and then serving their content through a publicly accessible HTTP endpoint. This also includes the potential for credential harvesting if API keys or other secrets are accessible within the sandbox environment. **Recommendation:** Implement strict access controls and monitoring within the sandbox environment to prevent unauthorized file access. For HTTP port mappings, ensure that only necessary ports are exposed and that the target endpoints are controlled. Limit the LLM's ability to construct arbitrary file paths or network configurations from untrusted user input. | LLM | SKILL.md:37 |
| LOW | **Unpinned `lybic` dependency.** The skill's documentation and manifest recommend installing the `lybic` Python package without specifying a version (`pip install lybic`). This can lead to supply-chain risk if a future version of the `lybic` package introduces vulnerabilities or breaking changes, or if a malicious package is published under the same name. **Recommendation:** Pin the `lybic` dependency to a specific, known-good version (e.g., `pip install lybic==X.Y.Z`) to ensure deterministic builds and reduce the risk of unexpected changes or malicious updates. Regularly review and update pinned dependencies. | LLM | SKILL.md:47 |
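The whitelist-based mitigation recommended for the critical finding can be sketched as a validation gate that runs before any call to `client.sandbox.execute_process`. This is an illustrative sketch only: the `ALLOWED_EXECUTABLES` set, the argument pattern, and the `validate_process_request` helper are hypothetical names, not part of the Lybic SDK.

```python
import re

# Hypothetical allowlist: the only executables the agent may launch.
ALLOWED_EXECUTABLES = {"ls", "cat", "python3"}

# Reject arguments containing shell metacharacters (; | & $ etc.)
# that could smuggle extra commands into the sandbox.
_SAFE_ARG = re.compile(r"^[\w./=-]+$")

def validate_process_request(executable: str, args: list[str]) -> None:
    """Raise ValueError unless the request satisfies the allowlist policy."""
    if executable not in ALLOWED_EXECUTABLES:
        raise ValueError(f"executable not allowed: {executable!r}")
    for arg in args:
        if not _SAFE_ARG.match(arg):
            raise ValueError(f"unsafe argument: {arg!r}")

# Usage sketch: validate first, then make the call named in the finding.
#   validate_process_request("cat", ["/tmp/report.txt"])
#   client.sandbox.execute_process(executable="cat", args=["/tmp/report.txt"])
```

The same gate also narrows the exfiltration path from the HIGH finding, since an agent that can only run allowlisted binaries with sanitized arguments has far less freedom to read arbitrary files and wire them to a public port mapping.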
Embed Code
[SkillShield Report](https://skillshield.io/report/a1b9cce3eae487d1)
Powered by SkillShield