Security Audit
pymc-labs/agent-skills:skills/marimo-notebooks
github.com/pymc-labs/agent-skills

Trust Assessment
pymc-labs/agent-skills:skills/marimo-notebooks received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and a dangerous call to subprocess.run().
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit 64e299da). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary command execution: Python shell execution (`os.system`, `subprocess`). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/marimo-notebooks/scripts/convert_notebook.py:25` |
| HIGH | Unsafe deserialization / dynamic eval: decryption followed by code execution. Remove obfuscated code-execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/marimo-notebooks/assets/data_analysis_template.py:20` |
| HIGH | Dangerous call: `subprocess.run()`. A call to `subprocess.run()` was detected in function `convert_jupyter_to_marimo`; this can execute arbitrary code. Avoid dangerous functions like `exec`/`eval`/`os.system` and use safer alternatives. | Static | `skills/marimo-notebooks/scripts/convert_notebook.py:25` |
| MEDIUM | Potential data exfiltration via file upload component. `data_analysis_template.py` uses `mo.ui.file` to let users (or the agent acting as a user) upload files. Uploaded contents are read into memory (`file_input.contents()`), processed, and partially displayed in the notebook output (`df.head(100)`). An attacker could craft a prompt instructing the agent to use this template, "upload" a sensitive local file (e.g., `/etc/passwd` or a credential file accessible to the agent), and capture the displayed contents from the notebook's output, leading to data exfiltration. If the agent is intended to handle user-uploaded data, strictly validate and sanitize file contents before processing or display. In sensitive environments, restrict the agent's ability to "upload" arbitrary local files, or sandbox the agent's execution environment to block access to sensitive system files. When displaying data, consider redacting sensitive information or limiting how much is shown. | LLM | `assets/data_analysis_template.py:30` |
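For the command-execution findings (the `os.system`/`subprocess` patterns flagged in `convert_notebook.py`), the recommended remediation can be sketched as follows. This is a minimal illustration of the pattern the report recommends, not code from the audited skill; `run_static_command` is a hypothetical helper name.

```python
import shutil
import subprocess

def run_static_command(executable_name: str, *args: str) -> str:
    """Run a fixed command with a static argument list and no shell."""
    # Resolve an absolute path to the executable instead of trusting
    # whatever happens to shadow it on $PATH.
    exe = shutil.which(executable_name)
    if exe is None:
        raise FileNotFoundError(f"{executable_name!r} not found on PATH")
    # Passing a list (and never shell=True) means shell metacharacters
    # in the arguments cannot inject additional commands.
    result = subprocess.run(
        [exe, *args], capture_output=True, text=True, check=True
    )
    return result.stdout
```

A notebook-conversion call would then pass the input path as a plain list argument rather than interpolating it into a shell string, so a malicious filename cannot become a second command.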
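The HIGH unsafe-deserialization finding flags a decode-then-execute pattern. As a hedged contrast (a hypothetical example, not the audited template's code), a base64 payload that really is configuration data can be parsed with a data-only parser such as `json.loads`, which can never run code, instead of `eval`:

```python
import base64
import json

def load_embedded_config(encoded: str) -> dict:
    """Decode a base64 payload as data, never as code."""
    raw = base64.b64decode(encoded)
    # json.loads can only produce data structures; eval(raw) on the same
    # bytes could execute arbitrary Python, which is the flagged risk.
    return json.loads(raw)
```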
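The MEDIUM finding's mitigation, limiting and redacting what the notebook displays from an uploaded file, could look roughly like the sketch below. The helper name, line cap, and credential patterns are illustrative assumptions, not part of the skill.

```python
import re

# Patterns that commonly indicate credentials; purely illustrative.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def redact_preview(text: str, max_lines: int = 10) -> str:
    """Return a bounded, redacted preview of untrusted file contents."""
    lines = text.splitlines()[:max_lines]  # cap how much is displayed
    redacted = []
    for line in lines:
        for pat in SECRET_PATTERNS:
            line = pat.sub("[REDACTED]", line)
        redacted.append(line)
    return "\n".join(redacted)
```

A template could display `redact_preview(file_input.contents().decode(errors="replace"))` instead of the raw `df.head(100)`, so a prompt-injected "upload" of a credential file yields a truncated, redacted preview rather than the full contents.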
[Full report](https://skillshield.io/report/80c1f4f3046be89a)