Trust Assessment
ollama-local received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 0 high, 3 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, suspicious `urllib.request` imports, and a dangerous tool definition with mocked execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
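As an illustration of the kind of check the Static Code Analysis layer performs, the sketch below flags suspicious imports using Python's `ast` module. This is not SkillShield's actual implementation, and the module list is illustrative:

```python
import ast

# Illustrative watchlist: modules that provide network or low-level
# system access (not SkillShield's real list).
SUSPICIOUS_MODULES = {"urllib.request", "socket", "subprocess", "ctypes"}

def flag_suspicious_imports(source: str, filename: str = "<skill>"):
    """Return (filename, line, module) tuples for flagged imports."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name in SUSPICIOUS_MODULES:
                findings.append((filename, node.lineno, name))
    return findings

print(flag_suspicious_imports("import urllib.request\n", "ollama.py"))
# → [('ollama.py', 1, 'urllib.request')]
```

Scanning the AST rather than grepping for `import` strings avoids false positives from comments and docstrings, which is why static analyzers typically work this way.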
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/timverhoogt/ollama-local/SKILL.md:13 |
| MEDIUM | **Suspicious import: urllib.request.** Import of `urllib.request` detected. This module provides network or low-level system access. Verify this import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/timverhoogt/ollama-local/scripts/ollama.py:11 |
| MEDIUM | **Suspicious import: urllib.request.** Import of `urllib.request` detected. This module provides network or low-level system access. Verify this import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/timverhoogt/ollama-local/scripts/ollama_tools.py:18 |
| MEDIUM | **Dangerous Tool Definition (Mocked Execution).** The `scripts/ollama_tools.py` file defines an `EXAMPLE_TOOLS` list that includes a tool named `run_code`, described as able to "Execute Python code and return the result". The current implementation of `execute_tool_call` for `run_code` only simulates execution, returning the code as part of the output (`return {"output": "Code execution simulated", "code": args.get("code", "")}`), but the explicit definition of a tool for executing arbitrary code poses a significant security risk: a future maintainer could easily replace the mock with actual execution (e.g., `eval()`, `exec()`, `subprocess.run()`), creating a critical command injection vulnerability. Remove the `run_code` tool definition from `EXAMPLE_TOOLS` if code execution is not intended. If it is a desired future feature, rename the tool to clearly indicate its mocked status (e.g., `mock_run_code`) and add prominent security warnings. Any real implementation must run in a highly secure, isolated, sandboxed environment with strict input validation and resource limits. | LLM | scripts/ollama_tools.py:60 |
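The remediation suggested in the last finding can be sketched as follows. The `EXAMPLE_TOOLS` and `execute_tool_call` names come from the report, but the exact shape of the skill's tool definitions is an assumption:

```python
# Hypothetical remediated tool list: the code-execution tool is renamed
# per the finding so its mocked status is explicit, and the dispatcher
# never routes the submitted code to eval/exec/subprocess.
EXAMPLE_TOOLS = [
    {
        "name": "mock_run_code",  # renamed from `run_code`
        "description": (
            "SECURITY WARNING: does NOT execute code. "
            "Returns a simulated result only."
        ),
        "parameters": {"code": {"type": "string"}},
    },
]

def execute_tool_call(name: str, args: dict) -> dict:
    if name == "mock_run_code":
        # Echo the code back; never execute it.
        return {"output": "Code execution simulated",
                "code": args.get("code", "")}
    raise ValueError(f"Unknown tool: {name}")

print(execute_tool_call("mock_run_code", {"code": "print(1)"}))
# → {'output': 'Code execution simulated', 'code': 'print(1)'}
```

Raising on unknown tool names (including the old `run_code`) ensures that stale callers fail loudly instead of silently reaching a capability that no longer exists.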