Trust Assessment
The `strands` skill received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 2 critical, 4 high, 2 medium, and 0 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and a dangerous `__import__()` call.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/trippingkelsea/strands/scripts/create-agent.py:87` |
| CRITICAL | **Default agent scaffolding includes arbitrary shell execution tool.** The `scripts/create-agent.py` script, when used to scaffold a new agent, includes a `run_command` tool by default. This tool executes arbitrary shell commands via `subprocess.run(command, shell=True, ...)`. An agent equipped with this tool and exposed to untrusted input could be prompted to execute malicious commands on the host system, leading to full system compromise. Remediation: 1. Do not include a `run_command` (or `shell`) tool by default in scaffolded agents; make it opt-in. 2. Sandbox shell execution strictly, e.g. in containers or restricted environments. 3. If shell access is necessary, rigorously validate and sanitize all command inputs from the LLM to prevent injection; avoid `shell=True` where possible and use `shlex.split` for arguments. 4. Run the agent process with the absolute minimum necessary permissions. | LLM | `scripts/create-agent.py:70` |
| HIGH | **Unsafe deserialization / dynamic eval.** Python builtins/import manipulation. Remove obfuscated code-execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/trippingkelsea/strands/tests/test_imports.py:102` |
| HIGH | **Dangerous call: `__import__()`.** A call to `__import__()` was detected in function `test_experimental_imports`; this can execute arbitrary code. Avoid dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/trippingkelsea/strands/tests/test_imports.py:102` |
| HIGH | **Default agent scaffolding includes arbitrary file read tool.** The `scripts/create-agent.py` script includes a `read_file` tool by default, which allows the agent to read the contents of any file on the filesystem. An agent exposed to untrusted input could be prompted to read sensitive files (e.g. `/etc/passwd`, AWS credentials, private keys) and exfiltrate their contents. Remediation: 1. Do not include a `read_file` (or `file_read`) tool by default in scaffolded agents; make it opt-in. 2. If file reading is necessary, implement strict path validation to restrict access to specific directories or file types, and prevent directory traversal (e.g. `../`). 3. Run the agent process with the absolute minimum necessary filesystem permissions. | LLM | `scripts/create-agent.py:50` |
| HIGH | **Scaffolded agents default to broad filesystem and shell access.** The `scripts/create-agent.py` utility, part of this skill package, generates new agents configured by default with `read_file`, `write_file`, and `run_command` tools, granting broad capabilities to read, write, and execute arbitrary commands on the host. This default creates a high-risk scenario: any agent created via the script operates with excessive permissions and is highly vulnerable to prompt-injection attacks that could lead to data exfiltration, system modification, or remote code execution. The generated system prompt also explicitly informs the agent of these capabilities. Remediation: 1. Scaffold agents with minimal tools by default; require explicit opt-in for powerful tools like file I/O and shell execution. 2. Provide more granular tools (e.g. `read_config_file`, `write_log_entry`) instead of generic `read_file`/`write_file`. 3. Provide prominent warnings and best practices in the documentation for users who enable these high-privilege tools, emphasizing sandboxing and robust input validation. | LLM | `scripts/create-agent.py:100` |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code-execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/trippingkelsea/strands/tests/test_imports.py:159` |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code-execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/trippingkelsea/strands/tests/test_imports.py:168` |
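The shell-execution findings above recommend avoiding `shell=True` and splitting arguments with `shlex.split`. A minimal sketch of that remediation, assuming a hypothetical hardened tool (the `run_command_safe` name and the allowlist are illustrative, not part of the scaffolded code):

```python
import shlex
import subprocess

# Illustrative sketch: a hardened replacement for a scaffolded
# `run_command` tool. Passing an argument list (no shell=True) means the
# shell never interprets metacharacters, and an opt-in allowlist limits
# which binaries the agent may invoke at all.
ALLOWED_COMMANDS = {"echo", "ls"}  # hypothetical allowlist

def run_command_safe(command: str) -> str:
    args = shlex.split(command)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command!r}")
    # Argument-list form: no shell, no injection via `;`, `|`, `$()` etc.
    result = subprocess.run(args, capture_output=True, text=True, timeout=10)
    return result.stdout
```

With this shape, an injected payload such as `echo hi; rm -rf /` is passed to `echo` as literal arguments rather than interpreted by a shell, and commands outside the allowlist are rejected outright.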
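The file-read findings above recommend strict path validation to block directory traversal. A minimal sketch, assuming a hypothetical workspace root (`read_file_safe` and `ALLOWED_ROOT` are illustrative names, not part of the skill):

```python
import os

# Illustrative sketch: a path-validated replacement for a scaffolded
# `read_file` tool. Every request is resolved relative to a fixed
# workspace root; realpath collapses `../` segments and symlinks
# before the containment check.
ALLOWED_ROOT = os.path.realpath("agent-workspace")  # hypothetical root

def read_file_safe(relative_path: str) -> str:
    resolved = os.path.realpath(os.path.join(ALLOWED_ROOT, relative_path))
    # Reject anything that resolves outside the workspace root.
    if resolved != ALLOWED_ROOT and not resolved.startswith(ALLOWED_ROOT + os.sep):
        raise PermissionError(f"path escapes workspace: {relative_path!r}")
    with open(resolved) as f:
        return f.read()
```

Because the check runs on the resolved path, traversal attempts like `../../etc/passwd` and absolute paths like `/etc/passwd` both fail the containment test, regardless of how they are spelled.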
[Full report](https://skillshield.io/report/0b8088c30ade7ff1)