Trust Assessment
aws-agentcore-langgraph received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings include command injection via the unsanitized `AGENT_ID` and `MEMORY_ID` arguments in shell scripts, and a prompt injection vulnerability in the agent entrypoint.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via unsanitized `AGENT_ID` in shell script.** The `agent-details.sh` script interpolates the user-provided `AGENT_ID` argument directly into an `aws` CLI command without sanitization or robust quoting. An attacker can inject arbitrary shell commands via a malicious `AGENT_ID` (e.g., `my-agent; rm -rf /`), gaining arbitrary code execution on the host and potentially exfiltrating data, compromising the system, or causing denial of service. *Remediation:* validate the input against a strict allow-list of characters (e.g., the regex `^[a-zA-Z0-9_-]+$`); if special characters must be supported, apply a robust quoting mechanism such as `printf %q` to prevent shell interpretation. | LLM | `scripts/agent-details.sh:14` |
| CRITICAL | **Command Injection via unsanitized `MEMORY_ID` in shell script.** The `memory-details.sh` script interpolates the user-provided `MEMORY_ID` argument directly into an `aws` CLI command without sanitization or robust quoting. As with `agent-details.sh`, an attacker can inject arbitrary shell commands via a malicious `MEMORY_ID` (e.g., `my-memory; evil_command`), gaining arbitrary code execution on the host. *Remediation:* validate the input against a strict allow-list of characters (e.g., the regex `^[a-zA-Z0-9_-]+$`); if special characters must be supported, apply a robust quoting mechanism such as `printf %q` to prevent shell interpretation. | LLM | `scripts/memory-details.sh:14` |
| HIGH | **Prompt Injection vulnerability in agent entrypoint.** The `invoke` function in the Quick Start example in `SKILL.md` passes user-controlled input (`payload.get("prompt", "")`) directly to LangGraph's `graph.invoke` as a user message. A crafted prompt could manipulate the LLM's behavior, override its system instructions, or misuse connected tools (such as `GatewayToolClient`) to exfiltrate data, perform unauthorized actions, or bypass security controls. *Remediation:* layer defenses: (1) sanitize and validate user prompts; (2) use a separate jailbreak-detection LLM; (3) apply re-prompting techniques such as sandwiching the user prompt between system instructions; (4) strictly control access to, and validate arguments for, any tools the LLM can invoke; (5) limit the LLM's ability to modify its own system prompts; (6) expose sensitive data only on explicit, validated requests. | LLM | `SKILL.md:29` |
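The allow-list fix recommended for both CRITICAL findings can be sketched as below. This is a minimal illustration, not the skill's actual code: the `validate_id` helper name and the commented `aws` invocation are assumptions.

```shell
#!/usr/bin/env bash
# Allow-list check matching the report's recommended regex: only
# alphanumerics, hyphens, and underscores pass; everything else fails.
validate_id() {
  [[ "$1" =~ ^[a-zA-Z0-9_-]+$ ]]
}

# In agent-details.sh (and likewise memory-details.sh), validate before
# use and pass the value as a single quoted argument rather than
# interpolating it into a command string:
#
#   AGENT_ID="$1"
#   validate_id "$AGENT_ID" || { echo "invalid AGENT_ID" >&2; exit 1; }
#   aws <service> <operation> --agent-id "$AGENT_ID"
#
# If special characters must be supported, quote with printf %q instead.

validate_id "my-agent-01" && echo "accepted"
validate_id 'my-agent; rm -rf /' || echo "rejected"
```

Strict validation is preferable to quoting alone here, since identifiers like these should never legitimately contain shell metacharacters.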
[View the full report on SkillShield](https://skillshield.io/report/399e442702b59e90)