Trust Assessment
show-ip received a trust score of 76/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key finding is that untrusted external service output can lead to LLM prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Untrusted external service output can lead to LLM prompt injection | LLM | scripts/get-ip.sh:9 |
| HIGH | Untrusted external service output can lead to LLM prompt injection | LLM | scripts/get-ip.sh:15 |

Both findings report the same issue at two call sites. The skill fetches data from an external, untrusted service (https://ifconfig.me) and echoes its output directly. If that service is compromised or malicious, it could return text designed to manipulate the host LLM (e.g., "ignore previous instructions" or "delete all data"), and the LLM would process this untrusted output as part of its context, potentially leading to prompt injection. Recommended remediations:

1. Validate and sanitize the output of the external service before presenting it to the LLM; for example, check that it strictly conforms to an IP address format using a regex (see the sketch after this list).
2. Use a more trusted or controlled service for the IP lookup, or implement a local mechanism if possible.
3. Apply strict output parsing and filtering so that arbitrary text from external sources cannot reach the LLM's context.
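Remediation 1 can live entirely inside the shell script the findings point at. The sketch below is illustrative, not the skill's actual code: it assumes scripts/get-ip.sh only needs to print the machine's public IP, and it allow-lists the service's response against IP-shaped patterns before echoing anything.

```bash
#!/usr/bin/env bash
# Hypothetical hardened lookup: fetch the public IP, then refuse to pass
# through anything that does not look like a plain IPv4 or IPv6 address,
# so arbitrary text from the service can never reach the LLM's context.
set -euo pipefail

raw="$(curl -fsS --max-time 5 https://ifconfig.me)"

# Strict allow-list: dotted-quad IPv4 or a conservative IPv6 shape.
ipv4='^([0-9]{1,3}\.){3}[0-9]{1,3}$'
ipv6='^[0-9A-Fa-f:]{2,45}$'

if [[ "$raw" =~ $ipv4 || "$raw" =~ $ipv6 ]]; then
  echo "$raw"
else
  echo "error: unexpected response from IP lookup service" >&2
  exit 1
fi
```

Rejecting anything that is not IP-shaped is safer than trying to sanitize free-form text: for a value as constrained as an IP address, an allow-list leaves no room for injected instructions.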
[View the full report on SkillShield](https://skillshield.io/report/89eb821aeafafde8)