Trust Assessment
dns-lookup received a trust score of 95/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. The key finding is a potential command injection via the `dig` utility.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Potential Command Injection via `dig` utility**: The skill exposes the `dig` command-line utility, which is executed as a shell command. If the AI agent constructs these commands using unsanitized user-provided input (e.g., a hostname), an attacker could inject arbitrary shell commands. For example, input like `example.com; malicious_command` could lead to the execution of `malicious_command` if the LLM uses unsafe execution methods (e.g., `shell=True` in Python's `subprocess`). Ensure that any user-provided input passed to the `dig` command is properly sanitized and escaped so shell metacharacters are not interpreted as commands. When executing external commands, prefer methods that pass arguments as a list (e.g., `subprocess.run(['dig', hostname, 'A', '+short'])`) rather than a single shell string with `shell=True`. | LLM | SKILL.md:11 |
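The remediation described in the finding can be sketched as follows. This is an illustrative wrapper, not code from the skill itself; the `dig_lookup` name and its hostname validation rules are assumptions:

```python
import subprocess

def dig_lookup(hostname: str, record_type: str = "A") -> list[str]:
    """Resolve DNS records via the `dig` CLI without invoking a shell.

    Arguments are passed as a list, so shell metacharacters in
    `hostname` (e.g. `;`, `|`, `$()`) are never interpreted by a shell.
    """
    # Defense in depth: reject input that is not a plausible hostname
    # before shelling out at all. (Illustrative allow-list, not a full
    # RFC 1123 validator.)
    allowed = set("abcdefghijklmnopqrstuvwxyz0123456789.-")
    if not hostname or set(hostname.lower()) - allowed:
        raise ValueError(f"invalid hostname: {hostname!r}")

    result = subprocess.run(
        ["dig", hostname, record_type, "+short"],  # list form, no shell=True
        capture_output=True,
        text=True,
        check=True,
        timeout=10,
    )
    return result.stdout.split()
```

With this shape, the injection payload from the finding fails validation before `dig` is ever invoked, and even a hostname that slipped past the check would reach `dig` as a single literal argument rather than a shell command line.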
Scan History
Embed Code
[](https://skillshield.io/report/911fba37e852f459)
Powered by SkillShield