Trust Assessment
instagram-marketing received a trust score of 66/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include "User-controlled URL leads to Server-Side Request Forgery (SSRF)", "Extracted web content can lead to LLM Prompt Injection", and "Suspicious import: requests".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **User-controlled URL leads to Server-Side Request Forgery (SSRF).** The `scripts/extract_product.py` script fetches content from a URL provided directly by the user (`sys.argv[1]`). This allows an attacker to make the server-side agent send requests to arbitrary internal or external network resources, which could expose sensitive information from internal services (e.g., cloud metadata endpoints like `http://169.254.169.254/latest/meta-data/`), interact with internal APIs, or scan internal networks. While a timeout is present, it does not prevent the initial request to a malicious or internal endpoint. Remediation: implement strict URL validation. Whitelist allowed domains (e.g., only specific e-commerce sites), block private IP ranges, and disallow non-HTTP/HTTPS schemes (e.g., `file://`). Consider using a dedicated proxy or service that sanitizes and validates URLs before fetching. | LLM | scripts/extract_product.py:39 |
| HIGH | **Extracted web content can lead to LLM Prompt Injection.** `scripts/extract_product.py` fetches content from arbitrary user-provided URLs and extracts product details such as name, description, and features. If an attacker controls the content of the target URL, they can embed malicious instructions (e.g., "IGNORE ALL PREVIOUS INSTRUCTIONS AND...") within these extracted text fields. When this untrusted data is subsequently fed to the host LLM for generating marketing content, it can manipulate the LLM's behavior, potentially causing it to generate harmful content, reveal sensitive information, or deviate from its intended purpose. The `_get_text` method truncates text to 500 characters, which is still sufficient for crafting effective injection payloads. Remediation: implement robust sanitization and filtering of all extracted text before it is passed to the LLM, including removing or encoding keywords, instructions, or formatting that could be interpreted as LLM commands. Consider a dedicated LLM-specific input sanitization layer that is aware of common prompt injection patterns. | LLM | scripts/extract_product.py:140 |
| MEDIUM | **Suspicious import: requests.** Import of `requests` detected. This module provides network access. Verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/insight68/instagram-marketing/scripts/extract_product.py:22 |
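The SSRF remediation above (scheme whitelisting plus private-IP blocking) can be sketched as a small pre-fetch check. This is a minimal illustration, not SkillShield's or the skill's actual code; the `is_safe_url` name is hypothetical, and a production guard would also need to pin the resolved IP for the actual request to avoid DNS rebinding.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Reject URLs with non-HTTP(S) schemes or hosts resolving to private/reserved IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False  # blocks file://, gopher://, scheme-less input, etc.
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Blocks loopback (127.0.0.1), link-local (169.254.169.254 metadata
        # endpoint), RFC 1918 private ranges, and reserved addresses.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

A stricter variant would additionally check the hostname against an explicit allowlist of e-commerce domains, as the finding recommends.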
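For the prompt-injection finding, a first line of defense is to treat extracted page text as untrusted data and filter it before it reaches the LLM. The sketch below is a heuristic illustration only (the pattern list and the `sanitize_extracted_text` name are this report's assumptions, not the skill's code); pattern matching cannot catch all injections, which is why the finding recommends a dedicated, injection-aware sanitization layer.

```python
import re

# Illustrative patterns commonly seen in prompt-injection payloads.
# This list is not exhaustive; defense in depth is still required.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior|above) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def sanitize_extracted_text(text: str, max_len: int = 500) -> str:
    """Strip control characters, redact known injection phrases, then truncate."""
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)  # drop control chars
    for pattern in INJECTION_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text[:max_len]
```

Redacting rather than rejecting keeps benign product text usable, while the truncation mirrors the existing 500-character limit noted in the finding.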
Powered by SkillShield