Trust Assessment
landing-gen received a trust score of 80/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings: `package.json` data sent to an external AI model (high), and execution of an external `npx` package (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **`package.json` data sent to external AI model.** The skill describes a tool (`ai-landing`) that reads the local `package.json` file and explicitly states it "sends that info to an AI model". `package.json` files can contain sensitive project metadata: internal package names, private repository URLs, author emails, and dependency information. Transmitting this data to an unspecified external AI model without explicit user consent, clear data-handling policies, or anonymization presents a significant data-exfiltration risk; the user is not informed about the destination, security, or retention of this data. The skill should state exactly what data is extracted and which AI service receives it, link to that service's privacy policy, and ideally prompt the user for explicit consent before transmitting any data. Sensitive fields should be anonymized or filtered out before transmission. | LLM | SKILL.md:48 |
| MEDIUM | **Execution of external `npx` package.** The skill instructs the user to execute an external `npx` package (`ai-landing`). While `npx` is a legitimate tool, executing unvetted third-party packages from npm introduces supply-chain risk: the `ai-landing` package could contain malicious code, perform unwanted actions, or be compromised in a future release, and the skill provides no mechanism to verify its integrity. If the skill is meant to be executed by an LLM, the environment should be sandboxed to prevent arbitrary command execution; for user-facing instructions, users should be advised to audit the `ai-landing` source code or use a trusted alternative. Where possible, provide a local, sandboxed implementation of the core functionality rather than relying on `npx` for critical operations. | LLM | SKILL.md:13 |
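The remediation suggested for the high-severity finding (filter or anonymize sensitive fields before transmission) can be sketched in a few lines of Node.js. The field list and function name below are illustrative assumptions, not part of the `ai-landing` package:

```javascript
// Hypothetical sketch: strip potentially sensitive package.json fields
// before any of the file's contents are sent to an external AI model.
// This field list is an assumption, not taken from the skill itself.
const SENSITIVE_FIELDS = [
  "author",
  "contributors",
  "repository",
  "bugs",
  "homepage",
  "publishConfig",
];

function sanitizePackageJson(pkg) {
  const safe = { ...pkg };
  for (const field of SENSITIVE_FIELDS) {
    delete safe[field];
  }
  return safe;
}

// Example: keeps name, version, and dependencies; drops author and repository.
const pkg = {
  name: "internal-app",
  version: "1.0.0",
  author: "dev@example.com",
  repository: "git@internal.example.com:team/app.git",
  dependencies: { react: "^18.0.0" },
};
console.log(JSON.stringify(sanitizePackageJson(pkg)));
```

For the medium-severity finding, a comparable mitigation is to pin an exact, audited version (`npx ai-landing@<version>`) rather than running whatever release the registry currently serves.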
Full report: [https://skillshield.io/report/1d24d634b7db7d22](https://skillshield.io/report/1d24d634b7db7d22)