Trust Assessment
nft-skill received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection in LLM Art Concept Generation, Prompt Injection in LLM Tweet Text Generation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection in LLM Art Concept Generation The `generateArtConcept` function in `src/skills/llm.ts` directly interpolates user-provided input (`theme`) into the prompt sent to the Large Language Model (LLM). An attacker can craft a malicious `theme` string to inject new instructions, manipulate the LLM's behavior, or attempt to extract sensitive information accessible to the LLM (e.g., environment variables, internal logic). This can lead to the generation of harmful or inappropriate art concepts, or even data exfiltration if the LLM is not properly sandboxed. Implement robust input sanitization and validation for user-provided `theme` before interpolating it into the LLM prompt. Consider using a templating system or a dedicated prompt engineering library that separates user input from system instructions. Alternatively, pass user input as a separate variable to the LLM API if supported, or use a 'safe' LLM call that explicitly disallows instruction following from user input. | LLM | src/skills/llm.ts:40 | |
| CRITICAL | Prompt Injection in LLM Tweet Text Generation The `generateTweetText` function in `src/skills/llm.ts` directly interpolates user-provided input (`context` and `metadata`) into the prompt sent to the Large Language Model (LLM). An attacker can craft malicious `context` or `metadata` to inject new instructions, manipulate the LLM's behavior, or attempt to extract sensitive information. This could lead to the LLM generating tweets that are spam, phishing attempts, or contain misinformation, which are then posted to social media. Implement robust input sanitization and validation for user-provided `context` and `metadata` before interpolating them into the LLM prompt. Ensure that the LLM is instructed to only generate tweet content and not to follow external instructions embedded in the input. Consider using a templating system or a dedicated prompt engineering library that separates user input from system instructions. | LLM | src/skills/llm.ts:46 | |
| HIGH | Data Exfiltration / Abuse via Arbitrary Social Media Posts The `tweet` command in `src/cli.ts` allows an agent to directly pass arbitrary user-controlled content (`options.content`) to the `postToX` function in `src/skills/social.ts`, which then posts it to X (Twitter). While the skill intends to post structured announcements, this direct pass-through of untrusted input enables an attacker to post any message. This could be exploited to spread misinformation, engage in phishing attacks (e.g., by posting malicious links), or attempt to exfiltrate data by tricking users into interacting with malicious content. Restrict the `tweet` command to only accept structured inputs that are then used to generate predefined tweet templates, rather than arbitrary text. If arbitrary text is necessary, implement strict content moderation, URL sanitization, and potentially integrate with a service that scans for malicious links or content before posting. Ensure the agent's LLM is not susceptible to prompt injection that could lead it to generate malicious tweet content. | LLM | src/cli.ts:56 | |
| MEDIUM | Unpinned npm dependency version Dependency 'axios' is not pinned to an exact version ('^1.6.2'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/numba1ne/nft-skill/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/df02426201253c51)
Powered by SkillShield