Trust Assessment
meeting-notes received a trust score of 69/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 1 medium, and 1 low severity. Key findings include Missing required field: name, Node lockfile missing, Potential Prompt Injection via 'content' parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via 'content' parameter The `generateNotes` function accepts a `content` string, which is highly likely to be user-provided input intended for processing by an LLM to generate meeting notes. If this `content` is directly incorporated into an LLM prompt without robust sanitization or strict templating, a malicious user could inject instructions to manipulate the LLM's behavior. This could lead to unintended actions, information disclosure, or denial of service. Although the function body is empty, the skill's name ('Meeting Notes Generator') and the parameter's name strongly suggest LLM interaction. Implement robust prompt templating to strictly separate user input from system instructions. Sanitize or escape user-provided `content` before passing it to the LLM. Consider using LLM-specific input sanitization libraries or techniques to mitigate prompt injection risks. | LLM | src/index.ts:2 | |
| HIGH | Potential API Key Exposure via Prompt Injection The `generateNotes` function accepts an optional `apiKey` parameter, which is likely intended for authenticating with an external LLM service. If the `content` parameter is vulnerable to prompt injection (as described in SS-LLM-001), a malicious prompt could potentially coerce the LLM into revealing the `apiKey` or using it for unauthorized actions. This constitutes a significant credential harvesting and data exfiltration risk. Passing sensitive credentials directly as function arguments is generally discouraged. Avoid passing sensitive credentials like API keys directly as function arguments. Instead, retrieve them from secure environment variables or a dedicated secrets management service within the function. Ensure that the LLM's access to internal variables or context is strictly limited and that no sensitive information is exposed to the LLM's output or internal state. | LLM | src/index.ts:2 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/user520512/meeting-notes/SKILL.md:1 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/user520512/meeting-notes/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/9fb46d81184439bc)
Powered by SkillShield