Trust Assessment
slack-context-memory received a trust score of 48/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 0 critical, 3 high, 2 medium, and 2 low severity. Key findings include Unsafe deserialization / dynamic eval, Unpinned npm dependency version, Node lockfile missing.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 53/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings7
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via User Message Content The skill constructs LLM prompts by directly embedding user-generated Slack message text (`msg.text`) without sufficient sanitization or separation. A malicious user could craft messages containing instructions designed to manipulate the summarization LLM, potentially leading to the generation of misleading, harmful, or unexpected summaries, or attempts to extract sensitive information from the LLM's context. While the prompt attempts to constrain the LLM to JSON output, a sophisticated injection could still influence the content within that JSON. Implement robust sanitization or filtering of user-generated `msg.text` before embedding it into the LLM prompt. Consider using explicit XML-like tags or other delimiters to clearly separate user input from system instructions within the prompt, along with strong instructions to the LLM to strictly adhere to these delimiters and ignore any conflicting instructions within the user content. Alternatively, use a separate LLM call to 'clean' or rephrase user input before passing it to the summarization LLM. | LLM | summarize-conversation.js:39 | |
| HIGH | Prompt Injection via User Message Content The skill constructs LLM prompts by directly embedding user-generated Slack message text (`msg.text`) without sufficient sanitization or separation. A malicious user could craft messages containing instructions designed to manipulate the summarization LLM, potentially leading to the generation of misleading, harmful, or unexpected summaries, or attempts to extract sensitive information from the LLM's context. Although `response_format: { type: 'json_object' }` provides a strong constraint, the content within the JSON could still be influenced. Implement robust sanitization or filtering of user-generated `msg.text` before embedding it into the LLM prompt. Consider using explicit XML-like tags or other delimiters to clearly separate user input from system instructions within the prompt, along with strong instructions to the LLM to strictly adhere to these delimiters and ignore any conflicting instructions within the user content. Alternatively, use a separate LLM call to 'clean' or rephrase user input before passing it to the summarization LLM. | LLM | summarize-openai.js:20 | |
| HIGH | Prompt Injection via User Message Content The skill constructs LLM prompts by directly embedding user-generated Slack message text (`msg.text`) without sufficient sanitization or separation. A malicious user could craft messages containing instructions designed to manipulate the summarization LLM, potentially leading to the generation of misleading, harmful, or unexpected summaries, or attempts to extract sensitive information from the LLM's context. While the prompt attempts to constrain the LLM to JSON output, a sophisticated injection could still influence the content within that JSON. Implement robust sanitization or filtering of user-generated `msg.text` before embedding it into the LLM prompt. Consider using explicit XML-like tags or other delimiters to clearly separate user input from system instructions within the prompt, along with strong instructions to the LLM to strictly adhere to these delimiters and ignore any conflicting instructions within the user content. Alternatively, use a separate LLM call to 'clean' or rephrase user input before passing it to the summarization LLM. | LLM | test-and-post.js:109 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/davidrudduck/slack-context-memory/search-conversations.js:304 | |
| MEDIUM | Unpinned npm dependency version Dependency 'pg' is not pinned to an exact version ('^8.13.1'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/davidrudduck/slack-context-memory/package.json | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/davidrudduck/slack-context-memory/package.json | |
| LOW | Unpinned Major Dependencies The `package.json` specifies dependencies using caret (`^`) ranges (e.g., `^8.13.1`). While this allows for minor and patch updates, it means that the exact version of a dependency is not strictly pinned. This could potentially introduce unexpected behavior or vulnerabilities if a new minor version contains breaking changes or security flaws not present in the originally tested version. For maximum security and reproducibility, exact version pinning is recommended. Pin all dependencies to exact versions (e.g., `"pg": "8.13.1"`) to ensure consistent builds and prevent unexpected changes from upstream packages. Regularly audit and manually update dependencies to incorporate security fixes. | LLM | package.json:13 |
Scan History
Embed Code
[](https://skillshield.io/report/bdcfe012e9b2b379)
Powered by SkillShield