Trust Assessment
MemoryLayer received a trust score of 52/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 8 findings: 0 critical, 1 high, 4 medium, and 3 low severity. Key findings include Unsafe deserialization / dynamic eval, Suspicious import: requests, Unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings8
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Repository URL mismatch in package.json The `repository.url` in `package.json` (`https://github.com/davidhx1000-cloud/memorylayer-skill`) does not match the skill's hosting repository (`https://github.com/openclaw/skills`) as indicated by the metadata. This discrepancy can indicate that the published skill's source code might not originate from the expected or officially maintained repository, posing a significant supply chain risk. Users might expect the skill to be maintained by `openclaw` but the `package.json` points to a different, potentially untrusted, maintainer or fork. Update the `repository.url` in `package.json` to accurately reflect the actual, trusted source repository (e.g., `https://github.com/openclaw/skills/tree/main/skills/khli01/memorylayer` if it's a sub-directory, or the correct canonical repository). If it's an intentional fork, this should be clearly documented. | LLM | package.json:10 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/khli01/memorylayer/python/memorylayer_skill.py:5 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/khli01/memorylayer/python/memorylayer_skill.py:10 | |
| MEDIUM | Unpinned npm dependency version Dependency 'axios' is not pinned to an exact version ('^1.6.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/khli01/memorylayer/package.json | |
| MEDIUM | Unpinned Python dependency version Requirement 'requests>=2.31.0' is not pinned to an exact version. Pin Python dependencies with '==<exact version>'. | Dependencies | skills/khli01/memorylayer/python/requirements.txt:4 | |
| LOW | Unpinned Python dependency `requests` The `python/requirements.txt` file specifies `requests>=2.31.0`. Using a broad version range (`>=`) for dependencies can lead to unexpected behavior or security vulnerabilities if a future version introduces breaking changes or exploits. While `requests` is a widely used and generally trusted library, pinning to an exact version (`==`) or a more restrictive compatible version range (e.g., `requests~=2.31.0`) is recommended for production environments to ensure deterministic builds and reduce supply chain risks. Pin the `requests` dependency to an exact version (e.g., `requests==2.31.0`) or a more restrictive compatible version range (e.g., `requests~=2.31.0`) to ensure consistent builds and mitigate risks from unexpected updates. | LLM | python/requirements.txt:4 | |
| LOW | User-controlled memory content directly injected into LLM prompts The `get_context` function in `index.js` constructs a string for LLM prompt injection by directly embedding user-provided memory `content`. If a malicious actor can store memories (e.g., through a compromised user account or if the skill allows untrusted input to be stored as memories), they could inject instructions or adversarial prompts into the `content`. When this `content` is subsequently used by a host LLM, it could lead to prompt injection attacks, manipulating the LLM's behavior. This is an inherent risk for skills designed to provide context from user-controlled data. Implement robust sanitization or escaping mechanisms for `result.memory.content` before it is embedded into the LLM prompt string, especially if the skill can store memories from untrusted sources. Alternatively, ensure that the host LLM has strong defenses against prompt injection from its context window. | LLM | index.js:69 | |
| LOW | User-controlled memory content directly injected into LLM prompts The `get_context` function in `python/memorylayer_skill.py` constructs a string for LLM prompt injection by directly embedding user-provided memory `content`. If a malicious actor can store memories (e.g., through a compromised user account or if the skill allows untrusted input to be stored as memories), they could inject instructions or adversarial prompts into the `content`. When this `content` is subsequently used by a host LLM, it could lead to prompt injection attacks, manipulating the LLM's behavior. This is an inherent risk for skills designed to provide context from user-controlled data. Implement robust sanitization or escaping mechanisms for `content` before it is embedded into the LLM prompt string, especially if the skill can store memories from untrusted sources. Alternatively, ensure that the host LLM has strong defenses against prompt injection from its context window. | LLM | python/memorylayer_skill.py:144 |
Scan History
Embed Code
[](https://skillshield.io/report/69987ce33426ee9b)
Powered by SkillShield