Trust Assessment
mersoom-ai-client received a trust score of 58/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 0 critical, 2 high, 4 medium, and 0 low severity. Key findings include a suspicious `requests` import, hardcoded absolute paths for data storage, and user-provided input stored without sanitization, posing a downstream prompt-injection risk.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 49/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Hardcoded absolute paths for data storage.** The skill uses hardcoded absolute file paths (`/home/sampple/clawd/memory/mersoom_logs` and `/home/sampple/clawd/memory/mersoom_memory/knowledge.json`) for logging and memory storage, which hurts both portability and security: in a sandboxed environment these paths may not be writable, causing errors, and where they are writable the skill can write data outside its designated skill-specific directory, risking unauthorized data modification or unintended data storage in the user's home directory. *Remediation:* replace the hardcoded paths with relative paths (e.g., `memory/mersoom_logs`) or derive them from an environment variable or configuration value that defines a base directory guaranteed to be within the skill's allowed write scope, e.g. `os.path.join(os.getenv('SKILL_DATA_DIR', '.'), 'memory', 'mersoom_logs')`. | LLM | scripts/mersoom_api.py:10 |
| HIGH | **Hardcoded absolute paths for data storage.** Same issue as above, at the memory-storage path: the skill hardcodes `/home/sampple/clawd/memory/mersoom_memory/knowledge.json`, so in a sandbox the path may not be writable, and where it is writable the skill can write outside its designated directory. *Remediation:* replace the hardcoded path with a relative one (e.g., `memory/mersoom_memory/knowledge.json`) or derive it from a configurable base directory within the skill's allowed write scope, e.g. `os.path.join(os.getenv('SKILL_DATA_DIR', '.'), 'memory', 'mersoom_memory', 'knowledge.json')`. | LLM | scripts/mersoom_memory.py:5 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network access. Verify the import is necessary: network and low-level system modules in skill code may indicate data exfiltration. | Static | skills/sampple-korea/mersoom-ai-client/scripts/mersoom_api.py:4 |
| MEDIUM | **User-provided input stored without sanitization (downstream prompt-injection risk).** The skill stores user-provided inputs (e.g., `nickname`, `title`, `content`, `notes`, `summary`) directly into local Markdown log files and a JSON memory file. If these files, or output derived from them (e.g., the return value of `get_context()`), are later consumed by an LLM without sanitization, a malicious user could embed prompt-injection instructions in their input and manipulate the LLM's behavior. *Remediation:* sanitize user-provided data before storing it (escape Markdown characters, limit length, filter keywords), and when retrieving it for an LLM, escape it or wrap it in delimiters so it cannot be interpreted as instructions. | LLM | scripts/mersoom_api.py:30 |
| MEDIUM | **User-provided input stored without sanitization (downstream prompt-injection risk).** Same issue as above: unsanitized user input is written to the Markdown logs and JSON memory file and may later reach an LLM, where embedded prompt-injection instructions could alter its behavior. *Remediation:* sanitize before storing and escape or delimit when retrieving for LLM consumption. | LLM | scripts/mersoom_memory.py:20 |
| MEDIUM | **User-provided input stored without sanitization (downstream prompt-injection risk).** Same issue as above, at the memory-write path: unsanitized user input persisted here may later be fed to an LLM as context. *Remediation:* sanitize before storing and escape or delimit when retrieving for LLM consumption. | LLM | scripts/mersoom_memory.py:30 |
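The path remediation suggested for the HIGH findings can be sketched as a small helper. This is a minimal illustration, not the skill's actual code: the `SKILL_DATA_DIR` environment-variable name and the `storage_paths` helper are assumptions, and only the `memory/...` sub-paths come from the report.

```python
import os

def storage_paths(base_dir: str = "") -> dict[str, str]:
    """Build skill storage paths under a configurable base directory.

    Falls back to the SKILL_DATA_DIR environment variable (an assumed name),
    then to the current working directory, so no absolute path is hardcoded.
    """
    base = base_dir or os.getenv("SKILL_DATA_DIR", ".")
    return {
        "log_dir": os.path.join(base, "memory", "mersoom_logs"),
        "knowledge_file": os.path.join(
            base, "memory", "mersoom_memory", "knowledge.json"
        ),
    }
```

Deriving every path from one base directory keeps all writes inside the skill's allowed scope and makes the skill portable across sandboxes.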
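The sanitization remediation for the MEDIUM findings could look like the sketch below. The function names, the control-character filter, and the `<user_data>` delimiter format are all illustrative assumptions; the report only prescribes escaping Markdown characters, limiting length, and delimiting stored data before LLM consumption.

```python
import re

def sanitize_for_log(text: str, max_len: int = 2000) -> str:
    """Sanitize user input before storing it in a Markdown log.

    Drops control characters, escapes Markdown heading/quote markers at the
    start of lines, and caps the length (limits are illustrative).
    """
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)  # strip control chars
    text = re.sub(r"(?m)^(\s*)([#>])", r"\1\\\2", text)   # escape leading # and >
    return text[:max_len]

def wrap_for_llm(field: str, value: str) -> str:
    """Wrap stored data in explicit delimiters before passing it to an LLM,
    so the model treats it as data rather than instructions."""
    return f"<user_data field={field!r}>\n{sanitize_for_log(value)}\n</user_data>"
```

Delimiting is a mitigation, not a guarantee: the consuming prompt should still instruct the model to treat anything inside the delimiters as untrusted data.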
Powered by SkillShield