Trust Assessment
linkedin-monitor received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 16 findings: 2 critical, 6 high, 7 medium, and 1 low severity. Key findings include "Persistence / self-modification instructions," "Unsafe deserialization / dynamic eval," and "Sensitive path access: AI agent config."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 5/100, indicating serious issues in the skill's code itself.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (16)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions.** Crontab manipulation (list/remove/edit). *Remediation:* Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | `skills/dylanbaker24/linkedin-monitor/scripts/health.sh:137` |
| CRITICAL | **Prompt Injection via Untrusted Message Content in Cron Output.** The `cron-wrapper.sh` script directly embeds untrusted LinkedIn message content (participant name and message text) into the output that is processed by the host LLM. An attacker sending a specially crafted LinkedIn message could inject instructions into the LLM, leading to arbitrary actions, data exfiltration, or manipulation of the LLM's behavior. The full JSON result, including untrusted message details, is also embedded. *Remediation:* Sanitize all untrusted user input before embedding it into prompts or instructions for the LLM. Instead of directly embedding the raw message text, consider using a placeholder and providing the message content as a separate, clearly delineated input to the LLM, or use a structured data format that prevents instruction injection. Ensure the LLM is sandboxed and cannot execute arbitrary commands or access sensitive resources based on user input. | LLM | `scripts/cron-wrapper.sh:69` |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared. | Static | `skills/dylanbaker24/linkedin-monitor/SKILL.md:57` |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared. | Static | `skills/dylanbaker24/linkedin-monitor/SKILL.md:112` |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared. | Static | `skills/dylanbaker24/linkedin-monitor/SKILL.md:152` |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared. | Static | `skills/dylanbaker24/linkedin-monitor/scripts/check-browser.sh:43` |
| HIGH | **Prompt Injection Risk in Browser-based Reply Drafting.** The `check-browser.sh` script instructs the Clawdbot's browser tool to "Draft replies using USER.md communication style" based on new inbound messages. If an inbound message contains prompt injection attempts, these could manipulate the LLM's reply generation process. Additionally, the `USER.md` file, which defines the communication style, is user-controlled and could itself be crafted to contain malicious instructions. *Remediation:* Implement robust sanitization and input validation for all untrusted content (inbound messages) before it is used by the LLM for reply generation. Ensure the LLM operates within a strict sandbox and cannot perform unauthorized actions. Review the `USER.md` file for any potential prompt injection vectors and advise users on safe content for such configuration files. | LLM | `scripts/check-browser.sh:37` |
| HIGH | **Direct Credential Harvesting and Storage.** The `lk.py` script's `cmd_auth_setup` function interactively prompts the user for highly sensitive LinkedIn session cookies (`li_at` and `jsessionid`) and saves them directly to `~/.clawdbot/linkedin-monitor/credentials.json`. While this is intended functionality for the skill, it represents a direct capture and storage of credentials. If this file is compromised, an attacker could gain full access to the user's LinkedIn account. *Remediation:* Avoid storing raw session cookies directly. If possible, use OAuth or other token-based authentication mechanisms that provide refresh tokens instead of long-lived session cookies. If direct cookie storage is unavoidable, ensure the `credentials.json` file has the strictest possible file permissions (e.g., `chmod 600`) and is encrypted at rest. Implement mechanisms to detect and alert on unauthorized access to this file. | LLM | `scripts/lk.py:140` |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. *Remediation:* Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/dylanbaker24/linkedin-monitor/scripts/lk.py:4` |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to sensitive environment variable `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | `skills/dylanbaker24/linkedin-monitor/scripts/check-browser.sh:8` |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to sensitive environment variable `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | `skills/dylanbaker24/linkedin-monitor/scripts/check.sh:8` |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to sensitive environment variable `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | `skills/dylanbaker24/linkedin-monitor/scripts/cron-wrapper.sh:8` |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to sensitive environment variable `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | `skills/dylanbaker24/linkedin-monitor/scripts/health.sh:6` |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to sensitive environment variable `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | `skills/dylanbaker24/linkedin-monitor/scripts/state.sh:5` |
| MEDIUM | **Unpinned Python Dependency.** The `lk.py` script imports the `linkedin_api` Python package without specifying a version. This makes the skill vulnerable to supply chain attacks, such as dependency confusion or malicious package updates. If a malicious version of `linkedin_api` is published, it could be installed and executed. *Remediation:* Pin the version of `linkedin-api` in a `requirements.txt` file or directly in the installation command (e.g., `pip3 install linkedin-api==X.Y.Z`). Regularly review and update pinned dependencies to ensure security patches are applied while maintaining version control. | LLM | `scripts/lk.py:10` |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). *Remediation:* Commit a lockfile for deterministic dependency resolution. | Dependencies | `skills/dylanbaker24/linkedin-monitor/package.json` |
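For the unpinned-dependency finding, the pinning could look like the fragment below. `X.Y.Z` is kept as a placeholder, as in the finding itself; the actual release number must be checked against PyPI:

```text
# requirements.txt: pin the exact linkedin-api release (X.Y.Z is a
# placeholder), optionally adding a --hash line for supply-chain
# integrity so pip rejects a tampered artifact.
linkedin-api==X.Y.Z
```

Installing with `pip3 install -r requirements.txt` then resolves deterministically instead of pulling whatever version is latest.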
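For the cron-output prompt-injection finding, one way to read the "clearly delineated input" remediation is to fence untrusted message text before it reaches the host LLM. A minimal sketch, assuming the wrapper can preprocess text in Python; the delimiter token and function name are illustrative, not part of the skill:

```python
def wrap_untrusted(text: str) -> str:
    """Fence untrusted LinkedIn message content so the host LLM can be
    told to treat everything inside the fence as data, never as
    instructions. The delimiter token is a hypothetical choice."""
    # Strip any occurrence of the delimiter token itself so a crafted
    # message cannot close the fence early and smuggle instructions out.
    safe = text.replace("UNTRUSTED_CONTENT", "")
    return (
        "<<<UNTRUSTED_CONTENT\n"
        f"{safe}\n"
        "UNTRUSTED_CONTENT>>>"
    )
```

Fencing alone does not make injection impossible; it only gives the LLM an unambiguous boundary, which is why the finding also recommends sandboxing the LLM's capabilities.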
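For the credential-storage finding, a minimal sketch of the "strictest possible file permissions" remediation, assuming the skill keeps writing `credentials.json` at all (the function name is hypothetical; encryption at rest would still be needed on top of this):

```python
import json
import os

def save_credentials(path: str, creds: dict) -> None:
    # Create (or truncate) the file with owner-only read/write from the
    # start, so there is no window where the cookies are world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(creds, f)
    # The mode passed to os.open only applies on creation; enforce it
    # even when the file already existed with looser permissions.
    os.chmod(path, 0o600)
```

This is the Python equivalent of the `chmod 600` the finding suggests, applied atomically at creation rather than after the secrets are already on disk.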
[View the full report](https://skillshield.io/report/c9dd718074a3fd02)
Powered by SkillShield