Trust Assessment
bark-push received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 2 medium, and 1 low severity. Key findings include potential data exfiltration via a configurable API endpoint, sensitive credentials stored in plaintext configuration, and a suspicious `urllib.request` import.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Data Exfiltration via Configurable API Endpoint | LLM | bark_push/command_handler.py:36 |
| MEDIUM | Suspicious import: urllib.request | Static | skills/liberalchang/barkpush/bark_push/bark_api.py:7 |
| MEDIUM | Sensitive Credentials Stored in Plaintext Configuration | LLM | bark_push/config_manager.py:102 |
| LOW | Unvalidated `action` Parameter May Lead to Client-Side Issues | LLM | bark_push/command_handler.py:170 |

HIGH · Potential Data Exfiltration via Configurable API Endpoint (LLM · bark_push/command_handler.py:36)
The skill lets users configure `default_push_url` in `config.json` or via the `BARK_PUSH_CONFIG` environment variable. All sensitive data, including user `device_key`s, `ciphertext` (if configured), and message `content`, is sent to this URL. If an attacker can trick a user into configuring a malicious `default_push_url` (for example, through a compromised skill update, social engineering, or a malicious `config.json` file), everything the skill sends is exfiltrated to the attacker's server. Recommendation: implement strict validation or an allowlist for `default_push_url` where possible, or clearly warn users about the security implications of pointing it at an untrusted endpoint, and highlight this risk prominently in the skill's documentation.

MEDIUM · Suspicious import: urllib.request (Static · skills/liberalchang/barkpush/bark_push/bark_api.py:7)
Import of `urllib.request` detected. This module provides network access, and network or low-level system modules in skill code may indicate data exfiltration. Verify that this import is necessary.

MEDIUM · Sensitive Credentials Stored in Plaintext Configuration (LLM · bark_push/config_manager.py:102)
The skill stores Bark `device_key`s and an optional `ciphertext` in plaintext in `config.json`, and also reads them from the `BARK_USERS` and `BARK_CIPHERTEXT` environment variables. While plaintext configuration is common, any attacker who gains access to the system or to the configuration file can retrieve these credentials directly. Recommendation: advise users to store `device_key`s and `ciphertext` in a secure secrets management system (environment variables managed by a secure orchestrator, or a dedicated secrets vault) rather than directly in `config.json`. If `config.json` must be used, recommend strong filesystem permissions to restrict access. For `ciphertext`, consider using a key derivation function and storing only a hash, or encrypting the value with a master key.

LOW · Unvalidated `action` Parameter May Lead to Client-Side Issues (LLM · bark_push/command_handler.py:170)
The `action` parameter accepts a user-supplied JSON string that is included in the payload sent to the Bark API. The skill itself serializes this string correctly within the overall request payload, but how the Bark client application on the user's device ultimately interprets and renders the `action` is outside the skill's control. If the client app does not sanitize or validate its contents, this could lead to client-side vulnerabilities such as cross-site scripting (XSS) in web views or unexpected behavior in native applications, especially if the `action` field can execute code or render dynamic content. Recommendation: warn in the skill's documentation that only trusted JSON structures should be used for `action`, and recommend that the Bark API and client applications robustly validate and sanitize all fields that can carry arbitrary user-provided data.
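The high-severity finding recommends strict validation or an allowlist for `default_push_url`. A minimal sketch of such a check, assuming HTTPS-only transport and `api.day.app` as the sole permitted host (the allowlist contents and the function name are illustrative, not part of the skill's actual code):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the skill may push to.
# api.day.app is assumed here for illustration; adjust for self-hosted servers.
ALLOWED_PUSH_HOSTS = {"api.day.app"}

def validate_push_url(url: str) -> str:
    """Reject push URLs that are not HTTPS or not on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"push URL must use https, got {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_PUSH_HOSTS:
        raise ValueError(f"push host {parsed.hostname!r} is not allowlisted")
    return url
```

With this in place, a `config.json` pointing `default_push_url` at an attacker-controlled endpoint would fail at load time instead of silently forwarding device keys and message content.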
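For the plaintext-credentials finding, one mitigation the report suggests is preferring environment variables and tightening filesystem permissions on `config.json`. A hedged sketch along those lines, assuming a `users` key in `config.json` and a 0600 permission requirement (both are illustrative assumptions, not the skill's confirmed schema):

```python
import json
import os
import stat

def load_device_keys(config_path: str = "config.json") -> list:
    """Load Bark device keys, preferring the BARK_USERS environment variable.

    Falls back to config.json only when the file is readable solely by its
    owner, so world- or group-readable plaintext credentials are rejected.
    """
    env_users = os.environ.get("BARK_USERS")
    if env_users:
        # Comma-separated device keys, e.g. "key1,key2".
        return [k.strip() for k in env_users.split(",") if k.strip()]

    mode = stat.S_IMODE(os.stat(config_path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"{config_path} is group/world accessible (mode {oct(mode)}); "
            "tighten to 0600 or set BARK_USERS instead"
        )
    with open(config_path) as f:
        return json.load(f).get("users", [])
```

This does not make the credentials secret, but it moves them out of a file that survives in backups and repository checkouts, and it refuses the most common misconfiguration.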
Embed Code
[SkillShield report for bark-push](https://skillshield.io/report/9c929cc612a31bb0)
Powered by SkillShield