Trust Assessment
google-photos received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 3 critical, 2 high, 2 medium, and 1 low severity. Key findings include "Network egress to untrusted endpoints", "Suspicious import: requests", and "Potential data exfiltration: file read + network send".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 10/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** Python `requests` POST/PUT to a URL. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/jorgermp/google-photos/scripts/gphotos.py:56 |
| CRITICAL | **Network egress to untrusted endpoints.** Python `requests` POST/PUT to a URL. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/jorgermp/google-photos/scripts/gphotos.py:68 |
| CRITICAL | **Network egress to untrusted endpoints.** Python `requests` POST/PUT to a URL. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/jorgermp/google-photos/scripts/gphotos.py:93 |
| HIGH | **Potential data exfiltration: file read + network send.** The function `upload_photo` reads files and sends data over the network, which may indicate data exfiltration. Remediation: review this function to ensure file contents are not sent to external servers. | Static | skills/jorgermp/google-photos/scripts/gphotos.py:67 |
| HIGH | **Command injection via pickle deserialization.** `scripts/gphotos.py` calls `pickle.load()` on a file path (`--token`) directly controlled by user input. Deserializing untrusted or maliciously crafted data with `pickle` can lead to arbitrary code execution on the system running the skill. Although the skill normally generates the token file itself, it also accepts an existing one, so a path to a malicious pickle file is exploitable. Remediation: avoid `pickle` for user-controlled file paths; if `pickle` must be used, keep `token_path` strictly under the skill's internal logic, or implement robust validation/sandboxing. Prefer a safer serialization format such as JSON for tokens, and generate and manage the token file internally in a secure, isolated location. | LLM | scripts/gphotos.py:20 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network access. Remediation: verify the import is necessary, since network and system modules in skill code may indicate data exfiltration. | Static | skills/jorgermp/google-photos/scripts/gphotos.py:6 |
| MEDIUM | **Excessive file read permissions.** The `upload_photo` function opens and reads any file named by the user-provided `--photo` argument, so the skill can read any file the invoking user can access. Although the content is uploaded to the user's own Google Photos account rather than exfiltrated to a third party, a malicious user could read sensitive local files and retrieve them via Google Photos, bypassing local access controls. Remediation: restrict uploads to a specific directory, validate file extensions, or require explicit confirmation before reading files from arbitrary locations; a sandboxed execution environment may also mitigate this risk. | LLM | scripts/gphotos.py:70 |
| LOW | **Hardcoded Python interpreter path in shebang.** The shebang in `scripts/gphotos.py` hardcodes an absolute path inside one user's home directory (`#!/home/jorge/.openclaw/workspace/skills/google-photos/venv/bin/python3`), making the script non-portable and brittle: in any other environment the path is likely invalid, causing execution failure or, if the path happens to exist, execution with an unintended interpreter. Not a direct vulnerability, but a supply-chain risk related to environment setup and reliability. Remediation: change the shebang to `#!/usr/bin/env python3` so the interpreter is found via `PATH`, and ensure the skill's virtual environment is activated or correctly configured by the execution environment. | LLM | scripts/gphotos.py:1 |
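The pickle and file-read remediations above can be sketched as follows. This is a minimal illustration, not the skill's actual code: `UPLOAD_ROOT`, `save_token`, `load_token`, and `validate_photo_path` are hypothetical names, and the token is assumed to be plain JSON-serializable data.

```python
import json
from pathlib import Path

# Hypothetical allowed upload directory; a real skill would make this configurable.
UPLOAD_ROOT = Path.home() / "Pictures"

ALLOWED_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".heic"}


def save_token(token: dict, token_path: str) -> None:
    """Persist token data as JSON instead of pickle.

    JSON cannot encode executable objects, so loading a tampered token
    file cannot trigger arbitrary code execution the way pickle.load() can.
    """
    Path(token_path).write_text(json.dumps(token))


def load_token(token_path: str) -> dict:
    """Load a JSON token; raises ValueError on malformed input instead of executing it."""
    return json.loads(Path(token_path).read_text())


def validate_photo_path(photo_path: str) -> Path:
    """Restrict uploads to UPLOAD_ROOT to limit the skill's file-read scope."""
    resolved = Path(photo_path).resolve()
    if UPLOAD_ROOT.resolve() not in resolved.parents:
        raise ValueError(f"refusing to read file outside {UPLOAD_ROOT}: {resolved}")
    if resolved.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"unsupported file type: {resolved.suffix!r}")
    return resolved
```

Resolving the path before checking its parents also defeats `../` traversal; whether an allow-listed directory or an explicit user confirmation is the better control depends on how the skill is deployed.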