Trust Assessment
gcloud received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 2 critical, 3 high, 3 medium, and 1 low severity. Key findings include persistence/self-modification instructions, shell RC file modification as a persistence mechanism, and potential command injection through CLI arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 40/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (9)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. *Remediation:* Remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/jortega0033/gcloud/SKILL.md:15 |
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. *Remediation:* Remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/jortega0033/gcloud/SKILL.md:19 |
| HIGH | **Potential for Command Injection through CLI Arguments.** The skill provides numerous examples of `gcloud`, `gsutil`, and `firebase` CLI commands. If an AI agent executes these with user-controlled input for arguments (e.g., instance names, project IDs, SSH command strings, environment variable values, file paths), it could lead to command injection. For example, `gcloud compute ssh ... --command "..."` allows arbitrary command execution on a remote VM, and `gcloud run services update ... --set-env-vars "KEY1=value1,KEY2=value2"` could allow injection into environment variables if values are not sanitized. *Remediation:* Validate and sanitize all user-provided arguments before constructing shell commands; never concatenate user input directly into command strings; consider libraries that safely wrap CLI calls or escape arguments. | LLM | SKILL.md:69 |
| HIGH | **Capability to Publicly Expose Cloud Storage Data.** The skill explicitly demonstrates `gsutil iam ch allUsers:objectViewer gs://BUCKET_NAME`, which grants public read access to an entire Cloud Storage bucket. An AI agent instructed to use this command could inadvertently or maliciously expose sensitive data to the internet. The command requires significant IAM permissions and represents a high-risk operation. *Remediation:* Restrict the agent's IAM permissions to the minimum necessary; require guardrails and human oversight for commands that modify IAM policies or public access settings; confirm user intent before such destructive or public-facing actions. | LLM | SKILL.md:179 |
| HIGH | **Direct Access to Secrets via Secret Manager.** Commands such as `gcloud secrets versions access latest --secret=SECRET_NAME` retrieve secret values directly from Google Secret Manager. An agent with this skill and the necessary IAM permissions can be instructed to retrieve any secret it has access to, potentially exfiltrating sensitive credentials or data. *Remediation:* Grant the agent read access only to the specific secrets its legitimate functions require; avoid broad `secretmanager.secrets.access` permissions; log and alert on secret access. | LLM | SKILL.md:260 |
| MEDIUM | **Persistence mechanism: Shell RC file modification.** Detected a shell RC file modification pattern; persistence mechanisms allow malware to survive system restarts. *Remediation:* Review this persistence pattern; skills should not modify system startup configuration. | Static | skills/jortega0033/gcloud/SKILL.md:15 |
| MEDIUM | **Persistence mechanism: Shell RC file modification.** Detected a shell RC file modification pattern; persistence mechanisms allow malware to survive system restarts. *Remediation:* Review this persistence pattern; skills should not modify system startup configuration. | Static | skills/jortega0033/gcloud/SKILL.md:19 |
| MEDIUM | **Access to Potentially Sensitive Logs and VM Output.** The skill includes commands to read VM serial port output (`gcloud compute instances get-serial-port-output`) and Cloud Logging entries (`gcloud logging read`, `gcloud run services logs read`). These outputs can contain sensitive information, debugging data, or even credentials, which an agent could be prompted to retrieve and exfiltrate. *Remediation:* Restrict the agent's IAM permissions to only the logs and outputs strictly necessary for its function; apply data loss prevention (DLP) measures if the agent processes or summarizes logs; avoid logging sensitive data in the first place. | LLM | SKILL.md:77 |
| LOW | **Unpinned Global NPM Package Installation.** The installation instructions include `npm install -g firebase-tools`, which installs the latest version globally without pinning. While `firebase-tools` is a legitimate package, unpinned global installs carry supply chain risk if a malicious update is pushed to the npm registry or a typosquatted package is installed by mistake. *Remediation:* Pin versions (e.g., `npm install -g firebase-tools@X.Y.Z`) or use a package manager with lockfiles for deterministic installs; audit installed packages regularly. | LLM | SKILL.md:30 |
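One concrete mitigation for the command-injection finding is an allowlist check applied before any user-supplied value reaches a `gcloud` invocation, combined with passing the value as a separate argument rather than interpolating it into a command string. A minimal POSIX-sh sketch (the `validate_instance` helper is hypothetical, not part of the skill; the pattern reflects the documented GCE instance-name charset):

```shell
# Hypothetical helper: accept only lowercase letters, digits, and
# hyphens, so shell metacharacters such as ';', '$', backticks, and
# whitespace are rejected before any command is built.
validate_instance() {
  case "$1" in
    "" | *[!a-z0-9-]*) return 1 ;;  # empty, or contains a forbidden char
    *) return 0 ;;
  esac
}

# Usage: hand the value to gcloud only after it passes validation, and
# only as a single quoted argv element -- never spliced into a string:
#   validate_instance "$name" && gcloud compute ssh "$name" --command uptime
```

The same pattern applies to project IDs, bucket names, and env-var values: validate against the narrowest charset the API documents, and reject rather than escape on mismatch.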
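For the unpinned-install finding, a setup script can refuse to proceed unless the npm package spec carries an explicit `name@X.Y.Z` pin. A small sketch (the `is_pinned` helper and the `13.35.1` version are illustrative, not from the skill):

```shell
# Hypothetical pre-flight check: succeed only for specs that name an
# exact semver version (name@X.Y.Z), so "firebase-tools" or
# "firebase-tools@latest" are rejected.
is_pinned() {
  case "$1" in
    *@[0-9]*.[0-9]*.[0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage: gate the global install on the check, e.g.
#   spec="firebase-tools@13.35.1"
#   is_pinned "$spec" && npm install -g "$spec"
```

A lockfile-based workflow (installing from a project `package.json` with `npm ci`) gives the same determinism without a custom check.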
[Full report](https://skillshield.io/report/a7d14d215f47d19d)