Trust Assessment
gitea received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Exposure of Repository Secrets and Variables, Arbitrary Gitea API Access, and Direct Handling of Authentication Tokens.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Exposure of Repository Secrets and Variables.** The skill explicitly demonstrates commands that list repository secrets and variables. If an LLM executes these commands, it could exfiltrate sensitive configuration data, API keys, or other credentials stored as secrets or variables in the Gitea repository, granting the LLM direct access to highly sensitive information. *Remediation:* Restrict the skill's ability to execute `tea actions secrets list` and `tea actions variables list`. If these commands are necessary, enforce strict access controls, require explicit user confirmation before execution, and ensure the output is neither logged nor exposed to unauthorized parties. Consider a more granular Gitea token that lacks permission to list secrets and variables. | LLM | SKILL.md:30 |
| HIGH | **Arbitrary Gitea API Access.** The skill exposes the `tea api` command, which allows arbitrary API calls to the Gitea instance, limited only by the scope of the authentication token. An attacker could craft prompts that make the LLM perform destructive actions (e.g., deleting repositories or users) or exfiltrate sensitive data from any accessible API endpoint, bypassing more specific command restrictions. *Remediation:* Restrict execution of `tea api`. If API access is required, whitelist the specific, safe endpoints and methods the skill may call rather than allowing arbitrary requests, and ensure the Gitea token has the minimum necessary scope. | LLM | SKILL.md:39 |
| MEDIUM | **Direct Handling of Authentication Tokens.** The skill demonstrates `tea login add`, which accepts a `--token` argument, meaning the skill is designed to process and potentially store sensitive Gitea authentication tokens. This is a legitimate configuration function, but it risks credential exposure if the LLM is prompted to handle or store tokens supplied by a user or extracted from its context, if the LLM's environment is compromised, or if tokens are logged insecurely. *Remediation:* Ensure any tokens provided to the skill are handled securely by the execution environment, never logged, and never echoed in subsequent outputs. Prefer environment variables or a secure credential store over command-line arguments for sensitive tokens, and apply strict input validation and sanitization to token values. | LLM | SKILL.md:48 |
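The whitelist recommendation for `tea api` and the secrets-listing commands can be enforced with a command gate in front of the skill's shell execution. The sketch below is a minimal, illustrative example: the specific subcommand tuples in `ALLOWED_PREFIXES` and `DENIED_PREFIXES`, and the `is_command_allowed` helper, are assumptions for illustration, not part of the gitea skill or SkillShield's tooling.

```python
import shlex

# Illustrative policy: deny sensitive/broad subcommands outright,
# then allow only an explicit set of read-only ones. Default: deny.
DENIED_PREFIXES = [
    ("api",),                    # arbitrary API access (HIGH finding)
    ("actions", "secrets"),      # secrets listing (HIGH finding)
    ("actions", "variables"),    # variables listing (HIGH finding)
]
ALLOWED_PREFIXES = [
    ("issues", "list"),          # assumed-safe read-only subcommands
    ("pulls", "list"),
]

def is_command_allowed(command: str) -> bool:
    """Return True only for allowlisted `tea` invocations.

    Denied prefixes are checked first so a deny rule can never be
    shadowed by a broader allow rule; anything unmatched is rejected.
    """
    parts = shlex.split(command)
    if not parts or parts[0] != "tea":
        return False
    args = tuple(parts[1:])
    if any(args[:len(p)] == p for p in DENIED_PREFIXES):
        return False
    return any(args[:len(p)] == p for p in ALLOWED_PREFIXES)

print(is_command_allowed("tea issues list"))           # True
print(is_command_allowed("tea api repos/owner/repo"))  # False
print(is_command_allowed("tea actions secrets list"))  # False
```

Checking denials before allowances keeps the policy fail-closed: even if a broad prefix is later added to the allow list, the explicit denials for `tea api` and secrets/variables listing still take precedence.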
[Full report](https://skillshield.io/report/247d2dc03fd38523)