Trust Assessment
transcript-to-content received a trust score of 65/100, placing it in the Caution category: the skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 3 critical, 1 high, 1 medium, and 0 low severity. All three critical findings are command injection risks, via `ls`, `grep`, and `cp` invoked with user-controlled paths or arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100 and accounts for all five findings below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via `ls` with user-controlled path.** The skill instructs the LLM to execute the shell command `ls -lah /home/ubuntu/projects/[project-name]/`. If `[project-name]` is derived from untrusted user input, an attacker can inject arbitrary commands through the project name, enabling arbitrary file system access, data exfiltration, or system compromise. Avoid executing shell commands with user-controlled input: use a safe, sandboxed listing API that strictly validates its input, or ensure `[project-name]` is selected from a predefined, trusted list (see the subprocess sketch after this table). | LLM | SKILL.md:40 |
| CRITICAL | **Command Injection via `grep` with user-controlled keyword and path.** The skill instructs the LLM to execute the shell command `grep -ri "keyword" /home/ubuntu/projects/[project-name]/*.md`. Both `"keyword"` and `[project-name]` are placeholders likely to be filled from untrusted user input, so an attacker can inject arbitrary commands through either parameter, enabling arbitrary file system access, data exfiltration, or system compromise. Avoid executing shell commands with user-controlled input: use a safe, sandboxed search API, or ensure `[project-name]` and `keyword` come from trusted sources or are properly escaped (see the subprocess sketch after this table). | LLM | SKILL.md:45 |
| CRITICAL | **Command Injection via `cp` with user-controlled paths.** The skill instructs the LLM to execute the shell command `cp [logo-path] [project-dir]/logo.png`. If `[logo-path]` or `[project-dir]` is derived from untrusted user input, an attacker can inject arbitrary commands or manipulate file paths, leading to arbitrary file overwrites or data exfiltration (by copying sensitive files to an accessible location). Avoid executing shell commands with user-controlled input: copy files through a safe, sandboxed API that validates paths and confines them to expected boundaries (see the path-containment sketch after this table). | LLM | SKILL.md:124 |
| HIGH | **Command Injection via `manus-export-slides` with user-controlled ID.** The skill instructs the LLM to execute the shell command `manus-export-slides manus-slides://[version-id] pdf`. If `[version-id]` is derived from untrusted user input, an attacker could inject arbitrary commands or manipulate the tool's behavior. The immediate impact may be limited to the `manus-export-slides` tool, but it is still an uncontrolled shell execution point. Ensure `[version-id]` is strictly validated against a known set of safe values or generated internally by the system, never taken directly from user input (the allowlist check in the subprocess sketch below applies here as well). | LLM | SKILL.md:130 |
| MEDIUM | **Excessive Permissions: direct file system access for reading skill references.** The skill explicitly instructs the LLM to read local files such as `/home/ubuntu/skills/transcript-to-content/references/master-knowledge-source-format.md` and `/home/ubuntu/skills/transcript-to-content/references/presentation-guidelines.md`. These paths sit inside the skill's own directory and are likely intended, but they demonstrate the LLM's ability to read files directly from the local file system. Combined with the command-injection findings above, that capability could be leveraged to exfiltrate data or read sensitive configuration files if paths are not strictly controlled. Restrict file access to a sandboxed, permission-controlled API that permits only explicitly allowed files or directories (the path-containment sketch after this table shows one such check). | LLM | SKILL.md:62 |
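
The three critical findings share one remediation pattern: never interpolate untrusted values into a shell string. Below is a minimal sketch of that pattern in Python, assuming the skill's commands were wrapped in helper functions; the function names, the `SAFE_NAME` allowlist, and the `PROJECTS_ROOT` constant are illustrative assumptions, not part of the skill or of SkillShield's output.

```python
import re
import subprocess
from pathlib import Path

# Illustrative constants -- not taken from the skill itself.
PROJECTS_ROOT = Path("/home/ubuntu/projects")
SAFE_NAME = re.compile(r"^[A-Za-z0-9_-]+$")  # allowlist: no shell metacharacters

def resolve_project(project_name: str) -> Path:
    """Map a user-supplied project name onto a directory, or refuse."""
    if not SAFE_NAME.fullmatch(project_name):
        raise ValueError(f"rejected project name: {project_name!r}")
    return PROJECTS_ROOT / project_name

def list_project(project_name: str) -> str:
    """Shell-free equivalent of `ls -lah /home/ubuntu/projects/<name>/`."""
    # argv is passed as a list and shell=True is never used, so nothing
    # in project_name can be parsed as shell syntax.
    proc = subprocess.run(
        ["ls", "-lah", str(resolve_project(project_name))],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout

def search_project(project_name: str, keyword: str) -> str:
    """Shell-free equivalent of `grep -ri "keyword" <project>/*.md`."""
    files = sorted(resolve_project(project_name).glob("*.md"))
    if not files:
        return ""  # with no file args, grep would read stdin and hang
    # "--" stops grep from treating a keyword like "-r" as an option.
    proc = subprocess.run(
        ["grep", "-ri", "--", keyword, *map(str, files)],
        capture_output=True, text=True,
    )
    return proc.stdout  # grep exits 1 on no match; that is not an error here
```

The same `SAFE_NAME`-style allowlist also covers the HIGH finding: validate `[version-id]` against its expected format before it ever reaches `manus-export-slides`.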
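
The `cp` finding and the MEDIUM excessive-permissions finding both reduce to path containment: resolve every path before use and refuse anything that escapes an approved root. A sketch under the same caveats (`ALLOWED_ROOT`, `contain`, and `safe_copy_logo` are hypothetical names, and `Path.is_relative_to` requires Python 3.9+):

```python
import shutil
from pathlib import Path

# Illustrative root -- in practice, the skill's designated working area.
ALLOWED_ROOT = Path("/home/ubuntu/projects").resolve()

def contain(path_str: str) -> Path:
    """Resolve a path and refuse it if it escapes ALLOWED_ROOT."""
    # resolve() collapses ".." segments and follows symlinks *before*
    # the check, so "../../etc/passwd"-style inputs are caught here.
    p = Path(path_str).resolve()
    if not p.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes allowed root: {p}")
    return p

def safe_copy_logo(logo_path: str, project_dir: str) -> Path:
    """Shell-free equivalent of `cp [logo-path] [project-dir]/logo.png`."""
    src = contain(logo_path)
    dst = contain(project_dir) / "logo.png"
    shutil.copyfile(src, dst)  # a library call, not a shell command
    return dst
```

Reading the skill's own reference files could pass through the same `contain` gate with the root pinned to the skill directory, which would address the MEDIUM finding without removing the capability.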
Full report: https://skillshield.io/report/78eb3607ff6e5a41