Security Audit
typescript-expert
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
typescript-expert received a trust score of 20/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 3 critical, 3 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution, a dangerous `subprocess.run()` call, and direct shell command execution in the skill's instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/typescript-expert/scripts/ts_diagnostic.py:16` |
| CRITICAL | **Direct shell command execution in skill instructions.** The skill's primary instructions in `SKILL.md` explicitly direct the LLM to execute shell commands (`npx`, `npm`, `node`, `tsc`, `test`, `command -v`) directly on the host system. If any part of these commands (e.g. arguments, or script names in `npm run`) can be influenced by untrusted input, this could lead to arbitrary command execution; even without external influence, arbitrary shell execution is a high-risk operation that grants an AI agent broad control over the environment. *Remediation:* Avoid direct shell execution; use sandboxed environments, specific tool APIs, or strictly validated, parameterized commands. If shell execution is unavoidable, implement robust input sanitization and allow-listing of commands and arguments. | LLM | `SKILL.md:20` |
| CRITICAL | **Dangerous `shell=True` usage in supporting diagnostic script.** The supporting file `scripts/ts_diagnostic.py` uses `subprocess.run(cmd, shell=True)` to execute shell commands. With `shell=True`, any untrusted content in the `cmd` string can inject arbitrary commands; if the LLM were manipulated (e.g. via prompt injection) into running `python scripts/ts_diagnostic.py`, this would create a severe command-injection vulnerability on the host system. *Remediation:* Never use `shell=True` with commands that might involve untrusted input. Pass commands as a list of arguments (e.g. `subprocess.run(['command', 'arg1', 'arg2'])`) to prevent shell injection, and strictly validate and sanitize any command arguments. | LLM | `scripts/ts_diagnostic.py:14` |
| HIGH | **Dangerous call: `subprocess.run()`.** A call to `subprocess.run()` was detected in function `run_cmd`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/typescript-expert/scripts/ts_diagnostic.py:16` |
| HIGH | **Reading `package.json` and writing diagnostic logs in skill instructions.** The instructions in `SKILL.md` direct the LLM to read `package.json` via `node -e "require('./package.json')"` and to write diagnostic information to `resolution.log` and trace files. Reading local files can expose project dependencies and configuration; written logs can expose project structure, file paths, or other internal details that should not leave the environment. *Remediation:* Avoid reading arbitrary local files or writing diagnostic output to files that could be accessed or exfiltrated; if specific file content is needed, use a dedicated, sandboxed file-access tool with strict path validation and content filtering. | LLM | `SKILL.md:22` |
| HIGH | **Reading source code files via `grep` in supporting diagnostic script.** `scripts/ts_diagnostic.py` runs `grep -r ': any' ... src/` and `grep -r ' as ' ... src/`, which read the contents of potentially sensitive `.ts` and `.tsx` files under `src/`. If the LLM were manipulated into running this script, it could exfiltrate proprietary business logic, API endpoints, or other confidential information from the source code. *Remediation:* Avoid reading arbitrary source files. If code analysis is required, use dedicated, sandboxed tools that operate on a limited scope and do not expose raw file content, and strictly filter and anonymize any data read before processing or output. | LLM | `scripts/ts_diagnostic.py:100` |
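The `shell=True` remediation above can be illustrated with a minimal, self-contained sketch. This is not the audited script: it uses a benign `echo` in place of the skill's actual `tsc`/`npx` invocations, purely to show why the string-plus-`shell=True` form is injectable while the list-argument form is not.

```python
import subprocess

def run_unsafe(filename: str) -> str:
    # shell=True: the whole string is parsed by /bin/sh, so shell
    # metacharacters in filename (';', '&&', '$(...)') run as commands.
    result = subprocess.run(f"echo {filename}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def run_safe(filename: str) -> str:
    # List form: no shell parses the arguments; filename is passed as a
    # single argv entry and cannot inject a second command.
    result = subprocess.run(["echo", filename],
                            capture_output=True, text=True)
    return result.stdout

payload = "hello; echo INJECTED"
# run_unsafe(payload) executes two commands: "hello" then "INJECTED"
# run_safe(payload) echoes the payload as one literal argument
```

With the list form, a hostile argument degrades to a harmless literal string, which is why the report recommends it even when the command itself seems fixed.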
[View the full report on SkillShield](https://skillshield.io/report/cd8de489b3451085)