Security Audit
lint-and-validate
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
lint-and-validate received a trust score of 20/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution, a dangerous call to subprocess.run(), and untrusted instructions to the LLM (prompt injection).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, the weakest result across the audit.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | `skills/lint-and-validate/scripts/lint_runner.py:80` |
| CRITICAL | **Untrusted instructions to LLM (prompt injection).** The SKILL.md file contains direct instructions to the LLM, such as 'MANDATORY: Run appropriate validation tools...' and 'Strict Rule: No code should be committed...'. These instructions are embedded within the untrusted input block and attempt to manipulate the LLM's behavior, a form of prompt injection. Remediation: remove all direct LLM instructions from the untrusted skill content; LLM instructions belong in the system prompt or skill definition, not the skill's user-facing documentation. | LLM | `SKILL.md:3` |
| CRITICAL | **Arbitrary command execution via `npm run lint` in an untrusted project directory.** The `scripts/lint_runner.py` script takes `project_path` from `sys.argv[1]`, which is untrusted input. If the detected project type is Node.js and a `lint` script is found in `package.json`, the script executes `npm run lint` with the provided `project_path` as the current working directory (`cwd`). An attacker can craft a malicious `package.json` in a directory and pass that directory as `project_path`; its `lint` script can then contain arbitrary shell commands, leading to command injection and potential remote code execution. Remediation: (1) strictly validate `project_path` so it points to a known safe location or adheres to strict naming conventions; (2) isolate execution in a heavily sandboxed environment (e.g. a container with minimal permissions, restricted network access, and a read-only filesystem outside the project directory); (3) avoid `npm run` for untrusted projects: parse `package.json` and invoke the linter directly (e.g. `npx eslint ...`) with specific, safe arguments rather than executing arbitrary user-defined scripts; (4) run the skill with the minimum necessary privileges. | LLM | `scripts/lint_runner.py:60` |
| HIGH | **Dangerous call: `subprocess.run()`.** A call to `subprocess.run()` was detected in function `run_linter`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system` and use safer alternatives. | Static | `skills/lint-and-validate/scripts/lint_runner.py:80` |
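The remediations above (validate `project_path`, replace `npm run lint` with a static command) can be sketched as follows. This is a minimal illustration, not the skill's actual code: `ALLOWED_ROOTS`, `validate_project_path`, and the hardened `run_linter` are hypothetical names, and the allowlisted root is an assumption.

```python
import subprocess
from pathlib import Path

# Assumption: audited projects live only under this root (illustrative).
ALLOWED_ROOTS = ((Path.home() / "projects").resolve(),)

def validate_project_path(raw: str) -> Path:
    """Reject any path outside an allowlisted root (finding: untrusted project_path)."""
    path = Path(raw).resolve()
    if not any(path.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise ValueError(f"project path outside allowed roots: {path}")
    return path

def run_linter(project_path: Path) -> int:
    """Invoke the linter with a fixed argv instead of `npm run lint`,
    so package.json's user-defined scripts are never executed."""
    result = subprocess.run(
        ["npx", "--no-install", "eslint", "."],  # static command, shell=False
        cwd=project_path,
        capture_output=True,
        text=True,
        timeout=120,
    )
    print(result.stdout)
    return result.returncode
```

Keeping the argv list static and setting `cwd` explicitly addresses the command-injection path; the path allowlist addresses the untrusted `sys.argv[1]` input. Sandboxing and least-privilege execution (remediations 2 and 4) still apply around this code.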
[Full report](https://skillshield.io/report/eacf63c0b47d41fa)
Powered by SkillShield