Security Audit
performance-profiling
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
performance-profiling received a trust score of 27/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings include Arbitrary command execution, Dangerous call: subprocess.run(), Unsanitized user input leads to argument injection in external command.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Although each individual layer scored 70 or above, the critical findings drive the overall trust score down to 27/100.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary command execution: Python shell execution (`os.system`, `subprocess`). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/performance-profiling/scripts/lighthouse_audit.py:22 |
| CRITICAL | Unsanitized user input leads to argument injection in an external command. The `run_lighthouse` function builds a command for the `lighthouse` CLI using `subprocess.run`, and the `url` argument, taken directly from `sys.argv[1]` without any sanitization or validation, is included in the argument list. An attacker can craft a malicious `url` string (e.g., `--output-path=/etc/passwd`) to inject arbitrary command-line arguments into the `lighthouse` command, allowing writes to arbitrary files and potentially leading to data corruption, privilege escalation, or remote code execution. Implement strict validation and sanitization for the `url` input: ensure it is a well-formed URL, use a dedicated URL parsing library, and explicitly reject any input that starts with `--`. | LLM | scripts/lighthouse_audit.py:24 |
| HIGH | Dangerous call: `subprocess.run()` detected in function `run_lighthouse`. This can execute arbitrary code. Avoid dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/performance-profiling/scripts/lighthouse_audit.py:22 |
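The remediation suggested for the argument-injection finding can be sketched as follows. This is a minimal illustration, not the audited script: the `run_lighthouse` signature, the `validate_url` helper, and the `--output=json` flag are assumptions for the example; only the vulnerable pattern (passing `sys.argv[1]` straight to `subprocess.run`) comes from the report.

```python
import subprocess
import sys
from urllib.parse import urlparse

def validate_url(url: str) -> str:
    """Reject inputs that could be interpreted as CLI options and
    require a well-formed http(s) URL. (Hypothetical helper.)"""
    if url.startswith("-"):
        # Blocks injections like "--output-path=/etc/passwd".
        raise ValueError("URL must not begin with '-' (option injection)")
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"not a well-formed http(s) URL: {url!r}")
    return url

def run_lighthouse(url: str) -> None:
    safe_url = validate_url(url)
    # List-form argv (no shell=True) plus pre-validated input.
    subprocess.run(["lighthouse", safe_url, "--output=json"], check=True)

if __name__ == "__main__":
    run_lighthouse(sys.argv[1])
```

Validating before the `subprocess.run` call, rather than escaping afterwards, means a rejected input never reaches the external binary at all; rejecting any leading `-` is stricter than rejecting only `--` but closes single-dash option injection as well.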
[Full report](https://skillshield.io/report/5c518670e731be24)
Powered by SkillShield