Trust Assessment
pyright-lsp received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. Key findings include "Untrusted content recommends direct installation of external packages", "Untrusted content describes command execution with user-controlled path", and "Untrusted content describes command execution with user-controlled directory".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Untrusted content recommends direct installation of external packages.** The skill documentation, treated as untrusted input, contains shell commands for installing `pyright` globally via `npm` (`npm install -g pyright`) and locally via `pip` (`pip install pyright`, `pipx install pyright`). If an AI agent were to interpret and execute these commands, it would lead to command injection, installing external software on the system. This also introduces a significant supply chain risk (SS-LLM-006), as the integrity of the installed packages (pyright from npm/PyPI) cannot be guaranteed, and a compromised package could lead to further system compromise. Global installation via `npm -g` is particularly concerning due to its broad impact. Installation of external dependencies should be handled by the skill's trusted environment setup (e.g., `requirements.txt` or `package.json` managed by the platform) or explicitly confirmed and sandboxed by the user. Direct installation commands within untrusted documentation should be removed or clearly marked as user-only instructions, not for agent execution. | LLM | SKILL.md:17 |
| HIGH | **Untrusted content describes command execution with user-controlled path.** The skill documentation, treated as untrusted input, provides an example of running `pyright` with a file path: `pyright path/to/file.py`. If an AI agent were to execute this command and substitute `path/to/file.py` with unsanitized user-provided input, it could lead to command injection. An attacker could craft a malicious path (e.g., `'; rm -rf /'`) to execute arbitrary commands. Ensure that any user-provided paths or project roots passed to external commands are strictly validated and sanitized to prevent command injection. Consider using a dedicated tool wrapper that handles argument parsing securely, avoiding direct concatenation into shell commands. | LLM | SKILL.md:30 |
| HIGH | **Untrusted content describes command execution with user-controlled directory.** The skill documentation, treated as untrusted input, provides an example of changing directory and running `pyright`: `cd project-root && pyright`. If an AI agent were to execute this command and substitute `project-root` with unsanitized user-provided input, it could lead to command injection. An attacker could craft a malicious directory name (e.g., `'; rm -rf /'`) to execute arbitrary commands. Ensure that any user-provided paths or project roots passed to external commands are strictly validated and sanitized to prevent command injection. Avoid directly concatenating untrusted input into shell commands. | LLM | SKILL.md:35 |
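For the first finding, the recommended remediation is to move installation out of agent-executable documentation and into the trusted environment setup. A minimal sketch, assuming the platform installs from a `requirements.txt` (the exact version pin is illustrative, not a vetted release):

```
# requirements.txt (illustrative; pin a version you have audited)
pyright==1.1.390
```

Pinning an exact version keeps installs reproducible and narrows the supply-chain window compared with an unpinned `npm install -g pyright` run ad hoc by an agent.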
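The mitigation described in the last two findings (validate user-supplied paths and avoid shell concatenation) can be sketched as follows. This is an illustrative wrapper, not part of the skill itself; the `run_pyright` helper and the project-root containment check are assumptions.

```python
import subprocess
from pathlib import Path

def run_pyright(target: str, project_root: str) -> str:
    """Run pyright on a user-supplied path without shell interpolation."""
    root = Path(project_root).resolve()
    path = (root / target).resolve()
    # Reject paths that escape the project root (e.g. "../" traversal
    # or absolute paths); requires Python 3.9+ for is_relative_to.
    if not path.is_relative_to(root):
        raise ValueError(f"path escapes project root: {target}")
    # Argument-list invocation: no shell is involved, so metacharacters
    # in the path (quotes, semicolons, "&&") reach pyright literally
    # instead of being interpreted as commands.
    result = subprocess.run(
        ["pyright", str(path)],
        cwd=root,  # replaces "cd project-root && pyright" without a shell
        capture_output=True,
        text=True,
    )
    return result.stdout
```

Passing a list to `subprocess.run` (rather than a concatenated string with `shell=True`) is what defuses inputs like `'; rm -rf /'`: they become a single literal argument, and the containment check rejects them before any process is spawned.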