Trust Assessment
senior-architect received a trust score of 67/100, placing it in the Caution category. The skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. The key findings are a path traversal via an unsanitized project path and execution of external package manager commands (both high), and unsafe deserialization / dynamic eval (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Path Traversal via unsanitized project path.** The Python scripts `scripts/architecture_diagram_generator.py`, `scripts/dependency_analyzer.py`, and `scripts/project_architect.py` use the `project_path` argument directly in file system operations (`Path(project_path)`, `project_path.iterdir()`, `project_path.rglob('*')`, `file_path.read_text()`) without sanitization. If the LLM invokes these scripts with a user-controlled `project_path`, an attacker could supply a path like `../../../../etc/passwd` to read arbitrary files outside the intended project directory, leading to data exfiltration. Remediation: sanitize the `project_path` argument in all affected scripts so it stays within the allowed project directory. Use `pathlib.Path.resolve()` and verify that the resolved path is a subpath of a trusted base directory, or restrict the LLM's ability to provide arbitrary paths. | LLM | scripts/architecture_diagram_generator.py:40 |
| HIGH | **Execution of external package manager commands.** The `scripts/dependency_analyzer.py` script invokes external package manager commands (`go list`, `npm list`, `pip list`, `cargo tree`) via an internal `_run_command` method. Although the arguments to these commands are hardcoded, executing external binaries against a user-controlled project directory grants significant capabilities: a compromised environment (e.g., a poisoned `PATH`) could redirect the lookups, and the tools themselves may have vulnerabilities triggerable by the project's contents. Remediation: avoid direct execution of external commands where possible. If they are necessary, tightly control the execution environment, run the commands with least privilege, and sanitize `PATH`. Consider parsing dependency files with language-specific libraries instead of shelling out to CLI tools, and apply input validation and sandboxing to the execution environment. | LLM | scripts/dependency_analyzer.py:160 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/alirezarezvani/senior-architect/scripts/project_architect.py:11 |
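The path traversal remediation suggested above (resolve, then check containment) can be sketched as follows. The helper name `resolve_project_path` is hypothetical, not part of the skill's code; it assumes Python 3.9+ for `Path.is_relative_to`.

```python
from pathlib import Path


def resolve_project_path(user_path: str, base_dir: str) -> Path:
    """Resolve a user-supplied path and reject anything outside base_dir.

    Hypothetical sketch of the report's recommendation: resolve both
    paths (collapsing `..` segments and symlinks), then verify the
    candidate is still a subpath of the trusted base directory.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes project root: {user_path!r}")
    return candidate
```

With this guard in place, an input like `../../../../etc/passwd` resolves outside the base directory and raises `ValueError` instead of reaching `read_text()`.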
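If the external package manager commands cannot be replaced with library-based parsing, the hardening recommended for the second finding might look like this minimal sketch. The names `SAFE_ENV` and `run_tool` are illustrative, not taken from `dependency_analyzer.py`; the sketch assumes a POSIX system.

```python
import shutil
import subprocess

# Hypothetical locked-down environment: a fixed PATH instead of the
# caller's, so a poisoned PATH cannot redirect the binary lookup.
SAFE_ENV = {"PATH": "/usr/bin:/bin", "LANG": "C"}


def run_tool(tool: str, args: list[str], cwd: str, timeout: int = 60) -> str:
    """Run a known CLI tool with a sanitized environment and a timeout."""
    # Resolve the binary against the fixed PATH only.
    exe = shutil.which(tool, path=SAFE_ENV["PATH"])
    if exe is None:
        raise FileNotFoundError(f"{tool} not found on sanitized PATH")
    # No shell, absolute executable path, explicit cwd and env.
    result = subprocess.run(
        [exe, *args],
        cwd=cwd,
        env=SAFE_ENV,
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,
    )
    return result.stdout
```

This does not remove the risk of vulnerabilities inside the tools themselves, so running the analyzer in a sandbox with least privilege remains the stronger mitigation.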
[Full report](https://skillshield.io/report/5ffb8948a5d909e3)
Powered by SkillShield