Trust Assessment
fda-consultant-specialist received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. All three findings concern the same issue, Potential Data Exfiltration via Path Traversal in Project Directory Argument, reported in three separate scripts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, making it the primary area for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Data Exfiltration via Path Traversal in Project Directory Argument | LLM | scripts/fda_submission_tracker.py:40 |
| HIGH | Potential Data Exfiltration via Path Traversal in Project Directory Argument | LLM | scripts/hipaa_risk_assessment.py:39 |
| HIGH | Potential Data Exfiltration via Path Traversal in Project Directory Argument | LLM | scripts/qsr_compliance_checker.py:39 |

The Python scripts `fda_submission_tracker.py`, `hipaa_risk_assessment.py`, and `qsr_compliance_checker.py` each accept a `project_dir` argument that is used to construct file paths and traverse directories (e.g., `Path(project_dir)`, `os.walk(project_dir)`). If the argument is not properly sanitized or validated, an attacker could supply path traversal sequences (e.g., `../../../../etc`) to read arbitrary files outside the intended project directory. The contents of those files could then be included in the script's output, leading to sensitive data disclosure.

Recommended remediation: implement robust path validation and sanitization for the `project_dir` argument in all three scripts. Canonicalize the provided path and strictly confine it to an allowed, sandboxed directory, for example by combining `os.path.abspath` (or `Path.resolve`) with a check that the result remains within a designated base directory, or by using a secure file-system access library.
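The canonicalize-and-contain check described in the remediation can be sketched as follows. This is a minimal illustration, not code from the skill itself; `ALLOWED_BASE` and `safe_project_dir` are hypothetical names chosen for the example:

```python
from pathlib import Path

# Hypothetical sandbox root; in practice this would be the skill's
# designated project directory.
ALLOWED_BASE = Path("/srv/projects").resolve()

def safe_project_dir(project_dir: str) -> Path:
    """Canonicalize a user-supplied path and reject anything that
    escapes ALLOWED_BASE (e.g. via '../' sequences or absolute paths)."""
    # Joining an absolute path replaces the base entirely, so the
    # containment check below also catches inputs like "/etc/passwd".
    resolved = (ALLOWED_BASE / project_dir).resolve()
    # Path.is_relative_to (Python 3.9+) compares canonical paths,
    # so traversal sequences are neutralized before the check.
    if not resolved.is_relative_to(ALLOWED_BASE):
        raise ValueError(f"project_dir escapes sandbox: {project_dir!r}")
    return resolved
```

A caller would then pass `safe_project_dir(args.project_dir)` to `os.walk` instead of the raw argument, so every traversed path is guaranteed to sit under the sandbox root.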
Embed Code
[View full report](https://skillshield.io/report/f1033e7b4f644719)