Trust Assessment
figma-sync received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 0 critical, 3 high, 3 medium, and 0 low severity. Key findings include "Suspicious import: requests", "Arbitrary file write via `output_dir` argument", and "Arbitrary directory creation via `file_key` argument".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, making it the primary area for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary file write via `output_dir` argument.** The `output_dir` argument, user-controlled via command-line arguments in `figma_diff.py`, `figma_preview.py`, `figma_pull.py`, and `figma_push.py`, is used directly to construct file paths for writing output (e.g., `diff.json`, `preview.json`, `designModel.json`, `pluginSpec.json`). An attacker can use path traversal sequences (e.g., `../../`) in `output_dir` to write files to arbitrary locations on the filesystem, potentially overwriting critical system files or placing malicious files in unexpected locations. *Remediation:* strictly validate and sanitize `output_dir`; ensure it resolves to a path within an allowed, confined directory (e.g., a subdirectory of the current working directory or a temporary directory) and contains no path traversal sequences. | LLM | scripts/figma_common.py:120 |
| HIGH | **Arbitrary directory creation via `file_key` argument.** The `file_key` argument, user-controlled via command-line arguments in `figma_diff.py`, `figma_preview.py`, `figma_pull.py`, and `figma_push.py`, is used directly to construct the path for the Figma cache directory (`.figma-cache/<file_key>`). An attacker can use path traversal sequences (e.g., `../../`) in `file_key` to create directories at arbitrary locations on the filesystem, polluting the system or creating directories in sensitive areas. *Remediation:* strictly validate and sanitize `file_key`; ensure it conforms to the expected Figma file key format and contains no path traversal sequences. | LLM | scripts/figma_common.py:40 |
| HIGH | **Arbitrary file read via user-controlled path arguments.** The skill reads content from user-specified file paths such as `local_model_path` (in `figma_diff.py`), `operations_path` (in `figma_preview.py`), and `patch_spec_path` (in `figma_push.py`), which are passed directly to `Path(...).read_text()`. An attacker can supply a path to any file on the system (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) and read its contents. Although the content is processed locally, the ability to read arbitrary files is a significant information disclosure vulnerability. *Remediation:* strictly validate and sanitize file path arguments, restrict file access to specific, expected directories or file types, and reject path traversal sequences. | LLM | scripts/figma_diff.py:99 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. Verify this import is necessary. | Static | skills/kristinadarroch/figma-sync/scripts/figma_common.py:12 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. Verify this import is necessary. | Static | skills/kristinadarroch/figma-sync/scripts/figma_pull.py:11 |
| MEDIUM | **Potential code injection in generated output.** The `generate_code` function in `scripts/figma_pull.py` constructs TypeScript/JavaScript code using f-strings, incorporating data directly from Figma nodes such as `textContent`. Although there is an attempt to escape single quotes (`prop_value.replace("'", "\\'")`), other characters (e.g., newlines, backticks for template literals, or language-specific escape sequences) could break out of string literals or introduce new code constructs in the generated files. Any system that subsequently consumes and executes the generated code is therefore exposed to arbitrary code execution; the skill itself is not directly vulnerable, but it acts as a vector for downstream attacks. *Remediation:* comprehensively escape all data incorporated into generated code, covering the full syntax and escape rules of the target languages (TypeScript, JavaScript). This might involve a templating engine with auto-escaping or a robust custom mechanism that handles all potentially dangerous characters (e.g., `\`, `\n`, `\r`, `\t`, backticks) for the specific context (string literals, identifiers). | LLM | scripts/figma_pull.py:600 |
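The containment check recommended for the `output_dir` finding can be sketched as follows. This is a minimal illustration, not the skill's actual code: `safe_output_dir` and its `base` parameter are hypothetical names.

```python
from pathlib import Path


def safe_output_dir(output_dir: str, base: Path) -> Path:
    """Resolve output_dir and refuse any path outside the allowed base.

    Hypothetical helper: scripts like figma_pull.py would call something
    like this before writing diff.json, preview.json, etc.
    """
    resolved = (base / output_dir).resolve()
    try:
        # relative_to raises ValueError when resolved lies outside base,
        # which catches ../../ traversal after normalization.
        resolved.relative_to(base.resolve())
    except ValueError:
        raise ValueError(f"output_dir escapes allowed base: {output_dir!r}")
    return resolved
```

Checking the *resolved* path, rather than scanning the raw string for `..`, also handles traversal hidden behind redundant separators or `a/../../b`-style sequences.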
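For the `file_key` finding, an allow-list check is simpler than sanitization. The sketch below assumes Figma file keys are plain alphanumeric strings; the exact format should be confirmed against Figma's documentation, and `validate_file_key` is an illustrative name.

```python
import re

# Assumed key format: alphanumeric only, bounded length. Confirm against
# the real Figma file key format before relying on this pattern.
FILE_KEY_RE = re.compile(r"^[A-Za-z0-9]{1,128}$")


def validate_file_key(file_key: str) -> str:
    """Reject anything that is not a plausible Figma file key.

    An allow-list makes traversal sequences like ../../ impossible,
    since '.', '/', and '\\' are simply not permitted characters.
    """
    if not FILE_KEY_RE.fullmatch(file_key):
        raise ValueError(f"invalid Figma file key: {file_key!r}")
    return file_key
```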
[View the full report](https://skillshield.io/report/5d7ce87509c0d742)
Powered by SkillShield