Trust Assessment
onboard-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Controlled Files, Data Exfiltration via LLM Prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled Files The skill directly embeds content from user-controlled project files (package.json, README.md, and file paths) into the LLM prompt. A malicious actor could craft these files to include instructions that manipulate the host LLM, leading to prompt injection attacks. This could bypass the system prompt and force the LLM to perform unintended actions, such as revealing sensitive information or generating harmful content. Implement robust input sanitization or a clear separation between user-provided content and system instructions. Consider using a dedicated tool call for file content rather than direct prompt injection. Warn users about the risk of including sensitive or malicious instructions in project files and advise against running the tool in untrusted directories. | LLM | src/index.ts:13 | |
| HIGH | Data Exfiltration via LLM Prompt The skill reads the entire 'package.json' file, the first 3000 characters of 'README.md', and recursively lists up to 100 file paths from the current working directory. This collected 'context' is then sent to an external LLM service (OpenAI). If sensitive information (e.g., API keys, secrets, PII, proprietary code snippets) is present in these files or their names, it will be exfiltrated to the LLM provider. While the skill's purpose is to analyze the project, sending potentially sensitive data without explicit user consent or redaction mechanisms poses a significant data privacy risk. Implement explicit user consent before sending file contents to an external service. Provide options for users to redact sensitive information or specify which files/directories to include/exclude. Clearly document the data transmission practices and potential risks. Consider client-side redaction of common sensitive patterns before sending to the LLM. | LLM | src/index.ts:9 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/onboarding-gen/package.json | |
| MEDIUM | Excessive File System Read Permissions The skill requests broad file system read access by recursively listing files in the entire current working directory (`process.cwd()`) using `fs.readdirSync(cwd, { recursive: true })`. Although only the first 100 file paths are sent to the LLM, the initial recursive scan grants access to all files and directories within the project. This broad permission, combined with the data exfiltration vector, increases the risk of exposing sensitive file paths or contents that are not strictly necessary for the skill's stated purpose. Narrow the scope of file system access to only essential directories and file types. For example, allow users to specify a list of files/directories to scan, or use a `.gitignore`-like mechanism to exclude sensitive paths. Only read files that are explicitly required for the onboarding guide generation. | LLM | src/index.ts:11 |
Scan History
Embed Code
[](https://skillshield.io/report/28c895865817c76c)
Powered by SkillShield