Trust Assessment
aetherlang received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include "Potential Command Injection via AetherLang Flow DSL" and "Potential Prompt Injection against Remote LLM".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Potential Command Injection via AetherLang Flow DSL.** The skill sends user-provided Flow DSL code to a remote API (`api.neurodoc.app`) for processing and execution. While the skill claims to implement server-side validation and to block code execution (`eval`, `exec`), OS commands, and other injection types, processing arbitrary user-defined code inherently carries a risk of command injection. If the parser, interpreter, or execution environment has vulnerabilities, or if the stated mitigations are insufficient, an attacker could craft malicious DSL code to execute arbitrary commands on the remote server. Ensure robust, multi-layered input validation and sandboxing for all user-provided DSL code. Regularly audit the security middleware and the DSL execution engine for vulnerabilities. Apply strict least-privilege principles to the execution environment. | LLM | SKILL.md:16 |
| MEDIUM | **Potential Prompt Injection against Remote LLM.** The skill sends user-provided natural language queries to a remote API (`api.neurodoc.app`), which then invokes AI models (e.g., GPT-4o). While the skill claims to block prompt manipulation through server-side validation, prompt injection is a complex and evolving threat. Maliciously crafted queries could bypass these mitigations, leading to unintended behavior, data leakage from the remote LLM's context, or manipulation of the LLM's output. Continuously update and improve prompt injection defenses. Implement advanced filtering and output validation, and consider a separate, hardened LLM for safety checks. Educate users on the risks of including sensitive information in queries. | LLM | SKILL.md:16 |
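The first finding's remediation advice (multi-layered input validation before DSL code is forwarded) can be sketched as a client-side pre-filter. This is a minimal, hypothetical example: the denylist patterns and size cap are illustrative, not AetherLang's actual grammar or SkillShield's checks, and a denylist alone never proves code safe.

```python
import re

# Illustrative denylist of tokens that should never appear in Flow DSL
# source (assumed names; the real DSL grammar is not published here).
DENYLIST = re.compile(r"\b(eval|exec|import|__\w+__|os\.system|subprocess)\b")
MAX_DSL_LENGTH = 4096  # illustrative size cap

def validate_flow_dsl(code: str) -> bool:
    """Return True only if the DSL snippet passes basic client-side checks.

    This complements, but never replaces, server-side validation and
    sandboxing on api.neurodoc.app.
    """
    if len(code) > MAX_DSL_LENGTH:
        return False
    if DENYLIST.search(code):
        return False
    # Reject raw control characters that have no place in DSL source.
    if any(ord(ch) < 9 for ch in code):
        return False
    return True
```

Such a filter is one layer only; the report's recommendation of sandboxing and least privilege on the server side remains the primary control.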
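For the second finding, a heuristic query screen illustrates the "advanced filtering" the report recommends. The phrase list below is an assumption for demonstration; real defenses should combine such screening with output validation and, as the report suggests, a separate hardened safety model.

```python
# Hypothetical phrases associated with prompt-injection attempts.
SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your prompt",
]

def screen_query(query: str) -> bool:
    """Return True if the query passes a basic heuristic screen."""
    lowered = query.lower()
    return not any(p in lowered for p in SUSPICIOUS_PATTERNS)
```

Pattern matching of this kind is easily evaded (paraphrase, encoding tricks), which is why the finding stresses continuously updated, layered defenses rather than any single filter.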
[View the full report on SkillShield](https://skillshield.io/report/7abbe8b46c53dbe6)