Security Audit
skill-creator
github.com/algorand-devrel/algorand-agent-skills

Trust Assessment
skill-creator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include "Arbitrary Code Execution by Testing User-Provided Scripts" and "Potential Command Injection in Skill Utility Scripts".
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. The llm_behavioral_safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 11, 2026 (commit aafc1c60). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code Execution by Testing User-Provided Scripts.** The skill instructs the agent to test newly created scripts by "actually running them". During the skill creation process, the user provides the content for these scripts, so this workflow directs the agent to execute arbitrary, potentially malicious user-provided code under the guise of testing, leading to a critical remote code execution vulnerability. Remediation: remove the instruction to automatically run and test user-provided scripts; recommend static analysis, linting, or sandboxed execution instead. If execution is absolutely necessary, it should only occur after explicit, informed user consent for each execution, clearly stating the security risks. | Unknown | SKILL.md:308 |
| HIGH | **Potential Command Injection in Skill Utility Scripts.** The skill instructs the agent to execute the `init_skill.py` and `package_skill.py` scripts with user-provided input (e.g., skill name, paths) as command-line arguments. If the agent's execution environment builds the shell command by insecurely concatenating these strings, a malicious user could supply a crafted argument (e.g., `my-skill; rm -rf /`) to execute arbitrary commands on the host system. Remediation: advise the agent to execute subprocesses with methods that avoid shell interpretation of arguments; in Python, pass arguments as a list to `subprocess.run(..., shell=False)`. The skill should also recommend validating and sanitizing user-provided arguments such as `<skill-name>` and `<output-directory>` to prevent path traversal and other injection attacks. | Unknown | SKILL.md:272 |
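The critical finding's remediation can be sketched in code. This is a minimal, hypothetical illustration of the recommended alternative to "actually running" user-provided scripts: statically parse the code without executing it, and gate any real execution behind explicit, informed consent. The function names are assumptions, not part of the skill.

```python
import ast

def static_check(source: str) -> list[str]:
    """Report syntax problems in user-provided Python code
    WITHOUT executing it (ast.parse never runs the code)."""
    problems = []
    try:
        ast.parse(source)
    except SyntaxError as exc:
        problems.append(f"line {exc.lineno}: {exc.msg}")
    return problems

def confirm_execution(script_path: str, ask=input) -> bool:
    """Require explicit, informed user consent before each run,
    stating the security risk, as the finding recommends."""
    answer = ask(
        f"About to execute user-provided script {script_path!r}. "
        "This runs arbitrary code with your privileges. Proceed? [y/N] "
    )
    return answer.strip().lower() == "y"
```

Defaulting to "no" means an accidental Enter keypress never triggers execution; a sandboxed runner (container, restricted user) would be the stronger option where available.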
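The high-severity finding's remediation can likewise be sketched. This hypothetical wrapper shows the safe pattern: arguments are passed as a list with `shell=False`, so the shell never interprets them, and the user-supplied skill name is validated first. The script name and allowed name pattern are assumptions for illustration.

```python
import re
import subprocess
from pathlib import Path

# Assumed allowlist pattern for skill names: lowercase letters,
# digits, and hyphens only, so shell metacharacters are rejected.
SKILL_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{0,63}$")

def run_init_skill(skill_name: str, output_dir: str) -> None:
    if not SKILL_NAME_RE.fullmatch(skill_name):
        raise ValueError(f"invalid skill name: {skill_name!r}")
    out = Path(output_dir).resolve()  # normalize against path traversal
    # With a list of arguments and shell=False, a value like
    # "my-skill; rm -rf /" would be passed as one literal argument,
    # never interpreted by a shell -- but validation rejects it first.
    subprocess.run(
        ["python", "init_skill.py", skill_name, "--path", str(out)],
        shell=False,
        check=True,
    )
```

The contrast is with `subprocess.run(f"python init_skill.py {skill_name}", shell=True)`, where the crafted argument from the finding would execute `rm -rf /`.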
[View full report](https://skillshield.io/report/6e010c92bec9c427)
Powered by SkillShield