Trust Assessment
langchain-architecture received a trust score of 92/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 4 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include LLM Security Risks Acknowledged, High-Level Mitigation Guidance for Prompt Injection and Data Exfiltration, Specific Mitigation for Tool Abuse and Command Execution Risks.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 5d65aa10). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | High-Level Mitigation Guidance for Prompt Injection and Data Exfiltration The skill documentation provides high-level mitigation strategies such as 'Input Validation & Sanitization' and 'Output Sanitization' for prompt injection and data exfiltration. While correct, these general recommendations lack specific technical details or examples for effective implementation in a LangChain context, potentially leaving developers to discover best practices on their own. Enhance the skill documentation with more specific examples, best practices, or references to detailed guides on implementing robust input/output validation and sanitization techniques tailored for LLM applications and LangChain. | LLM | plugins/llm-application-dev/skills/langchain-architecture/SKILL.md:74 | |
| MEDIUM | Missing Explicit Guidance on Credential Harvesting and Advanced Prompt Injection (Hidden Instructions) While 'Data Exfiltration' and 'Prompt Injection' are mentioned, the documentation does not explicitly call out specific threats like credential harvesting or advanced prompt injection techniques (e.g., hidden instructions, indirect prompt injection via retrieved documents). Explicitly addressing these nuanced threats could further strengthen the security guidance provided by the skill. Add dedicated sections or expand existing ones to specifically discuss credential harvesting risks and advanced prompt injection vectors (like hidden instructions in external data sources or adversarial suffixes), along with tailored mitigation strategies. | LLM | plugins/llm-application-dev/skills/langchain-architecture/SKILL.md:67 | |
| INFO | LLM Security Risks Acknowledged The skill documentation explicitly identifies several critical security risks associated with building LLM applications using LangChain, including Prompt Injection, Data Exfiltration, and Tool Abuse. This demonstrates awareness of common threats in LLM development. N/A - This finding highlights a positive aspect of the documentation's security awareness. Applications built using this skill's knowledge should implement the mentioned mitigations. | LLM | plugins/llm-application-dev/skills/langchain-architecture/SKILL.md:65 | |
| INFO | Specific Mitigation for Tool Abuse and Command Execution Risks The skill documentation specifically addresses the risk of 'Tool Abuse,' including scenarios like 'shell access,' which directly relates to command abuse. It provides concrete mitigation strategies such as 'Least Privilege for Tools' and 'Human-in-the-Loop' for critical actions, which are strong recommendations for preventing unauthorized command execution. N/A - This finding highlights good security advice within the documentation. Applications built using this skill's knowledge should implement these specific mitigations. | LLM | plugins/llm-application-dev/skills/langchain-architecture/SKILL.md:76 |
Scan History
Embed Code
[](https://skillshield.io/report/00e7bf6c5f93fa14)
Powered by SkillShield