Trust Assessment
The hubspot skill received a trust score of 81/100, placing it in the Mostly Trusted category. The skill passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Session Token Exposure in Example Output and Sensitive CRM Data Exposure via Standard Output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Session Token Exposure in Example Output.** The 'Get Connection' example demonstrates retrieving connection details, and the provided example response JSON explicitly includes a 'url' field containing a 'session_token'. Printing this response to standard output, as shown in the example, directly exposes a sensitive session token. If an AI agent executes this code and its output is not properly secured, the token could be captured and used for unauthorized access to Maton's connection management interface. *Recommendation:* redact or mask sensitive values like session tokens in example outputs, and advise users to handle API responses containing credentials with care, avoiding printing them to insecure logs or standard output in production environments. | LLM | SKILL.md:79 |
| MEDIUM | **Sensitive CRM Data Exposure via Standard Output.** Multiple Python code examples for interacting with the HubSpot CRM API (e.g., listing contacts, companies, deals) print full JSON responses to standard output. These responses contain sensitive customer relationship management data such as emails, names, phone numbers, company details, and deal information. If an AI agent executes these examples or similar code and its output is not properly secured, this CRM data could be exposed or exfiltrated by a malicious prompt or insecure logging. *Recommendation:* implement output sanitization, redaction, or selective parsing of sensitive fields when processing API responses, especially when the skill's output may be exposed to untrusted parties or stored in logs, and ensure the AI agent's environment and output channels are secure. | LLM | SKILL.md:15 |
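Both findings share the same remediation: strip or mask sensitive values before a response is printed or logged. A minimal sketch of that idea is below; the field names (`session_token`, `email`, etc.) are illustrative assumptions, not specifics of the HubSpot or Maton APIs.

```python
import json

# Assumed set of key names to treat as sensitive; extend per your API.
SENSITIVE_KEYS = {"session_token", "email", "phone", "api_key", "token"}

def redact(value, keys=SENSITIVE_KEYS):
    """Recursively mask values whose key names look sensitive."""
    if isinstance(value, dict):
        return {
            k: "***REDACTED***" if k.lower() in keys else redact(v, keys)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v, keys) for v in value]
    return value

# Hypothetical response shaped like the examples the findings describe.
response = {
    "url": {"session_token": "abc123"},
    "contacts": [{"email": "jane@example.com", "name": "Jane"}],
}
print(json.dumps(redact(response), indent=2))
```

Printing the redacted copy rather than the raw response keeps logs and agent output free of credentials and PII while leaving non-sensitive fields intact.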
[View the full report on SkillShield](https://skillshield.io/report/09ff45fd6b0af391)
Powered by SkillShield