Security Audit
segment-automation
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
segment-automation received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Skill allows modification of Segment source configuration" and "Flexible data submission allows potential PII exfiltration."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Skill allows modification of Segment source configuration.** The `SEGMENT_UPDATE_SOURCE` tool grants the ability to modify the configuration of a Segment data source. A malicious prompt could exploit this to redirect data to an attacker-controlled endpoint, disable data collection, or introduce other harmful changes to the data pipeline. The skill itself warns that "Source updates may affect data collection; review changes carefully," underscoring the impact of this tool. *Recommendation:* restrict `SEGMENT_UPDATE_SOURCE` to specific, safe modifications, or require explicit human approval for any configuration change. If full modification is necessary, implement robust input validation and sanitization to prevent malicious configurations. | LLM | SKILL.md:195 |
| MEDIUM | **Flexible data submission allows potential PII exfiltration.** The Segment tools (`SEGMENT_TRACK`, `SEGMENT_IDENTIFY`, `SEGMENT_BATCH`, etc.) accept freeform objects for `properties` and `traits`. Although the skill warns against sending PII ("Avoid sending PII in traits unless destinations are configured for it"), a compromised LLM could be instructed to exfiltrate sensitive user data (e.g., from chat history or other accessible context) by packaging it into these freeform fields and sending it to Segment. This flexibility is core to Segment's function but presents a data exfiltration risk if the LLM's inputs are not controlled. *Recommendation:* validate and sanitize all data passed to Segment tools, especially the `properties` and `traits` fields; explicitly constrain the LLM from including PII unless it is necessary and approved; and consider redacting or masking sensitive information before it reaches the skill. | LLM | SKILL.md:60 |
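To illustrate the human-approval mitigation recommended for the HIGH finding, the sketch below gates a source update behind a field allowlist and an explicit reviewer callback. This is a minimal sketch, not the skill's implementation: the wrapper name `guarded_update_source`, the `ALLOWED_FIELDS` set, and the stand-in return value are all assumptions rather than part of the audited skill or the Segment API.

```python
# Hypothetical approval gate for SEGMENT_UPDATE_SOURCE-style calls.
# Field names and the return value are illustrative assumptions.
ALLOWED_FIELDS = {"name", "enabled"}  # assumed safe-to-modify fields

def guarded_update_source(source_id, changes, approve):
    """Apply a source update only for allowlisted fields, and only
    after a human-approval callback returns True."""
    disallowed = set(changes) - ALLOWED_FIELDS
    if disallowed:
        raise ValueError(f"fields not permitted: {sorted(disallowed)}")
    if not approve(source_id, changes):
        raise PermissionError("update rejected by reviewer")
    # Stand-in for the real API call; a production wrapper would
    # forward the approved changes to the Segment tool here.
    return {"source_id": source_id, "applied": changes}
```

In practice the `approve` callback would surface the proposed diff to a human reviewer rather than returning a boolean programmatically.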
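For the MEDIUM finding, the redaction step recommended above can be sketched as a filter applied to `traits` (or `properties`) before they reach any Segment tool. The `PII_KEYS` deny-list, the email regex, and the placeholder token are assumptions chosen for illustration; a real deployment would tune these to its own data classification rules.

```python
import re

PII_KEYS = {"email", "phone", "ssn", "address"}  # assumed deny-list of PII field names
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # rough email pattern for masking

def redact_traits(traits):
    """Return a copy of traits with known-PII keys dropped and
    email-shaped substrings masked, before submission to Segment."""
    clean = {}
    for key, value in traits.items():
        if key.lower() in PII_KEYS:
            continue  # drop the field entirely
        if isinstance(value, str) and EMAIL_RE.search(value):
            value = EMAIL_RE.sub("[REDACTED_EMAIL]", value)  # mask in free text
        clean[key] = value
    return clean
```

Key-based dropping catches structured PII, while the regex pass catches PII embedded in free-text values; neither alone is sufficient, which is why the sketch layers both.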
[View the full report](https://skillshield.io/report/1f8ad15d205bfdf6)
Powered by SkillShield