How to Contribute

TrustVector is an open-source, community-driven project that relies on contributions to evaluate AI systems.

What You Can Contribute

AI Models

Add evaluations for new LLMs, multimodal models, and specialized AI systems from any provider.

AI Agents

Evaluate agent frameworks like CrewAI, AutoGPT, LangGraph, and enterprise agent platforms.

MCP Servers

Add trust reports for Model Context Protocol servers that extend AI capabilities.

Quick Start Guide

1. Fork the Repository

Start by forking the TrustVector repository to your GitHub account, then clone your fork:

git clone https://github.com/YOUR_USERNAME/trust-vector.git
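
After cloning, it helps to keep your fork in sync with the original repository. A minimal sketch (the upstream URL below is a placeholder; use the actual TrustVector repository URL):

cd trust-vector
git remote add upstream https://github.com/ORIGINAL_OWNER/trust-vector.git
git fetch upstream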

2. Choose What to Evaluate

Create a new JSON file in the appropriate directory:

  • data/models/ for AI models
  • data/agents/ for AI agents
  • data/mcps/ for MCP servers
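
An easy way to start the new file is to copy an existing evaluation and edit it (both file names below are illustrative; pick a real file from the directory as your template):

cp data/models/existing-evaluation.json data/models/new-model-name.json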

3. Follow the Schema

Use existing files as templates. Every evaluation must include:

  • Five trust dimensions with scored criteria
  • Evidence with sources, URLs, and dates
  • Confidence levels (high, medium, low)
  • Use case ratings for different scenarios
  • Strengths and limitations

4. Submit a Pull Request

Open a PR with a clear description of what you've evaluated and why.

git checkout -b add-evaluation-[name] && git push origin HEAD
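
The one-liner above assumes your changes are already committed. Spelled out, the full sequence looks roughly like this (branch, file, and commit names are illustrative):

git checkout -b add-evaluation-new-model
git add data/models/new-model-name.json
git commit -m "Add evaluation for New Model"
git push origin HEAD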

Data Schema Overview

example-evaluation.json
{
  "id": "unique-identifier",
  "type": "model" | "agent" | "mcp",
  "name": "Display Name",
  "provider": "Provider Name",
  "version": "1.0.0",
  "last_evaluated": "2025-01-14",
  "description": "Brief description...",
  "trust_vector": {
    "performance_reliability": {
      "overall_score": 85,
      "criteria": {
        "criterion_name": {
          "score": 85,
          "confidence": "high" | "medium" | "low",
          "evidence": [{
            "source": "Source Name",
            "url": "https://...",
            "date": "2025-01-14",
            "value": "Key finding..."
          }]
        }
      }
    },
    "security": { ... },
    "privacy_compliance": { ... },
    "trust_transparency": { ... },
    "operational_excellence": { ... }
  },
  "use_case_ratings": {
    "code-generation": { "overall": 90, "notes": "..." }
  },
  "strengths": ["..."],
  "limitations": ["..."]
}
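
Note that the example above is a template: the "|" alternatives and "..." placeholders are not literal JSON, and the file you submit must parse as strict JSON. A quick sanity check, assuming Python 3 is available (the path is illustrative):

python3 -m json.tool data/models/new-model-name.json > /dev/null && echo "valid JSON"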

Evidence Guidelines

Accepted Sources

  • Official documentation and technical papers
  • Peer-reviewed research and benchmarks
  • Security audits and compliance certifications
  • Official GitHub repositories
  • Reputable security research publications

Use With Caution

  • Marketing materials (may be biased)
  • Unverified community reports
  • Outdated documentation (>6 months)
  • Self-reported benchmarks without validation
  • Anonymous or unattributed sources

Confidence Levels

  • High: Multiple authoritative sources. Official documentation, peer-reviewed research, recent data (within 3 months).
  • Medium: Some authoritative sources. Partial documentation, community feedback, data within 6 months.
  • Low: Limited sources available. Older data, inferred from general practices, or single-source information.
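
To double-check which confidence levels a draft evaluation actually uses, a jq one-liner works (assuming jq is installed; the path is illustrative):

jq '[.. | .confidence? // empty] | unique' data/models/new-model-name.json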

Currently Needed

MCP Servers

  • Supabase MCP Server
  • GitLab MCP Server
  • Perplexity MCP Server
  • Tavily MCP Server
  • Exa MCP Server
  • Context7 MCP Server
  • Google Maps MCP Server
  • ClickHouse MCP Server

AI Agents & Platforms

  • Kore.ai Enterprise Agents
  • Glean AI Platform
  • Sierra Customer Service
  • Moveworks Enterprise Assistant
  • Decagon Support AI
  • Aisera Service Automation
  • Cognigy Contact Center AI
  • Relevance AI Agents

Ready to Contribute?

Join our community of contributors helping build transparency in AI systems.