
Live model data for AI coding assistants

Your assistant's training data is outdated. index9 fixes that.

Browse Models

How it works

Your assistant's knowledge has a cutoff date. index9 doesn't.

MCP Server

Works in Cursor, Claude Desktop, VS Code, and any MCP-compatible client. One config, instant access.

Always current

Pricing, context windows, and capabilities for 300+ models. Synced continuously from OpenRouter.

Natural language search

Find models by description: 'fast vision model under $2/M tokens'. Filter by price, context, or capabilities.

Live model testing

Send identical prompts to 1–5 models. Compare outputs, latency, and cost in one request.

Why index9?

index9 is an MCP server your assistant calls directly—live model data appears inline, no browser tabs or manual searching.

Prompt

I need a fast, cheap model with vision for my React app...

Without index9

Based on my training data, GPT-4o or Claude Sonnet would work well. I don't have current pricing, and I can't verify what's available on OpenRouter right now...

Guessing from stale data. Can't compare alternatives or test live.

With index9
find_models()
  Seed 1.6 Flash — $0.075/M — vision, video
  Ministral 3B — $0.1/M — vision, tool_calling
  Nemotron Nano 12B VL — FREE — vision, video

test_model()
  Seed: 2.1s • $0.00008
  Ministral: 3.4s • $0.0001
  Nemotron: 5.2s • FREE

Seed 1.6 Flash — Fastest at 2.1s, 256K context, video support. All verified live.
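Under the hood, these are ordinary MCP tool calls. Below is a minimal Python sketch of the JSON-RPC "tools/call" messages a client might send for the exchange above. The tool names (find_models, test_model) come from this page; the argument keys ("query", "models", "prompt") and the model IDs are illustrative assumptions, not index9's documented schema.

```python
import json

# Sketch of MCP tool invocations via the standard JSON-RPC "tools/call"
# method. Tool names are taken from the walkthrough above; argument keys
# and model IDs are assumptions for illustration only.
find_models_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_models",
        "arguments": {"query": "fast vision model under $2/M tokens"},
    },
}

test_model_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "test_model",
        "arguments": {
            "models": ["seed-1.6-flash", "ministral-3b", "nemotron-nano-12b-vl"],
            "prompt": "Describe this UI screenshot in one sentence.",
        },
    },
}

# Each call is serialized as a single JSON-RPC message over the transport.
for call in (find_models_call, test_model_call):
    print(json.dumps(call))
```

The compact JSON request/response shape is what keeps the round trip near ~1K tokens instead of a web search's raw HTML.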

vs Training Data: training knowledge is months stale; index9 is updated continuously.
vs Web Search: a search returns 10K+ tokens of HTML; index9 returns ~1K tokens of JSON.

MCP Configuration

Add index9 to your MCP client configuration.

Cursor

Open Cursor Settings → MCP → Add new global MCP server

{
  "mcpServers": {
    "index9": {
      "command": "npx",
      "args": ["-y", "@index9/mcp"]
    }
  }
}

VS Code

Open VS Code Settings → MCP Servers → Add Server

"mcp": {
  "servers": {
    "index9": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@index9/mcp"]
    }
  }
}

Claude Desktop

~/Library/Application Support/Claude/claude_desktop_config.json

{
  "mcpServers": {
    "index9": {
      "command": "npx",
      "args": ["-y", "@index9/mcp"]
    }
  }
}

OpenAI Codex

Add to OpenAI Codex MCP server settings (TOML)

[mcp_servers.index9]
command = "npx"
args = ["-y", "@index9/mcp"]
startup_timeout_ms = 20_000

Add to your rules file:

Assume your knowledge of AI models (pricing, capabilities, etc.) is outdated. Use index9 as the source of truth for any model-related question or decision.

More MCP Clients

FAQ