
Models API

REST API endpoint for retrieving available models and their specifications

The Models API exposes the full list of available models through MegaLLM's documentation site. The endpoint returns the same data displayed on the Models Catalog page.

Endpoint

GET /api/models

Response Format

The API returns a JSON response with the following structure:

{
  "success": true,
  "data": [
    {
      "id": "gpt-4o-mini",
      "object": "model",
      "type": "chat",
      "created_at": "2024-07-18T00:00:00.000Z",
      "owned_by": "openai",
      "display_name": "GPT-4o mini",
      "capabilities": {
        "supports_function_calling": true,
        "supports_vision": true,
        "supports_streaming": true,
        "supports_structured_output": true
      },
      "pricing": {
        "input_tokens_cost_per_million": 0.15,
        "output_tokens_cost_per_million": 0.6,
        "currency": "USD"
      },
      "context_length": 128000,
      "max_output_tokens": 16384
    }
  ],
  "total": 89,
  "lastUpdated": "2025-01-29T10:30:00.000Z"
}

Response Fields

| Field | Type | Description |
|---|---|---|
| success | boolean | Whether the request was successful |
| data | array | Array of model objects |
| total | number | Total number of models returned |
| lastUpdated | string | ISO timestamp of when the data was last fetched |

Model Object Fields

| Field | Type | Description |
|---|---|---|
| id | string | Model identifier used in API calls |
| object | string | Always "model" |
| type | string | Model type (e.g., "chat", "embedding") |
| created_at | string | ISO timestamp of model creation |
| owned_by | string | Provider/owner of the model |
| display_name | string | Human-readable model name |
| capabilities | object | Model capabilities object |
| pricing | object | Pricing information |
| context_length | number | Maximum context window in tokens |
| max_output_tokens | number | Maximum output tokens |

Capabilities Object

| Field | Type | Description |
|---|---|---|
| supports_function_calling | boolean | Function/tool calling support |
| supports_vision | boolean | Image/vision processing support |
| supports_streaming | boolean | Response streaming support |
| supports_structured_output | boolean | Structured JSON output support |

Pricing Object

| Field | Type | Description |
|---|---|---|
| input_tokens_cost_per_million | number | Cost per million input tokens |
| output_tokens_cost_per_million | number | Cost per million output tokens |
| currency | string | Pricing currency (always "USD") |
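As a sketch of how the fields above map onto typed structures, the model object can be parsed into Python dataclasses. The field names come directly from the tables; the `parse_model` helper itself is illustrative, not part of the API:

```python
from dataclasses import dataclass

@dataclass
class Capabilities:
    supports_function_calling: bool
    supports_vision: bool
    supports_streaming: bool
    supports_structured_output: bool

@dataclass
class Pricing:
    input_tokens_cost_per_million: float
    output_tokens_cost_per_million: float
    currency: str

@dataclass
class Model:
    id: str
    display_name: str
    owned_by: str
    type: str
    context_length: int
    max_output_tokens: int
    capabilities: Capabilities
    pricing: Pricing

def parse_model(raw: dict) -> Model:
    """Convert one entry from the response's `data` array into a Model."""
    return Model(
        id=raw["id"],
        display_name=raw["display_name"],
        owned_by=raw["owned_by"],
        type=raw["type"],
        context_length=raw["context_length"],
        max_output_tokens=raw["max_output_tokens"],
        capabilities=Capabilities(**raw["capabilities"]),
        pricing=Pricing(**raw["pricing"]),
    )
```

Parsing into dataclasses gives attribute access (`model.pricing.currency`) and surfaces missing or renamed fields as immediate errors rather than silent `None`s.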

Usage Examples

JavaScript

// Fetch all models
async function fetchModels() {
  try {
    const response = await fetch('/api/models');
    const { success, data, total } = await response.json();

    if (success) {
      console.log(`Found ${total} models:`);
      data.forEach(model => {
        console.log(`${model.id} - ${model.display_name}`);
      });
    }
  } catch (error) {
    console.error('Error fetching models:', error);
  }
}

// Filter models by provider
async function getOpenAIModels() {
  const response = await fetch('/api/models');
  const { data } = await response.json();

  const openaiModels = data.filter(model =>
    model.owned_by.toLowerCase().includes('openai')
  );

  return openaiModels;
}

// Find models with specific capabilities
async function getVisionModels() {
  const response = await fetch('/api/models');
  const { data } = await response.json();

  return data.filter(model =>
    model.capabilities.supports_vision
  );
}

Python

import requests

# Fetch all models
def fetch_models():
    try:
        response = requests.get('https://yourdomain.com/api/models')
        data = response.json()

        if data['success']:
            print(f"Found {data['total']} models:")
            for model in data['data']:
                print(f"{model['id']} - {model['display_name']}")

    except requests.RequestException as e:
        print(f"Error fetching models: {e}")

# Filter models by price range
def get_budget_models(max_input_cost=1.0):
    response = requests.get('https://yourdomain.com/api/models')
    data = response.json()

    budget_models = [
        model for model in data['data']
        if model['pricing']['input_tokens_cost_per_million'] <= max_input_cost
    ]

    return budget_models

# Get models with function calling
def get_function_calling_models():
    response = requests.get('https://yourdomain.com/api/models')
    data = response.json()

    return [
        model for model in data['data']
        if model['capabilities']['supports_function_calling']
    ]

cURL

# Fetch all models
curl -X GET https://yourdomain.com/api/models

# Using jq to filter results
curl -s https://yourdomain.com/api/models | jq '.data[] | select(.owned_by | contains("openai"))'

# Get model count
curl -s https://yourdomain.com/api/models | jq '.total'

# Find cheapest input pricing
curl -s https://yourdomain.com/api/models | jq '.data | min_by(.pricing.input_tokens_cost_per_million)'
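The capability and pricing filters above can also be combined. As a sketch, a small helper (our own, not part of the API) that picks the cheapest model satisfying a given capability flag, operating on the `data` array returned by the endpoint:

```python
def cheapest_with(models, capability):
    """Among models whose capabilities include the given flag, return the
    one with the lowest input-token cost, or None if none match."""
    matching = [m for m in models if m["capabilities"].get(capability)]
    if not matching:
        return None
    return min(matching, key=lambda m: m["pricing"]["input_tokens_cost_per_million"])
```

For example, `cheapest_with(data, "supports_vision")` selects the least expensive vision-capable model.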

Error Responses

If the API encounters an error, it returns a response with success: false:

{
  "success": false,
  "error": "Failed to fetch models: Service unavailable",
  "data": [],
  "total": 0
}
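Because errors are signaled in the response body via success: false rather than solely by HTTP status, callers should check the flag explicitly. A minimal sketch (the exception type below is our own, not part of the API):

```python
class ModelsAPIError(Exception):
    """Raised when the Models API reports success: false."""

def unwrap_models(payload: dict) -> list:
    """Return the `data` array, or raise if the API reported an error."""
    if not payload.get("success"):
        raise ModelsAPIError(payload.get("error", "Unknown error"))
    return payload["data"]
```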

Rate Limiting

This endpoint is not rate limited, since it serves data cached from the MegaLLM API rather than hitting the upstream service on every request.

Use Cases

This API endpoint is useful for:

  • Dynamic Model Selection: Programmatically choose models based on capabilities and pricing
  • Documentation Tools: Build dynamic documentation that stays up-to-date
  • Price Comparison: Compare costs across different models and providers
  • Feature Discovery: Find models with specific capabilities like vision or function calling
  • Integration Testing: Validate available models before making API calls
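For the integration-testing use case, a hypothetical pre-flight check might confirm that a model id exists in the catalog before issuing a request against it:

```python
def validate_model_id(models, model_id):
    """True if model_id appears in the catalog's `data` array.
    Note: match on `id`, not `display_name`."""
    return any(m["id"] == model_id for m in models)
```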

Live Data: This endpoint fetches real-time data from the MegaLLM API, so the available models and pricing may change over time.

Model IDs: Always use the id field (not display_name) when making actual API calls to the MegaLLM service.