Prerequisites

  • MegaLLM API key (Get one here)
  • Python 3.7+ or Node.js 14+ installed
  • Basic programming knowledge

Step 1: Create Project

# Create directory
mkdir my-first-ai-app
cd my-first-ai-app

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install openai python-dotenv

Step 2: Store API Key

Create a .env file:
MEGALLM_API_KEY=your-api-key-here
Add .env to your .gitignore so you never commit your API key!

Step 3: Basic Request

Create app.py:
import os
from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables
load_dotenv()

# Initialize client
client = OpenAI(
    base_url="https://ai.megallm.io/v1",
    api_key=os.getenv("MEGALLM_API_KEY")
)

# Make a request
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "What is MegaLLM?"}
    ]
)

# Print response
print(response.choices[0].message.content)
Run it:
python app.py

Step 4: Add Conversation Context

Let’s make it conversational:
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI(
    base_url="https://ai.megallm.io/v1",
    api_key=os.getenv("MEGALLM_API_KEY")
)

# Conversation history
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"}
]

# First response
response = client.chat.completions.create(
    model="gpt-4",
    messages=messages
)

# Add to history
assistant_message = response.choices[0].message.content
messages.append({"role": "assistant", "content": assistant_message})
print(f"Assistant: {assistant_message}\n")

# Follow-up question
messages.append({"role": "user", "content": "What are its key features?"})

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages
)

print(f"Assistant: {response.choices[0].message.content}")
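The pattern above — append the user message, call the API, append the assistant's reply — repeats on every turn, so it is worth factoring into a helper. Here is a minimal sketch; `chat_turn` is our own name, not part of the OpenAI SDK:

```python
def chat_turn(client, messages, user_text, model="gpt-4"):
    """Append a user message, request a completion, and record the reply.

    Mutates `messages` in place so the full history is sent on the
    next call, and returns the assistant's text.
    """
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model=model, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```

With this helper, each follow-up question becomes a one-liner: `print(chat_turn(client, messages, "What are its key features?"))`.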

Step 5: Try Different Models

Switch models by changing the model parameter:
models = ["gpt-4", "claude-3.5-sonnet", "gemini-2.5-pro"]

for model in models:
    print(f"\n--- Using {model} ---")
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": "Explain quantum computing in one sentence."}
        ]
    )
    print(response.choices[0].message.content)

Step 6: Add Parameters

Customize the response with parameters:
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Write a short poem about AI"}
    ],
    temperature=0.9,       # Higher = more creative
    max_tokens=100,        # Limit response length
    top_p=0.95,            # Nucleus sampling
    frequency_penalty=0.5  # Reduce repetition
)
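To build intuition for what `temperature` does: the model divides its raw scores (logits) by the temperature before converting them to probabilities, so low values sharpen the distribution toward the top choice and high values flatten it. A small self-contained sketch of that scaling (not MegaLLM code, just the underlying math):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize to probabilities.

    Lower temperature -> sharper distribution (more deterministic output);
    higher temperature -> flatter distribution (more varied output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))  # sharp: mass on the top logit
print(softmax_with_temperature(logits, 2.0))  # flat: mass spread out
```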

Step 7: Error Handling

Add proper error handling:
from openai import OpenAI, AuthenticationError, RateLimitError

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

except AuthenticationError:
    print("Invalid API key")
except RateLimitError:
    print("Rate limit exceeded")
except Exception as e:
    print(f"Error: {e}")
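Rate-limit errors are usually transient, so instead of just printing them you can retry with exponential backoff. A generic sketch (the helper name `with_retries` and its defaults are our own; pass `retry_on=(RateLimitError,)` when using it with the OpenAI SDK):

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exception types.

    Waits base_delay * 2**attempt seconds between attempts, plus a
    little random jitter, and re-raises after the final attempt.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Usage might look like `with_retries(lambda: client.chat.completions.create(...), retry_on=(RateLimitError,))`.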

Step 8: Interactive Chat

Build a simple chatbot:
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI(
    base_url="https://ai.megallm.io/v1",
    api_key=os.getenv("MEGALLM_API_KEY")
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."}
]

print("Chat with AI (type 'quit' to exit)\n")

while True:
    user_input = input("You: ")

    if user_input.lower() == 'quit':
        break

    messages.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )

    assistant_message = response.choices[0].message.content
    messages.append({"role": "assistant", "content": assistant_message})

    print(f"AI: {assistant_message}\n")
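One caveat with this loop: `messages` grows without bound, so a long chat will eventually exceed the model's context window (and every token in the history is billed on each request). A simple fix is to trim the history before each call while keeping the system prompt. A minimal sketch (`trim_history` is our own helper, not an SDK function):

```python
def trim_history(messages, max_turns=10):
    """Keep the system prompt plus only the most recent messages,
    so the context sent to the API stays bounded."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]
```

In the chat loop, pass `messages=trim_history(messages)` to `client.chat.completions.create` instead of the full list.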

Understanding the Response

The API returns a rich response object:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
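The fields you will use most often are the message content, the finish reason, and the token counts. Using a plain dict shaped like the JSON above (the Python SDK returns typed objects, so there you would use attribute access, e.g. `response.choices[0].message.content` and `response.usage.total_tokens`):

```python
# The same response, as a plain dict
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "gpt-4",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can I help you today?",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

message = response["choices"][0]["message"]["content"]      # the reply text
finish_reason = response["choices"][0]["finish_reason"]     # "stop", "length", ...
total_tokens = response["usage"]["total_tokens"]            # for cost tracking

print(message)
print(f"finish_reason={finish_reason}, tokens used={total_tokens}")
```

A `finish_reason` of "length" means the reply was cut off by `max_tokens`; "stop" means the model finished naturally.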

Troubleshooting

Module not found
  • Make sure you installed the SDK:
pip install openai  # Python
npm install openai  # JavaScript

Authentication errors
  • Check that your API key is correct
  • Verify the .env file is in the same directory
  • Make sure you called load_dotenv() (Python) or dotenv.config() (JS)

Rate limit errors
  • You’re making too many requests; add delays between requests
  • Consider upgrading your plan

Slow responses
  • Try a faster model like gpt-3.5-turbo
  • Reduce max_tokens
  • Use streaming for better UX

Need Help?