## Quick Configuration
- Via the Settings UI
- Via `settings.json`
- Using environment variables
## Step-by-Step Setup

### 1. Open VSCode Settings
- Press `Ctrl+Shift+P` (Windows/Linux) or `Cmd+Shift+P` (Mac)
- Type: `Preferences: Open Settings (UI)`
- Search for: `Kilocode`
### 2. Configure API Provider
- API Provider: select `Custom`
- Provider Name: `MegaLLM`
- Base URL: `https://ai.megallm.io/v1`
- API Key: `sk-mega-your-api-key-here`
### 3. Select Default Model
- Default Model: `gpt-5` (or any supported model)
- Temperature: `0.3` (lower = more deterministic)
- Max Tokens: `500` (for completions)
### 4. Enable Features
- Enable AutoComplete
- Enable Inline Chat
- Enable Code Actions
- Enable Suggestions
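The steps above can also be applied in one place via `settings.json`. This is a sketch only: the `kilocode.*` setting names below are assumptions, so treat the Settings UI field names as the source of truth.

```json
{
  "kilocode.provider": "custom",
  "kilocode.providerName": "MegaLLM",
  "kilocode.baseUrl": "https://ai.megallm.io/v1",
  "kilocode.apiKey": "sk-mega-your-api-key-here",
  "kilocode.model": "gpt-5",
  "kilocode.temperature": 0.3,
  "kilocode.maxTokens": 500
}
```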
## Scenario Examples

### Scenario 1: First-Time Installation
Complete setup from scratch.

#### 1. Install the Kilocode Extension
- Open VSCode
- Go to Extensions: `Ctrl+Shift+X` / `Cmd+Shift+X`
- Search: `Kilocode`
- Click Install
- Reload the VSCode window
#### 2. Get a MegaLLM API Key
- Visit the MegaLLM Dashboard
- Navigate to the API Keys section
- Click Create New Key
- Copy the key (it starts with `sk-mega-`)
- Store it securely
#### 3. Configure the Extension
Open settings (`Ctrl+,` / `Cmd+,`) and add your MegaLLM provider settings (the base URL, API key, and default model from the Step-by-Step Setup above).
#### 4. Test the Configuration
- Create a new file: `test.js`
- Type a comment: `// function to calculate fibonacci`
- Press `Tab` to trigger completion
- You should see AI-generated code
### Scenario 2: Team Project Configuration
Set up Kilocode for an entire development team:
- `.vscode/settings.json` (committed to git): shared team defaults
- `README.md`: setup instructions for the team
- `.vscode/settings.local.json` (not committed): personal, local settings
- Reload VSCode and start coding!
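A sketch of the shared, committed `settings.json`: the API key is referenced through an environment variable so no secret lands in the repository. The `kilocode.*` setting names are assumptions; check the extension's settings reference.

```json
{
  "kilocode.provider": "custom",
  "kilocode.baseUrl": "https://ai.megallm.io/v1",
  "kilocode.apiKey": "${env:MEGALLM_API_KEY}",
  "kilocode.model": "gpt-5",
  "kilocode.temperature": 0.3
}
```

Each developer then only needs `MEGALLM_API_KEY` set in their own environment.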
### Verification
Type `// hello world function` and press `Tab`. You should see AI-generated code.
### Scenario 3: Project-Specific Model Selection
Use different models for different projects by giving each project its own `.vscode/settings.json`. A Python data science project, for example, can pin a different model than a web or documentation project.
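For instance, a Python data science project might pin a model suited to analysis (the choice follows the Model Selection Guide below; the setting names are assumptions):

```json
{
  "kilocode.model": "claude-sonnet-4",
  "kilocode.temperature": 0.2
}
```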
### Scenario 4: Multi-Model Workflow
Switch models dynamically based on the task:
- Morning (rapid prototyping): use the `fast` profile with GPT-4o-mini
- Afternoon (quality code): use the `quality` profile with Claude Opus
- Documentation: use the `creative` profile with a higher temperature

To switch profiles, press `Ctrl+Shift+P` / `Cmd+Shift+P`, run `Kilocode: Switch Model Profile`, and select `fast`, `quality`, or `creative`.
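One way the three profiles could be declared in `settings.json`; the `kilocode.profiles` key is purely hypothetical and shown for illustration only.

```json
{
  "kilocode.profiles": {
    "fast":     { "model": "gpt-4o-mini", "temperature": 0.2 },
    "quality":  { "model": "claude-opus-4-1-20250805", "temperature": 0.3 },
    "creative": { "model": "gpt-5", "temperature": 0.8 }
  }
}
```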
### Scenario 5: Migration from GitHub Copilot
Switching from Copilot to Kilocode with MegaLLM:

#### 1. Disable Copilot
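A minimal way to do this in `settings.json` (`github.copilot.enable` is Copilot's own setting), rather than uninstalling the extension:

```json
{
  "github.copilot.enable": { "*": false }
}
```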
#### 2. Install Kilocode
VSCode Extensions → search "Kilocode" → Install

#### 3. Configure MegaLLM
Follow the Step-by-Step Setup above (provider `Custom`, base URL `https://ai.megallm.io/v1`, your `sk-mega-` key).

#### 4. Compare the Experience
Benefits over Copilot:
- Access to multiple models (GPT, Claude, Gemini)
- Better pricing and no seat limits
- Inline chat for explanations
- Custom model selection per project
- Code actions beyond completion
## Configuration Options

### Complete Reference
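A sketch consolidating every option mentioned in this guide into one `settings.json`; the `kilocode.*` setting names are assumptions and should be checked against the extension's own settings reference.

```json
{
  "kilocode.provider": "custom",
  "kilocode.providerName": "MegaLLM",
  "kilocode.baseUrl": "https://ai.megallm.io/v1",
  "kilocode.apiKey": "${env:MEGALLM_API_KEY}",
  "kilocode.model": "gpt-5",
  "kilocode.temperature": 0.3,
  "kilocode.maxTokens": 500,
  "kilocode.autocomplete.enabled": true,
  "kilocode.inlineChat.enabled": true,
  "kilocode.codeActions.enabled": true
}
```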
### Model Selection Guide
| Task | Recommended Model | Reason |
|---|---|---|
| Code Completion | gpt-4o-mini | Fast, cost-effective |
| Complex Logic | claude-opus-4-1-20250805 | Superior reasoning |
| Web Development | gpt-5 | Excellent JS/TS/React |
| Data Science | claude-sonnet-4 | Strong analysis |
| Documentation | gpt-5 | Clear explanations |
| Algorithms | gemini-2.5-pro | Mathematical precision |
## Verification

### Test 1: Basic Completion
Type a comment (e.g. `// function to calculate fibonacci`) in a new file and press `Tab`; an AI-generated completion should appear.
### Test 2: Inline Chat
- Select a function
- Press `Ctrl+K` (or `Cmd+K` on Mac)
- Type: `Explain this function`
- You should see an explanation in the chat panel
### Test 3: Code Actions
- Right-click on code
- You should see "Kilocode Actions" in the context menu
- Options: Explain, Improve, Generate Tests, etc.
### Test 4: Status Bar
Check the bottom-right of VSCode:
- Should show: `Kilocode: Connected`
- Model name: `gpt-5` (or your selected model)
- Click to see connection details
## Troubleshooting

### Completions not appearing
Symptoms:
- No suggestions when typing
- Status bar shows "Disconnected"

Solutions:
1. Check your API key
2. Verify the configuration
3. Reload VSCode: `Ctrl+Shift+P` / `Cmd+Shift+P`, then run `Developer: Reload Window`
4. Check the extension is enabled: open the Extensions panel, search for Kilocode, and confirm it shows "Enabled"
5. Test the API manually
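A manual API check can rule out editor problems. This assumes the endpoint follows the OpenAI-compatible `/v1` convention; replace the placeholder key with your own before running.

```
curl -s https://ai.megallm.io/v1/models \
  -H "Authorization: Bearer sk-mega-your-api-key-here"
```

A JSON list of models means the key and endpoint work; a 401 points to the key, not the extension.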
### Wrong or poor-quality completions
Symptoms:
- Completions are incorrect
- Suggestions don't match your code style
- Irrelevant responses

Solutions:
1. Adjust the temperature
2. Try a different model
3. Increase the context
4. Add a project-specific prompt
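A possible `settings.json` tuning for this case; the setting names are assumptions, and `kilocode.contextLines` in particular is hypothetical.

```json
{
  "kilocode.temperature": 0.2,
  "kilocode.model": "claude-opus-4-1-20250805",
  "kilocode.contextLines": 100
}
```

Lowering temperature makes completions more deterministic, and the larger model trades speed for reasoning quality.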
### High latency / slow completions
Symptoms:
- Long wait for suggestions
- Timeout errors

Solutions:
1. Use a faster model
2. Reduce max tokens
3. Increase the debounce delay
4. Check your network
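A possible latency-oriented tuning, again with assumed setting names (`kilocode.debounceMs` is hypothetical):

```json
{
  "kilocode.model": "gpt-4o-mini",
  "kilocode.maxTokens": 200,
  "kilocode.debounceMs": 300
}
```

Shorter completions return sooner, and a longer debounce avoids firing a request on every keystroke.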
### API key not recognized
Symptoms:
- "Invalid API key" error
- 401 Unauthorized

Solutions:
1. Verify the key format:
   - Must start with `sk-mega-`
   - At least 60 characters
   - No spaces or quotes
2. Check the key is active:
   - Log in to the Dashboard
   - Go to API Keys
   - Verify the key is not revoked or expired
3. Test the key directly against the API
4. Regenerate if needed:
   - Dashboard → API Keys → Create New
   - Update the configuration with the new key
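The format rules above can be checked locally before touching the API. This is a sketch: the `sk-mega-` prefix and 60-character minimum come from this guide, and the helper name is our own.

```python
def looks_like_megallm_key(key: str) -> bool:
    """Return True if `key` matches the key format documented in this guide."""
    if not key.startswith("sk-mega-"):
        return False
    if len(key) < 60:
        return False
    # Keys must contain no whitespace or quote characters
    # (a common paste error is copying the surrounding quotes).
    return not any(c in key for c in " \t'\"")

print(looks_like_megallm_key("sk-mega-" + "a" * 56))  # valid-looking key -> True
print(looks_like_megallm_key("sk-mega-short"))        # too short -> False
```

A format check passing does not prove the key is active; that still requires a request to the API.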
### Conflicts with other extensions
Symptoms:
- Kilocode completions conflict with other AI tools
- Multiple suggestions appearing

Solutions:
1. Disable conflicting extensions
2. Adjust the trigger keys
3. Set the completion priority
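A sketch of a `settings.json` that silences the most common conflicting source: `github.copilot.enable` and `editor.inlineSuggest.enabled` are real VSCode/Copilot settings, while `kilocode.suggestionPriority` is an assumption.

```json
{
  "github.copilot.enable": { "*": false },
  "editor.inlineSuggest.enabled": true,
  "kilocode.suggestionPriority": "high"
}
```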
## Best Practices

- **Use environment variables:** keep API keys in env vars and reference them with `${env:MEGALLM_API_KEY}`
- **Project-specific models:** configure different models in `.vscode/settings.json` per project
- **Lower temperature for code:** use 0.2-0.4 for code generation, 0.7-0.9 for documentation
- **Monitor token usage:** check the Dashboard to optimize costs
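The environment-variable practice looks like this in a bash/zsh shell; `MEGALLM_API_KEY` is the variable name this guide uses with `${env:...}` substitution.

```shell
# Add to ~/.bashrc or ~/.zshrc so VSCode inherits the variable.
export MEGALLM_API_KEY="sk-mega-your-api-key-here"

# Verify it is set (print only the prefix, never the whole key):
echo "${MEGALLM_API_KEY:0:8}"
```

Launch VSCode from that shell (or log out and back in) so the extension sees the variable.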
## Advanced Tips

### Custom Keyboard Shortcuts
Add to `keybindings.json`:
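A sketch of two bindings; the `kilocode.*` command IDs are assumptions, so check the extension's contributed commands (`Ctrl+Shift+P` → `Preferences: Open Keyboard Shortcuts (JSON)`).

```json
[
  {
    "key": "ctrl+alt+space",
    "command": "kilocode.triggerCompletion",
    "when": "editorTextFocus"
  },
  {
    "key": "ctrl+alt+i",
    "command": "kilocode.openInlineChat",
    "when": "editorTextFocus"
  }
]
```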
### Workspace-Specific Prompts
Add to `.vscode/settings.json`:
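A hypothetical example; the `kilocode.systemPrompt` key is an assumption, not a documented setting.

```json
{
  "kilocode.systemPrompt": "You are assisting on a TypeScript monorepo. Prefer functional style, strict typing, and no default exports."
}
```

A workspace-level prompt like this steers completions toward the project's conventions without changing user-level settings.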

