Morphik uses the morphik.toml configuration file to control all aspects of the system.
## Model Configuration
Morphik uses LiteLLM to route to 100+ LLM providers with a unified interface. This means you can use models from OpenAI, Anthropic, Google, AWS Bedrock, Azure, Hugging Face, and many more - all with the same simple configuration format.
### Example Configurations
In your morphik.toml, define models in the LiteLLM format:
```toml
[registered_models]
# OpenAI
openai_gpt4 = { model_name = "gpt-4" }
openai_gpt4_turbo = { model_name = "gpt-4-0125-preview" }
openai_embedding = { model_name = "text-embedding-3-small" }

# Anthropic
claude_3_opus = { model_name = "claude-3-opus-20240229" }
claude_3_sonnet = { model_name = "claude-3-sonnet-20240229" }

# Google
gemini_pro = { model_name = "gemini/gemini-pro" }
gemini_flash = { model_name = "gemini/gemini-1.5-flash" }

# Azure OpenAI
azure_gpt4 = { model_name = "azure/gpt-4", api_base = "YOUR_AZURE_URL", api_key = "YOUR_KEY" }

# AWS Bedrock
bedrock_claude = { model_name = "bedrock/anthropic.claude-v2" }

# ...and 100+ more providers
```
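For most hosted providers, LiteLLM-style routing reads credentials from environment variables rather than from the config file. The sketch below uses the conventional variable names for a few providers; the mapping and the `missing_credentials` helper are illustrative, not part of Morphik's API.

```python
import os

# Conventional environment variables for a few LiteLLM-routed providers.
# Treat this mapping as an assumption; check your provider's docs.
REQUIRED_ENV_VARS = {
    "gpt-4": "OPENAI_API_KEY",
    "claude-3-opus-20240229": "ANTHROPIC_API_KEY",
    "gemini/gemini-pro": "GEMINI_API_KEY",
}

def missing_credentials(model_names, env=None):
    """Return the names of required env vars that are not set."""
    env = os.environ if env is None else env
    needed = {REQUIRED_ENV_VARS[m] for m in model_names if m in REQUIRED_ENV_VARS}
    return sorted(v for v in needed if v not in env)

# With only an OpenAI key present, the Anthropic key is reported missing:
print(missing_credentials(["gpt-4", "claude-3-opus-20240229"],
                          env={"OPENAI_API_KEY": "sk-..."}))  # ['ANTHROPIC_API_KEY']
```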
Then reference these models throughout your configuration:
```toml
[completion]
model = "claude_3_opus"    # Use any registered model

[embedding]
model = "openai_embedding" # Use any registered embedding model
```
## Local LLMs
Morphik can also run entirely with local LLMs. We directly integrate with two major local LLM servers:
### Ollama
Ollama runs Llama, Mistral, Gemma, and other open models locally.
```toml
[registered_models]
# Ollama models
ollama_llama = { model_name = "ollama_chat/llama3.2", api_base = "http://localhost:11434" }
ollama_qwen_vision = { model_name = "ollama_chat/qwen2.5vl:latest", api_base = "http://localhost:11434", vision = true }
ollama_embedding = { model_name = "ollama/nomic-embed-text", api_base = "http://localhost:11434" }
```
### 🍋 Lemonade
Lemonade Server is a local LLM server optimized for AMD GPUs and NPUs.
```toml
[registered_models]
# Lemonade models
lemonade_qwen = { model_name = "openai/Qwen2.5-VL-7B-Instruct-GGUF", api_base = "http://localhost:8020/api/v1", vision = true }
lemonade_embedding = { model_name = "openai/nomic-embed-text-v1-GGUF", api_base = "http://localhost:8020/api/v1" }
```
## Docker Deployments
When running Morphik in Docker:
- Local services on the host: use `http://host.docker.internal:PORT`
- Both in Docker: use the container name (e.g., `http://ollama:11434`)
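The two bullets above reduce to a small decision rule. A sketch, with an illustrative helper and the default Ollama container name and port as assumptions:

```python
# Choose the api_base for a local model server depending on where Morphik
# and the server each run. Container name and port are illustrative defaults.
def api_base(morphik_in_docker: bool, server_in_docker: bool,
             container: str = "ollama", port: int = 11434) -> str:
    if morphik_in_docker and server_in_docker:
        return f"http://{container}:{port}"           # shared Docker network
    if morphik_in_docker:
        return f"http://host.docker.internal:{port}"  # server runs on the host
    return f"http://localhost:{port}"                 # nothing in Docker

print(api_base(True, True))   # http://ollama:11434
print(api_base(True, False))  # http://host.docker.internal:11434
```

Whatever this rule yields is the value to put in the `api_base` field of the corresponding `[registered_models]` entry.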
## Need Help?
- Join our Discord community
- Check GitHub for issues