Generate a completion using relevant chunks as context.
Request model for completion generation
Natural-language query used to retrieve relevant chunks or documents.
Base64-encoded image to use as the query for Morphik multimodal retrieval. Requires use_colpali=True. Mutually exclusive with 'query'.
Metadata filters supporting logical operators ($and/$or/$not/$nor) and field predicates ($eq/$ne/$gt/$gte/$lt/$lte/$in/$nin/$exists/$type/$regex/$contains).
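For illustration, a filter that combines logical operators with field predicates could look like the following sketch (the field names department, year, status, and reviewed are hypothetical):

filters = {
    "$and": [
        {"department": {"$eq": "finance"}},
        {"year": {"$gte": 2023}},
        {"$or": [
            {"status": {"$in": ["final", "approved"]}},
            {"reviewed": {"$exists": True}},
        ]},
    ]
}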
Maximum number of chunks or documents to return.
Minimum similarity score a result must meet before it is returned.
When provided, overrides the workspace reranking configuration for this request.
When provided, uses Morphik's fine-tuned ColPali-style embeddings (recommended to set True for high-quality retrieval).
How to return image chunks: base64 (default), url, or text (markdown format)
Number of additional chunks/pages to retrieve before and after matched chunks (ColPali only). Must be >= 0.
Optional folder scope. Accepts a folder path (e.g., '/Company/Reports') or a list of paths.
Folder scope depth. 0 or None = exact folder only, -1 = include all descendants, n > 0 = include descendants up to n levels deeper.
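As a sketch, scoping retrieval to '/Company/Reports' plus its immediate subfolders might look like this (the field names folder_name and folder_depth are assumed for illustration; verify against the request schema):

scope = {
    "folder_name": "/Company/Reports",  # exact folder path from the example above
    "folder_depth": 1,                  # include descendants one level deeper
}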
Optional end-user scope for the operation
Maximum number of tokens allowed in the generated completion.
Sampling temperature passed to the completion model (None uses provider default).
Optional customizations for entity extraction, resolution, and query prompts
Schema for structured output, can be a Pydantic model or JSON schema dict
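For example, the schema could be supplied as a Pydantic model or as an equivalent JSON schema dict; the model below is purely illustrative:

from pydantic import BaseModel

class Summary(BaseModel):
    title: str
    key_points: list[str]

# Hand-written JSON schema equivalent to the model above
summary_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "key_points": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "key_points"],
}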
Optional chat session ID for persisting conversation history
Whether to stream the response back in chunks
LiteLLM-compatible model configuration (e.g., model name, API key, base URL)
Whether to include inline citations with filename and page number in the response
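Putting several of the parameters above together, a completion request might be assembled as in the sketch below. The endpoint URL and the exact field names are assumptions inferred from the descriptions in this section, not a verified schema:

import requests

payload = {
    "query": "What were the Q3 revenue figures?",
    "filters": {"department": {"$eq": "finance"}},
    "k": 6,                      # maximum number of chunks to return
    "min_score": 0.2,            # minimum similarity score
    "use_colpali": True,         # use Morphik's fine-tuned ColPali-style embeddings
    "padding": 1,                # extra chunks/pages around each match (ColPali only)
    "max_tokens": 512,
    "temperature": 0.2,
    "stream_response": False,
    "inline_citations": True,    # include filename and page number citations
}

response = requests.post(
    "https://api.morphik.ai/query",              # assumed endpoint for illustration
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
print(response.json())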