POST /query
JavaScript
import Morphik from 'morphik';

const client = new Morphik({
  apiKey: 'My API Key',
});

const completionResponse = await client.query.generateCompletion({ query: 'x' });

console.log(completionResponse.completion);
Example response:
{
  "completion": "<string>",
  "usage": {},
  "finish_reason": "<string>",
  "sources": [],
  "metadata": {}
}

Headers

authorization
string

Body

application/json

Request model for completion generation

query
string
required
Minimum length: 1
filters
object | null
k
integer
default:4
Required range: x > 0
min_score
number
default:0
use_reranking
boolean | null
use_colpali
boolean | null
padding
integer
default:0

Number of additional chunks/pages to retrieve before and after matched chunks (ColPali only)

Required range: x >= 0
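
Taken together, filters, k, min_score, use_reranking, use_colpali, and padding control which chunks are retrieved to ground the completion. A minimal sketch combining several of them (parameter names mirror the request body fields above; the filter values are illustrative):

JavaScript
import Morphik from 'morphik';

const client = new Morphik({
  apiKey: 'My API Key',
});

// Retrieve up to 8 chunks scoring at least 0.5, restricted by an illustrative
// metadata filter, using ColPali with one extra page of padding per match.
const response = await client.query.generateCompletion({
  query: 'What drove the Q3 revenue increase?',
  filters: { department: 'finance' }, // illustrative metadata filter
  k: 8,            // default: 4
  min_score: 0.5,  // default: 0
  use_colpali: true,
  padding: 1,      // ColPali only
});

console.log(response.completion);
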
graph_name
string | null

Name of the graph to use for knowledge graph-enhanced retrieval

hop_depth
integer | null
default:1

Number of relationship hops to traverse in the graph

Required range: 1 <= x <= 3
include_paths
boolean | null
default:false

Whether to include relationship paths in the response
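
A sketch of a knowledge-graph-enhanced query using graph_name, hop_depth, and include_paths. The graph name is a placeholder for a graph you have already created, and client is the Morphik instance constructed in the example above:

JavaScript
// `client` is the Morphik instance from the example at the top of the page.
const response = await client.query.generateCompletion({
  query: 'How does the billing service depend on the auth service?',
  graph_name: 'service-architecture', // placeholder: an existing graph
  hop_depth: 2,                       // traverse up to 2 relationship hops (1-3)
  include_paths: true,                // include relationship paths in the response
});

console.log(response.completion);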

folder_name

Optional folder scope for the operation. Accepts a single folder name or a list of folder names.

end_user_id
string | null

Optional end-user scope for the operation
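
folder_name and end_user_id can be combined to scope retrieval, for example per tenant or per end user. A sketch with placeholder identifiers:

JavaScript
const response = await client.query.generateCompletion({
  query: 'Summarize the documents I uploaded this week',
  folder_name: ['contracts', 'invoices'], // a single name or a list of names
  end_user_id: 'user_1234',               // placeholder end-user identifier
});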

max_tokens
integer | null
temperature
number | null
prompt_overrides
object | null

Optional customizations for entity extraction, resolution, and query prompts. Container for query-related prompt overrides.

Use this class when customizing prompts for query operations, which may include customizations for entity extraction, entity resolution, and the query/response generation itself.

This is the most feature-complete override class, supporting all customization types.

Available customizations:

  • entity_extraction: Customize how entities are identified in text
  • entity_resolution: Customize how entity variants are grouped
  • query: Customize response generation style, format, and tone

Each type has its own required placeholders. See the specific class documentation for details and examples.
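
A sketch of a query-prompt override. The top-level keys follow the customization types listed above, but the inner field names and placeholders shown here are assumptions; check the specific override class documentation for the exact shape:

JavaScript
const response = await client.query.generateCompletion({
  query: 'List the parties named in these agreements',
  prompt_overrides: {
    // Assumed shape: a template string per customization type.
    query: {
      prompt_template:
        'Answer in formal legal prose, citing sources.\n\nContext: {context}\n\nQuestion: {question}',
    },
  },
});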

schema

Schema for structured output; can be a Pydantic model or a JSON schema dict
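
In the JavaScript client a JSON schema dict is the natural choice (Pydantic models apply to the Python SDK). A sketch with an illustrative schema:

JavaScript
const response = await client.query.generateCompletion({
  query: 'Extract the invoice number and total amount',
  schema: {
    type: 'object',
    properties: {
      invoice_number: { type: 'string' },
      total: { type: 'number' },
    },
    required: ['invoice_number', 'total'],
  },
});

console.log(response.completion); // expected to conform to the schema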

chat_id
string | null

Optional chat session ID for persisting conversation history
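
Passing the same chat_id on successive calls threads them into one persisted conversation. A minimal sketch:

JavaScript
import { randomUUID } from 'node:crypto';

const chatId = randomUUID(); // any stable identifier works

await client.query.generateCompletion({ query: 'Who signed the 2023 lease?', chat_id: chatId });
const followUp = await client.query.generateCompletion({ query: 'When does it expire?', chat_id: chatId });

console.log(followUp.completion);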

stream_response
boolean | null
default:false

Whether to stream the response back in chunks
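
When stream_response is true the endpoint returns the completion in chunks. A sketch using fetch directly; the base URL and Bearer scheme are assumptions for your deployment, and the exact chunk framing may differ:

JavaScript
const res = await fetch('https://api.morphik.ai/query', { // placeholder base URL
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    authorization: `Bearer ${process.env.MORPHIK_API_KEY}`, // assumed Bearer scheme
  },
  body: JSON.stringify({ query: 'Summarize the onboarding guide', stream_response: true }),
});

// Print raw chunks as they arrive.
const reader = res.body.getReader();
const decoder = new TextDecoder();
for (;;) {
  const { value, done } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true }));
}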

llm_config
object | null

LiteLLM-compatible model configuration (e.g., model name, API key, base URL)
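
A sketch of routing generation through a specific model with llm_config; the keys shown (model, api_key, api_base) follow LiteLLM conventions and the values are placeholders:

JavaScript
const response = await client.query.generateCompletion({
  query: 'Draft a two-sentence summary of the incident report',
  llm_config: {
    model: 'gpt-4o-mini',                // any LiteLLM-supported model name
    api_key: process.env.OPENAI_API_KEY, // key for that provider
    // api_base: 'https://my-proxy.example.com/v1', // optional base URL override
  },
});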

Response

Successful Response

Response from completion generation

completion
required
usage
object
required
finish_reason
string | null
sources
ChunkSource · object[]
metadata
object | null
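
A short sketch reading these fields from the completionResponse returned in the example at the top of the page (field names mirror the JSON shown there):

JavaScript
const { completion, usage, finish_reason, sources } = completionResponse;

console.log(completion);    // the generated answer
console.log(finish_reason); // why generation stopped
console.log(usage);         // token accounting reported by the model
for (const source of sources ?? []) {
  console.log(source);      // each entry is a ChunkSource object
}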