query

Generate a completion using relevant chunks as context.

def query(
    query: str,
    filters: Optional[Dict[str, Any]] = None,
    k: int = 4,
    min_score: float = 0.0,
    max_tokens: Optional[int] = None,
    temperature: Optional[float] = None,
    use_colpali: bool = True,
) -> CompletionResponse

Parameters

  • query (str): The query text
  • filters (Dict[str, Any], optional): Metadata filters to restrict which documents are searched
  • k (int, optional): Number of chunks to use as context. Defaults to 4.
  • min_score (float, optional): Minimum similarity score a chunk must meet to be included. Defaults to 0.0.
  • max_tokens (int, optional): Maximum number of tokens in the completion
  • temperature (float, optional): Model temperature
  • use_colpali (bool, optional): Whether to use the ColPali-style embedding model when retrieving chunks for the completion (only applies to documents ingested with use_colpali=True). Defaults to True.
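
The interaction between k and min_score can be sketched in plain Python: chunks below the score threshold are dropped first, and at most k of the remainder are kept. This is an illustrative sketch with made-up scores, not the SDK's actual retrieval code.

```python
# Hypothetical sketch: how k and min_score might prune retrieved chunks.
# Scores here are illustrative, not from a real index.
def select_context(scored_chunks, k=4, min_score=0.0):
    """Keep at most k chunks whose similarity score meets min_score."""
    kept = [c for c in scored_chunks if c[1] >= min_score]
    kept.sort(key=lambda c: c[1], reverse=True)
    return kept[:k]

chunks = [("chunk-a", 0.91), ("chunk-b", 0.42), ("chunk-c", 0.77), ("chunk-d", 0.15)]
print(select_context(chunks, k=2, min_score=0.3))
# → [('chunk-a', 0.91), ('chunk-c', 0.77)]
```

Raising min_score trades recall for precision: with a high threshold you may receive fewer than k chunks of context.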

Returns

  • CompletionResponse: Response containing the completion and source information

Example

from databridge.sync import DataBridge

db = DataBridge()

response = db.query(
    "What are the key findings about customer satisfaction?",
    filters={"department": "research"},
    temperature=0.7
)

print(response.completion)

# Print the sources used for the completion
for source in response.sources:
    print(f"Document ID: {source.document_id}, Chunk: {source.chunk_number}, Score: {source.score}")

CompletionResponse Properties

The CompletionResponse object returned by this method has the following properties:

  • completion (str): The generated completion text
  • usage (Dict[str, int]): Token usage information
  • sources (List[ChunkSource]): Sources of chunks used in the completion
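
The response shape above can be mirrored with simple dataclasses for testing or mocking. These are hypothetical stand-ins whose field names follow the property list in this document, not the SDK's actual class definitions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical stand-ins mirroring the documented response shape.
@dataclass
class ChunkSource:
    document_id: str
    chunk_number: int
    score: float

@dataclass
class CompletionResponse:
    completion: str
    usage: Dict[str, int]
    sources: List[ChunkSource] = field(default_factory=list)

resp = CompletionResponse(
    completion="Satisfaction rose quarter over quarter.",
    usage={"prompt_tokens": 812, "completion_tokens": 64, "total_tokens": 876},
    sources=[ChunkSource(document_id="doc-123", chunk_number=0, score=0.91)],
)
print(resp.usage["total_tokens"])
# → 876
```

A mock like this lets you unit-test code that consumes query results without calling the service.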