Extract

Create Extract Job
client.extract.create(params: ExtractCreateParams { document_input_value, organization_id, project_id, 3 more }, options?: RequestOptions): ExtractV2Job { id, created_at, document_input_value, 9 more }
POST /api/v2/extract
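For example, a create call with an inline configuration might look like the sketch below. The field names (`document_input_value`, `configuration`, `data_schema`, `cite_sources`, `tier`) come from the reference; the invoice schema and the `client` construction are illustrative assumptions.

```typescript
// Sketch: build the body for POST /api/v2/extract. The invoice schema is a
// made-up illustration; only the field names follow the reference.
function buildCreateBody(fileId: string) {
  return {
    document_input_value: fileId, // file ID or parse job ID
    configuration: {
      data_schema: {
        type: "object",
        properties: {
          invoice_number: { type: "string" },
          total: { type: "number" },
        },
      },
      cite_sources: true,
      tier: "cost_effective", // 5 credits/page
    },
  };
}

// With the SDK (assuming a configured `client`):
// const job = await client.extract.create(buildCreateBody("file_abc123"));
```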
List Extract Jobs
client.extract.list(query?: ExtractListParams { configuration_id, created_at_on_or_after, created_at_on_or_before, 9 more }, options?: RequestOptions): PaginatedCursor<ExtractV2Job { id, created_at, document_input_value, 9 more }>
GET /api/v2/extract
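A listing can be filtered by configuration and creation time. The sketch below only assembles the query object; `configuration_id` and `created_at_on_or_after` are taken from ExtractListParams above, while the values are examples.

```typescript
// Sketch: assemble filters for GET /api/v2/extract.
function buildListQuery(configurationId: string, since: Date) {
  return {
    configuration_id: configurationId,
    created_at_on_or_after: since.toISOString(),
  };
}

// const page = await client.extract.list(buildListQuery("cfg_123", new Date("2024-01-01")));
```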
Get Extract Job
client.extract.get(jobID: string, query?: ExtractGetParams { expand, organization_id, project_id }, options?: RequestOptions): ExtractV2Job { id, created_at, document_input_value, 9 more }
GET /api/v2/extract/{job_id}
Delete Extract Job
client.extract.delete(jobID: string, params?: ExtractDeleteParams { organization_id, project_id }, options?: RequestOptions): ExtractDeleteResponse
DELETE /api/v2/extract/{job_id}
Validate Extraction Schema
client.extract.validateSchema(body: ExtractValidateSchemaParams { data_schema }, options?: RequestOptions): ExtractV2SchemaValidateResponse { data_schema }
POST /api/v2/extract/schema/validation
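Validating a schema before creating jobs catches structural problems early. The candidate schema below is illustrative; the endpoint echoes back a validated `data_schema` ready for use in extract jobs.

```typescript
// Sketch: a minimal data_schema to send to /api/v2/extract/schema/validation.
const candidateSchema = {
  type: "object",
  properties: {
    vendor_name: { type: "string" },
    line_items: {
      type: "array",
      items: { type: "object", properties: { description: { type: "string" } } },
    },
  },
};

// const { data_schema } = await client.extract.validateSchema({ data_schema: candidateSchema });
```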
Generate Extraction Schema
client.extract.generateSchema(params: ExtractGenerateSchemaParams { organization_id, project_id, data_schema, 3 more }, options?: RequestOptions): ExtractGenerateSchemaResponse { name, parameters }
POST /api/v2/extract/schema/generate
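Schema generation can start from a natural-language prompt, optionally grounded on a sample file. The field names below come from ExtractV2SchemaGenerateRequest further down; the values are made up.

```typescript
// Sketch: request body for POST /api/v2/extract/schema/generate.
const generateBody = {
  prompt: "Extract the vendor, invoice date, and a list of line items with amounts",
  file_id: "file_abc123", // optional: analyze a sample document
  name: "invoice-schema-v1", // auto-generated if omitted
};

// const { name, parameters } = await client.extract.generateSchema(generateBody);
```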
Models
ExtractConfiguration { data_schema, cite_sources, confidence_scores, 9 more }

Extract configuration combining parse and extract settings.

data_schema: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>

JSON Schema defining the fields to extract. Validate it with the /api/v2/extract/schema/validation endpoint first.

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
cite_sources?: boolean

Include citations in results

confidence_scores?: boolean

Include confidence scores in results

extract_version?: string

Extract algorithm version. Use 'latest' or a date string.

extraction_target?: "per_doc" | "per_page" | "per_table_row"

Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, and per_table_row returns one object per table row.

Accepts one of the following:
"per_doc"
"per_page"
"per_table_row"
lang?: string

ISO 639-1 language code for the document

max_pages?: number | null

Maximum number of pages to process. Omit for no limit.

minimum: 1
parse_config_id?: string | null

Saved parse configuration ID to control how the document is parsed before extraction

parse_tier?: string | null

Parse tier to use before extraction (fast, cost_effective, or agentic)

system_prompt?: string | null

Custom system prompt to guide extraction behavior

target_pages?: string | null

Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.

tier?: "cost_effective" | "agentic"

Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)

Accepts one of the following:
"cost_effective"
"agentic"
ExtractJobMetadata { field_metadata, parse_job_id, parse_tier }

Extraction metadata.

field_metadata?: ExtractedFieldMetadata { document_metadata, page_metadata, row_metadata } | null

Metadata for extracted fields including document-, page-, and row-level info. See ExtractedFieldMetadata below for field details.
parse_job_id?: string | null

Reference to the ParseJob ID used for parsing

parse_tier?: string | null

Parse tier used for parsing the document

ExtractJobUsage { num_document_tokens, num_output_tokens, num_pages_extracted }

Extraction usage metrics.

num_document_tokens?: number | null

Number of document tokens

num_output_tokens?: number | null

Number of output tokens

num_pages_extracted?: number | null

Number of pages extracted

ExtractV2Job { id, created_at, document_input_value, 9 more }

An extraction job.

id: string

Unique job identifier (job_id)

created_at: string

Creation timestamp

format: date-time
document_input_value: string

File ID or parse job ID that was extracted

project_id: string

Project this job belongs to

status: string

Current job status.

  • PENDING — queued, not yet started
  • RUNNING — actively processing
  • COMPLETED — finished successfully
  • FAILED — terminated with an error
  • CANCELLED — cancelled by user
updated_at: string

Last update timestamp

format: date-time
configuration?: ExtractConfiguration { data_schema, cite_sources, confidence_scores, 9 more } | null

Extract configuration combining parse and extract settings. See ExtractConfiguration above for field details.
configuration_id?: string | null

Saved extract configuration ID used for this job, if any

error_message?: string | null

Error details when status is FAILED

extract_metadata?: ExtractJobMetadata { field_metadata, parse_job_id, parse_tier } | null

Extraction metadata. See ExtractJobMetadata above for field details.

extract_result?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>> | null

Extracted data conforming to the data_schema. Returns a single object for per_doc, or an array for per_page / per_table_row.

Accepts one of the following:
Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>
Record<string, unknown>
Array<unknown>
string
number
boolean
Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>>
Record<string, unknown>
Array<unknown>
string
number
boolean
metadata?: Metadata | null

Job-level metadata.

usage?: ExtractJobUsage { num_document_tokens, num_output_tokens, num_pages_extracted } | null

Extraction usage metrics. See ExtractJobUsage above for field details.
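The status values and extract_result shapes above suggest a simple polling pattern. The sketch below assumes a configured `client`; the helpers are pure and only encode facts from the reference (the terminal states, and the object-vs-array result shape).

```typescript
// Terminal job states, per the status list above.
const TERMINAL = new Set(["COMPLETED", "FAILED", "CANCELLED"]);

function isTerminal(status: string): boolean {
  return TERMINAL.has(status);
}

// extract_result is one object for per_doc and an array for per_page /
// per_table_row; normalize both shapes to an array of records.
function asResults(result: unknown): unknown[] {
  if (result == null) return [];
  return Array.isArray(result) ? result : [result];
}

// Polling sketch (assumes a configured `client` and a known jobId):
// let job = await client.extract.get(jobId);
// while (!isTerminal(job.status)) {
//   await new Promise((r) => setTimeout(r, 2000));
//   job = await client.extract.get(jobId);
// }
// if (job.status === "FAILED") throw new Error(job.error_message ?? "extract failed");
// const rows = asResults(job.extract_result);
```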

ExtractV2JobCreate { document_input_value, configuration, configuration_id, webhook_configurations }

Request to create an extraction job. Provide configuration_id or inline configuration.

document_input_value: string

File ID or Parse Job ID to extract from

maxLength: 200
configuration?: ExtractConfiguration { data_schema, cite_sources, confidence_scores, 9 more } | null

Extract configuration combining parse and extract settings. See ExtractConfiguration above for field details.
configuration_id?: string | null

Saved extract configuration ID (mutually exclusive with configuration)

webhook_configurations?: Array<WebhookConfiguration> | null

Outbound webhook endpoints to notify on job status changes

webhook_events?: Array<"extract.pending" | "extract.success" | "extract.error" | 14 more> | null

Events to subscribe to (e.g. 'parse.success', 'extract.error'). If null, all events are delivered.

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.running"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers?: Record<string, string> | null

Custom HTTP headers sent with each webhook request (e.g. auth tokens)

webhook_output_format?: string | null

Response format sent to the webhook: 'string' (default) or 'json'

webhook_url?: string | null

URL to receive webhook POST notifications
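The webhook fields above can be combined on a create request as in this sketch. The event names are taken from the webhook_events list; the URL and header values are placeholders.

```typescript
// Sketch: webhook-related fields for an ExtractV2JobCreate request.
const webhookFields = {
  webhook_url: "https://example.com/hooks/extract",
  webhook_events: ["extract.success", "extract.error"], // omit (null) to receive all events
  webhook_headers: { Authorization: "Bearer <token>" }, // sent with each delivery
  webhook_output_format: "json", // 'string' is the default
};

// const job = await client.extract.create({ document_input_value: "file_abc123",
//   configuration_id: "cfg_123", ...webhookFields });
```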

ExtractV2JobQueryResponse { items, next_page_token, total_size }

Paginated list of extraction jobs.

items: Array<ExtractV2Job { id, created_at, document_input_value, 9 more }>

The list of items. See ExtractV2Job above for field details.

next_page_token?: string | null

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

total_size?: number | null

The total number of items available. This is only populated when specifically requested. The value may be an estimate and can be used for display purposes only.
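The next_page_token contract above implies a standard cursor loop: request pages until the token is absent. The sketch below injects the page fetcher so the loop itself stays pure; with the SDK the fetcher would wrap client.extract.list.

```typescript
// Drain every page of a cursor-paginated listing.
type Page<T> = { items: T[]; next_page_token?: string | null };

function* drainPages<T>(fetchPage: (token?: string) => Page<T>): Generator<T> {
  let token: string | undefined;
  do {
    const page = fetchPage(token);
    yield* page.items;
    // An omitted/null token means there are no subsequent pages.
    token = page.next_page_token ?? undefined;
  } while (token !== undefined);
}
```

In practice the fetcher is async (`await client.extract.list({ page_token })`), so the loop would use `for await`; the synchronous generator just keeps the cursor handling visible.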

ExtractV2SchemaGenerateRequest { data_schema, file_id, name, prompt }

Request schema for generating an extraction schema.

data_schema?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | null

Optional schema to validate, refine, or extend

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
file_id?: string | null

Optional file ID to analyze for schema generation

name?: string | null

Name for the generated configuration (auto-generated if omitted)

maxLength: 255
prompt?: string | null

Natural language description of the data structure to extract

ExtractV2SchemaValidateRequest { data_schema }

Request schema for validating an extraction schema.

data_schema: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>

JSON Schema to validate for use with extract jobs

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
ExtractV2SchemaValidateResponse { data_schema }

Response schema for schema validation.

data_schema: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>

Validated JSON Schema, ready for use in extract jobs

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
ExtractedFieldMetadata { document_metadata, page_metadata, row_metadata }

Metadata for extracted fields including document, page, and row level info.

document_metadata?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | null

Document-level metadata (citations, confidence) keyed by field name

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
page_metadata?: Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>> | null

Per-page metadata when extraction_target is per_page

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
row_metadata?: Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>> | null

Per-row metadata when extraction_target is per_table_row

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean