Extract

Create Extract Job
extract.create(**kwargs: ExtractCreateParams) -> ExtractV2Job
POST /api/v2/extract
List Extract Jobs
extract.list(**kwargs: ExtractListParams) -> SyncPaginatedCursor[ExtractV2Job]
GET /api/v2/extract
Get Extract Job
extract.get(job_id: str, **kwargs: ExtractGetParams) -> ExtractV2Job
GET /api/v2/extract/{job_id}
Delete Extract Job
extract.delete(job_id: str, **kwargs: ExtractDeleteParams) -> object
DELETE /api/v2/extract/{job_id}
Validate Extraction Schema
extract.validate_schema(**kwargs: ExtractValidateSchemaParams) -> ExtractV2SchemaValidateResponse
POST /api/v2/extract/schema/validation
Generate Extraction Schema
extract.generate_schema(**kwargs: ExtractGenerateSchemaParams) -> ExtractGenerateSchemaResponse
POST /api/v2/extract/schema/generate
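A typical workflow is to create a job, then poll until it reaches a terminal status. A minimal sketch, assuming a client object whose `extract.get(job_id)` returns a job with a `status` attribute (the client shape here is illustrative, not the SDK's exact surface):

```python
import time

def wait_for_extract_job(client, job_id, poll_interval=2.0, timeout=300.0):
    """Poll the extract job until it reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = client.extract.get(job_id)
        # Terminal statuses per the ExtractV2Job model below.
        if job.status in ("COMPLETED", "FAILED", "CANCELLED"):
            return job
        time.sleep(poll_interval)
    raise TimeoutError(f"extract job {job_id} did not finish within {timeout}s")
```

For long-running jobs, the webhook fields on ExtractV2JobCreate avoid polling entirely.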
Models
class ExtractConfiguration:

Extract configuration combining parse and extract settings.

data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]

JSON Schema defining the fields to extract. Validate it with the schema validation endpoint (POST /api/v2/extract/schema/validation) first.

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
cite_sources: Optional[bool]

Include citations in results

confidence_scores: Optional[bool]

Include confidence scores in results

extract_version: Optional[str]

Extract algorithm version. Use 'latest' or a date string.

extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]

Granularity of extraction:

  • per_doc — one object per document
  • per_page — one object per page
  • per_table_row — one object per table row

Accepts one of the following:
"per_doc"
"per_page"
"per_table_row"
lang: Optional[str]

ISO 639-1 language code for the document

max_pages: Optional[int]

Maximum number of pages to process. Omit for no limit.

minimum: 1
parse_config_id: Optional[str]

Saved parse configuration ID to control how the document is parsed before extraction

parse_tier: Optional[str]

Parse tier to use before extraction (fast, cost_effective, or agentic)

system_prompt: Optional[str]

Custom system prompt to guide extraction behavior

target_pages: Optional[str]

Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.

tier: Optional[Literal["cost_effective", "agentic"]]

Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)

Accepts one of the following:
"cost_effective"
"agentic"
class ExtractJobMetadata:

Extraction metadata.

field_metadata: Optional[ExtractedFieldMetadata]

Metadata for extracted fields including document, page, and row level info.

parse_job_id: Optional[str]

Reference to the ParseJob ID used for parsing

parse_tier: Optional[str]

Parse tier used for parsing the document

class ExtractJobUsage:

Extraction usage metrics.

num_document_tokens: Optional[int]

Number of document tokens

num_output_tokens: Optional[int]

Number of output tokens

num_pages_extracted: Optional[int]

Number of pages extracted
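Since tier pricing is per page (5 credits for cost_effective, 15 for agentic), the usage metrics give a rough credit estimate. A sketch, treating the usage object as a plain dict:

```python
CREDITS_PER_PAGE = {"cost_effective": 5, "agentic": 15}

def estimated_credits(usage, tier="cost_effective"):
    """Estimate credit cost from num_pages_extracted at the tier's per-page rate."""
    pages = usage.get("num_pages_extracted") or 0
    return pages * CREDITS_PER_PAGE[tier]
```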

class ExtractV2Job:

An extraction job.

id: str

Unique job identifier (job_id)

created_at: datetime

Creation timestamp

format: date-time
document_input_value: str

File ID or parse job ID that was extracted

project_id: str

Project this job belongs to

status: str

Current job status.

  • PENDING — queued, not yet started
  • RUNNING — actively processing
  • COMPLETED — finished successfully
  • FAILED — terminated with an error
  • CANCELLED — cancelled by user
updated_at: datetime

Last update timestamp

format: date-time
configuration: Optional[ExtractConfiguration]

Extract configuration combining parse and extract settings.

configuration_id: Optional[str]

Saved extract configuration ID used for this job, if any

error_message: Optional[str]

Error details when status is FAILED

extract_metadata: Optional[ExtractJobMetadata]

Extraction metadata.

extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]]

Extracted data conforming to the data_schema. Returns a single object for per_doc, or an array for per_page / per_table_row.

Accepts one of the following:
Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
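Because extract_result is a single object for per_doc but an array for per_page and per_table_row, downstream code is simpler if it normalizes both shapes. A sketch, treating the job as a plain dict:

```python
def extract_rows(job):
    """Return extract_result as a list of objects regardless of extraction_target."""
    result = job.get("extract_result")
    if result is None:
        return []                # job not finished, or nothing extracted
    if isinstance(result, dict):
        return [result]          # per_doc: wrap the single object
    return list(result)          # per_page / per_table_row: already a list
```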
metadata: Optional[Metadata]

Job-level metadata.

usage: Optional[ExtractJobUsage]

Extraction usage metrics.

class ExtractV2JobCreate:

Request to create an extraction job. Provide configuration_id or inline configuration.

document_input_value: str

File ID or Parse Job ID to extract from

maxLength: 200
configuration: Optional[ExtractConfiguration]

Extract configuration combining parse and extract settings.

configuration_id: Optional[str]

Saved extract configuration ID (mutually exclusive with configuration)

webhook_configurations: Optional[List[WebhookConfiguration]]

Outbound webhook endpoints to notify on job status changes

webhook_events: Optional[List[Literal["extract.pending", "extract.success", "extract.error", 14 more]]]

Events to subscribe to (e.g. 'parse.success', 'extract.error'). If null, all events are delivered.

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.running"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers: Optional[Dict[str, str]]

Custom HTTP headers sent with each webhook request (e.g. auth tokens)

webhook_output_format: Optional[str]

Response format sent to the webhook: 'string' (default) or 'json'

webhook_url: Optional[str]

URL to receive webhook POST notifications
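A plausible create payload wiring up webhooks, with illustrative IDs and URLs (a request must set configuration_id or an inline configuration, not both):

```python
job_request = {
    "document_input_value": "file_abc123",    # file ID or parse job ID (max length 200)
    "configuration_id": "cfg_invoice_v1",     # mutually exclusive with "configuration"
    "webhook_url": "https://example.com/hooks/extract",
    "webhook_events": ["extract.success", "extract.error"],  # null would mean all events
    "webhook_headers": {"Authorization": "Bearer <token>"},
    "webhook_output_format": "json",          # default is "string"
}
```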

class ExtractV2JobQueryResponse:

Paginated list of extraction jobs.

items: List[ExtractV2Job]

The list of items.

next_page_token: Optional[str]

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

total_size: Optional[int]

The total number of items available. Only populated when specifically requested; the value may be an estimate and should be used for display purposes only.
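To walk every page, follow next_page_token until it is omitted. A sketch assuming the list call accepts a page_token parameter and returns an object with items and next_page_token (the parameter name is illustrative):

```python
def list_all_extract_jobs(client):
    """Accumulate jobs across pages by chasing next_page_token."""
    jobs, page_token = [], None
    while True:
        page = client.extract.list(page_token=page_token)
        jobs.extend(page.items)
        page_token = page.next_page_token
        if not page_token:       # omitted/None token means no further pages
            return jobs
```

In practice, SyncPaginatedCursor may also support direct iteration; this shows the underlying token protocol.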

class ExtractV2SchemaGenerateRequest:

Request schema for generating an extraction schema.

data_schema: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]

Optional schema to validate, refine, or extend

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
file_id: Optional[str]

Optional file ID to analyze for schema generation

name: Optional[str]

Name for the generated configuration (auto-generated if omitted)

maxLength: 255
prompt: Optional[str]

Natural language description of the data structure to extract
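A generation request can start from just a natural-language prompt, optionally anchored to a sample file; all values below are illustrative:

```python
generate_request = {
    "prompt": (
        "Extract the vendor name, invoice date, and total amount, "
        "plus a list of line items with description and amount."
    ),
    "file_id": "file_abc123",     # optional sample document to analyze
    "name": "invoice-schema-v1",  # auto-generated if omitted (max length 255)
}
```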

class ExtractV2SchemaValidateRequest:

Request schema for validating an extraction schema.

data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]

JSON Schema to validate for use with extract jobs

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
class ExtractV2SchemaValidateResponse:

Response schema for schema validation.

data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]

Validated JSON Schema, ready for use in extract jobs

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
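Validating before creating jobs catches schema problems early. A sketch, assuming the client's validate_schema returns an object with a data_schema attribute (shapes illustrative):

```python
def validated_schema(client, data_schema):
    """Return the normalized schema from the validation endpoint, ready for jobs."""
    resp = client.extract.validate_schema(data_schema=data_schema)
    return resp.data_schema
```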
class ExtractedFieldMetadata:

Metadata for extracted fields including document, page, and row level info.

document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]

Document-level metadata (citations, confidence) keyed by field name

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]

Per-page metadata when extraction_target is per_page

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]

Per-row metadata when extraction_target is per_table_row

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
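When confidence_scores is enabled, document-level metadata is keyed by field name. A lookup helper, treating the metadata as plain dicts and assuming each entry carries a "confidence" key (that entry shape is an assumption, not documented above; check real responses):

```python
def field_confidence(field_metadata, field_name):
    """Fetch a field's confidence from document_metadata, or None if absent."""
    doc_meta = (field_metadata or {}).get("document_metadata") or {}
    entry = doc_meta.get(field_name)
    if isinstance(entry, dict):
        return entry.get("confidence")   # assumed key name
    return None
```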