Extract
List Extract Jobs
Get Extract Job
Delete Extract Job
Validate Extraction Schema
Generate Extraction Schema
Models
class ExtractConfiguration: …
Extract configuration combining parse and extract settings.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extract algorithm version. Use 'latest' or a date string.
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, per_table_row returns one object per table row
lang: Optional[str]
ISO 639-1 language code for the document
max_pages: Optional[int]
Maximum number of pages to process. Omit for no limit.
parse_config_id: Optional[str]
Saved parse configuration ID to control how the document is parsed before extraction
parse_tier: Optional[str]
Parse tier to use before extraction (fast, cost_effective, or agentic)
system_prompt: Optional[str]
Custom system prompt to guide extraction behavior
target_pages: Optional[str]
Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.
tier: Optional[Literal["cost_effective", "agentic"]]
Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)
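The fields above can be assembled into a configuration payload as a plain dict. This is a sketch built only from the field list in this section; the invoice schema itself is an illustrative example, not part of the API, and should be checked with the schema validation endpoint before use.

```python
# Hypothetical data_schema: a JSON Schema describing the fields to extract.
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "amount": {"type": "number"},
                },
            },
        },
    },
    "required": ["invoice_number", "total"],
}

# ExtractConfiguration as a dict; only data_schema is required.
configuration = {
    "data_schema": invoice_schema,
    "extraction_target": "per_doc",   # one object for the whole document
    "tier": "cost_effective",         # 5 credits/page
    "cite_sources": True,
    "confidence_scores": True,
    "target_pages": "1-3,5",          # 1-based pages and ranges
}
```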
class ExtractJobMetadata: …
Extraction metadata.
field_metadata: Optional[ExtractedFieldMetadata]
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Document-level metadata (citations, confidence) keyed by field name
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-page metadata when extraction_target is per_page
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-row metadata when extraction_target is per_table_row
parse_job_id: Optional[str]
Reference to the ParseJob ID used for parsing
parse_tier: Optional[str]
Parse tier used for parsing the document
class ExtractJobUsage: …
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
class ExtractV2Job: …
An extraction job.
id: str
Unique job identifier (job_id)
created_at: datetime
Creation timestamp
document_input_value: str
File ID or parse job ID that was extracted
project_id: str
Project this job belongs to
status: str
Current job status.
PENDING: queued, not yet started
RUNNING: actively processing
COMPLETED: finished successfully
FAILED: terminated with an error
CANCELLED: cancelled by user
updated_at: datetime
Last update timestamp
configuration: Optional[ExtractConfiguration]
Extract configuration combining parse and extract settings.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extract algorithm version. Use 'latest' or a date string.
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, per_table_row returns one object per table row
lang: Optional[str]
ISO 639-1 language code for the document
max_pages: Optional[int]
Maximum number of pages to process. Omit for no limit.
parse_config_id: Optional[str]
Saved parse configuration ID to control how the document is parsed before extraction
parse_tier: Optional[str]
Parse tier to use before extraction (fast, cost_effective, or agentic)
system_prompt: Optional[str]
Custom system prompt to guide extraction behavior
target_pages: Optional[str]
Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.
tier: Optional[Literal["cost_effective", "agentic"]]
Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)
configuration_id: Optional[str]
Saved extract configuration ID used for this job, if any
error_message: Optional[str]
Error details when status is FAILED
extract_metadata: Optional[ExtractJobMetadata]
Extraction metadata.
field_metadata: Optional[ExtractedFieldMetadata]
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Document-level metadata (citations, confidence) keyed by field name
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-page metadata when extraction_target is per_page
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-row metadata when extraction_target is per_table_row
parse_job_id: Optional[str]
Reference to the ParseJob ID used for parsing
parse_tier: Optional[str]
Parse tier used for parsing the document
extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]]
Extracted data conforming to the data_schema. Returns a single object for per_doc, or an array for per_page / per_table_row.
metadata: Optional[Metadata]
Job-level metadata.
usage: Optional[ExtractJobUsage]
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
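Because a job moves through PENDING and RUNNING before reaching a terminal status, a common pattern is to poll "Get Extract Job" until it settles. The client call is not shown in this reference, so the sketch below takes any job-fetching callable; `fake_get_job` is a stub standing in for the real API call.

```python
import time
from typing import Callable, Dict

# Terminal statuses from the ExtractV2Job status field above.
TERMINAL = {"COMPLETED", "FAILED", "CANCELLED"}

def wait_for_job(get_job: Callable[[str], Dict], job_id: str,
                 interval: float = 2.0, timeout: float = 600.0) -> Dict:
    """Poll until the job reaches a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        job = get_job(job_id)
        if job["status"] in TERMINAL:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still {job['status']}")
        time.sleep(interval)

# Stub standing in for the real "Get Extract Job" call.
_states = iter(["PENDING", "RUNNING", "COMPLETED"])
def fake_get_job(job_id: str) -> Dict:
    return {"id": job_id, "status": next(_states),
            "extract_result": {"invoice_number": "INV-1"}}

job = wait_for_job(fake_get_job, "job_123", interval=0.0)
```

When the status is FAILED, check `error_message` before retrying; `extract_result` is only meaningful on COMPLETED.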
class ExtractV2JobCreate: …
Request to create an extraction job. Provide configuration_id or inline configuration.
document_input_value: str
File ID or Parse Job ID to extract from
configuration: Optional[ExtractConfiguration]
Extract configuration combining parse and extract settings.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extract algorithm version. Use 'latest' or a date string.
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, per_table_row returns one object per table row
lang: Optional[str]
ISO 639-1 language code for the document
max_pages: Optional[int]
Maximum number of pages to process. Omit for no limit.
parse_config_id: Optional[str]
Saved parse configuration ID to control how the document is parsed before extraction
parse_tier: Optional[str]
Parse tier to use before extraction (fast, cost_effective, or agentic)
system_prompt: Optional[str]
Custom system prompt to guide extraction behavior
target_pages: Optional[str]
Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.
tier: Optional[Literal["cost_effective", "agentic"]]
Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)
configuration_id: Optional[str]
Saved extract configuration ID (mutually exclusive with configuration)
webhook_configurations: Optional[List[WebhookConfiguration]]
Outbound webhook endpoints to notify on job status changes
webhook_events: Optional[List[Literal["extract.pending", "extract.success", "extract.error", 14 more]]]
Events to subscribe to (e.g. 'parse.success', 'extract.error'). If omitted, all events are delivered.
webhook_headers: Optional[Dict[str, str]]
Custom HTTP headers sent with each webhook request (e.g. auth tokens)
webhook_output_format: Optional[str]
Response format sent to the webhook: 'string' (default) or 'json'
webhook_url: Optional[str]
URL to receive webhook POST notifications
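A create request might look like the dict below. This is a sketch assembled from the field list above; the file ID, webhook URL, and token are placeholders, and `configuration` vs. `configuration_id` are mutually exclusive, so exactly one should be set.

```python
# ExtractV2JobCreate payload as a plain dict (all identifiers hypothetical).
create_request = {
    "document_input_value": "file_abc123",  # file ID or parse job ID
    "configuration": {
        "data_schema": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
        },
        "extraction_target": "per_doc",
    },
    "webhook_url": "https://example.com/hooks/extract",
    "webhook_events": ["extract.success", "extract.error"],
    "webhook_headers": {"Authorization": "Bearer <token>"},
    "webhook_output_format": "json",
}

# Guard against sending both configuration forms at once.
exclusive_ok = not ("configuration" in create_request
                    and "configuration_id" in create_request)
```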
class ExtractV2JobQueryResponse: …
Paginated list of extraction jobs.
The list of items.
id: str
Unique job identifier (job_id)
created_at: datetime
Creation timestamp
document_input_value: str
File ID or parse job ID that was extracted
project_id: str
Project this job belongs to
status: str
Current job status.
PENDING: queued, not yet started
RUNNING: actively processing
COMPLETED: finished successfully
FAILED: terminated with an error
CANCELLED: cancelled by user
updated_at: datetime
Last update timestamp
configuration: Optional[ExtractConfiguration]
Extract configuration combining parse and extract settings.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extract algorithm version. Use 'latest' or a date string.
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, per_table_row returns one object per table row
lang: Optional[str]
ISO 639-1 language code for the document
max_pages: Optional[int]
Maximum number of pages to process. Omit for no limit.
parse_config_id: Optional[str]
Saved parse configuration ID to control how the document is parsed before extraction
parse_tier: Optional[str]
Parse tier to use before extraction (fast, cost_effective, or agentic)
system_prompt: Optional[str]
Custom system prompt to guide extraction behavior
target_pages: Optional[str]
Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.
tier: Optional[Literal["cost_effective", "agentic"]]
Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)
configuration_id: Optional[str]
Saved extract configuration ID used for this job, if any
error_message: Optional[str]
Error details when status is FAILED
extract_metadata: Optional[ExtractJobMetadata]
Extraction metadata.
field_metadata: Optional[ExtractedFieldMetadata]
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Document-level metadata (citations, confidence) keyed by field name
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-page metadata when extraction_target is per_page
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-row metadata when extraction_target is per_table_row
parse_job_id: Optional[str]
Reference to the ParseJob ID used for parsing
parse_tier: Optional[str]
Parse tier used for parsing the document
extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]]
Extracted data conforming to the data_schema. Returns a single object for per_doc, or an array for per_page / per_table_row.
metadata: Optional[Metadata]
Job-level metadata.
usage: Optional[ExtractJobUsage]
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
next_page_token: Optional[str]
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.
total_size: Optional[int]
The total number of items available. Only populated when specifically requested; the value may be an estimate and should be used for display purposes only.
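Listing jobs is cursor-paginated: feed `next_page_token` back as `page_token` until it is absent. The listing call itself is not shown here, so this sketch accepts any page-fetching callable; the two-page stub exists only to exercise the loop.

```python
from typing import Callable, Dict, Iterator, Optional

def iter_jobs(list_page: Callable[[Optional[str]], Dict]) -> Iterator[Dict]:
    """Yield every job across pages by following next_page_token."""
    token: Optional[str] = None
    while True:
        page = list_page(token)
        yield from page["items"]
        token = page.get("next_page_token")
        if not token:          # omitted token means no subsequent pages
            return

# Stub standing in for the real "List Extract Jobs" call (two pages).
_pages = {
    None: {"items": [{"id": "job_1"}, {"id": "job_2"}],
           "next_page_token": "p2"},
    "p2": {"items": [{"id": "job_3"}]},
}
jobs = list(iter_jobs(lambda tok: _pages[tok]))
```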
class ExtractV2SchemaGenerateRequest: …
Request schema for generating an extraction schema.
data_schema: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Optional schema to validate, refine, or extend
file_id: Optional[str]
Optional file ID to analyze for schema generation
name: Optional[str]
Name for the generated configuration (auto-generated if omitted)
prompt: Optional[str]
Natural language description of the data structure to extract
class ExtractV2SchemaValidateRequest: …
Request schema for validating an extraction schema.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON Schema to validate for use with extract jobs
class ExtractV2SchemaValidateResponse: …
Response schema for schema validation.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
Validated JSON Schema, ready for use in extract jobs
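The two schema endpoints pair naturally: generate a draft schema from a prompt (optionally grounded in a sample file), then validate it before creating jobs. The payloads below are sketches from the field lists above; the prompt text and file ID are illustrative placeholders.

```python
# ExtractV2SchemaGenerateRequest: all fields optional.
generate_request = {
    "prompt": "Extract the vendor name, invoice date, and total amount",
    "file_id": "file_abc123",  # hypothetical sample file to analyze
}

# ExtractV2SchemaValidateRequest: data_schema is required.
validate_request = {
    "data_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "invoice_date": {"type": "string"},
            "total": {"type": "number"},
        },
    },
}
```

A successful validation response echoes back a `data_schema` that is ready to place in an ExtractConfiguration.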
class ExtractedFieldMetadata: …
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Document-level metadata (citations, confidence) keyed by field name
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-page metadata when extraction_target is per_page
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Per-row metadata when extraction_target is per_table_row
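With `confidence_scores` enabled, `document_metadata` is keyed by field name, which makes it easy to flag extractions worth human review. The metadata entry shape (`confidence`, `citation`) is an assumption for illustration; only the keying by field name is stated above.

```python
from typing import Dict, List

# Sketch of an ExtractJobMetadata payload (entry shapes are assumed).
extract_metadata = {
    "field_metadata": {
        "document_metadata": {
            "total":  {"confidence": 0.97, "citation": {"page": 2}},
            "vendor": {"confidence": 0.81, "citation": {"page": 1}},
        },
    },
}

def low_confidence_fields(meta: Dict, threshold: float = 0.9) -> List[str]:
    """Return field names whose document-level confidence is below threshold."""
    doc_meta = (meta.get("field_metadata") or {}).get("document_metadata") or {}
    return sorted(name for name, info in doc_meta.items()
                  if info.get("confidence", 1.0) < threshold)

flagged = low_confidence_fields(extract_metadata)
```

For `per_page` and `per_table_row` jobs, apply the same check to each entry of `page_metadata` or `row_metadata` instead.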