
Create Extract Job

client.extract.create(params: ExtractCreateParams, options?: RequestOptions): Promise<ExtractV2Job>
POST /api/v2/extract

Create an extraction job.

Extracts structured data from a document using either a saved configuration or an inline JSON Schema.

Input

Provide exactly one of:

  • configuration_id — reference a saved extraction config
  • configuration — inline configuration with a data_schema

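The two mutually exclusive request shapes can be sketched as plain objects (all IDs below are placeholders, and the inline schema is illustrative only):

```typescript
// Option 1: reference a saved extraction config by ID.
const withSavedConfig = {
  document_input_value: 'dfl-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
  configuration_id: 'cfg-11111111-2222-3333-4444-555555555555',
};

// Option 2: supply an inline configuration with a data_schema.
const withInlineConfig = {
  document_input_value: 'dfl-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
  configuration: {
    data_schema: {
      type: 'object',
      properties: { invoice_total: { type: 'number' } },
    },
  },
};
```

Sending both `configuration_id` and `configuration` in one request is invalid.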
Document input

Set document_input_value to a file ID (dfl-...) or a completed parse job ID (pjb-...).

The job runs asynchronously. Poll GET /extract/{job_id} or register a webhook to monitor completion.
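A minimal polling sketch. Here `getJob` stands in for whichever retrieval call your SDK version exposes (for example, a job lookup by ID); it is injected so the loop itself is SDK-agnostic:

```typescript
// Job statuses as documented for ExtractV2Job.
type JobStatus = 'PENDING' | 'RUNNING' | 'COMPLETED' | 'FAILED' | 'CANCELLED';

interface JobLike {
  status: JobStatus;
}

// Poll until the job leaves its non-terminal states (PENDING, RUNNING),
// or give up after maxAttempts.
async function pollJob(
  getJob: () => Promise<JobLike>,
  intervalMs = 2000,
  maxAttempts = 60,
): Promise<JobLike> {
  for (let i = 0; i < maxAttempts; i++) {
    const job = await getJob();
    if (job.status !== 'PENDING' && job.status !== 'RUNNING') return job;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('Timed out waiting for extract job');
}
```

For production use, prefer the webhook mechanism over tight polling loops.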

Parameters
params: ExtractCreateParams { document_input_value, organization_id, project_id, 3 more }
document_input_value: string

Body param: File ID or Parse Job ID to extract from

maxLength: 200
organization_id?: string | null

Query param

format: uuid
project_id?: string | null

Query param

format: uuid
configuration?: ExtractConfiguration { data_schema, cite_sources, confidence_scores, 9 more } | null

Body param: Extract configuration combining parse and extract settings.

data_schema: Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>

JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
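An illustrative `data_schema` value (the field names are examples, not required ones; the service's /schema/validate endpoint remains the authoritative check):

```typescript
// JSON Schema describing the fields to extract from an invoice-like document.
const dataSchema = {
  type: 'object',
  properties: {
    vendor_name: { type: 'string', description: 'Name of the vendor' },
    invoice_total: { type: 'number', description: 'Total amount due' },
    line_items: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          description: { type: 'string' },
          amount: { type: 'number' },
        },
      },
    },
  },
};
```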
cite_sources?: boolean

Include citations in results

confidence_scores?: boolean

Include confidence scores in results

extract_version?: string

Extract algorithm version. Use 'latest' or a date string.

extraction_target?: "per_doc" | "per_page" | "per_table_row"

Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, and per_table_row returns one object per table row.

Accepts one of the following:
"per_doc"
"per_page"
"per_table_row"
lang?: string

ISO 639-1 language code for the document

max_pages?: number | null

Maximum number of pages to process. Omit for no limit.

minimum: 1
parse_config_id?: string | null

Saved parse configuration ID to control how the document is parsed before extraction

parse_tier?: string | null

Parse tier to use before extraction (fast, cost_effective, or agentic)

system_prompt?: string | null

Custom system prompt to guide extraction behavior

target_pages?: string | null

Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.
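A client-side illustration of how a `target_pages` string such as `"1,3,5-7"` enumerates to individual 1-based page numbers. This helper is not part of the SDK; the API accepts the raw string:

```typescript
// Expand "1,3,5-7" into [1, 3, 5, 6, 7]. Malformed segments are skipped.
function expandTargetPages(spec: string): number[] {
  const pages: number[] = [];
  for (const part of spec.split(',')) {
    const [start, end] = part.split('-').map((n) => parseInt(n, 10));
    if (Number.isNaN(start)) continue;
    for (let p = start; p <= (end ?? start); p++) pages.push(p);
  }
  return pages;
}
```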

tier?: "cost_effective" | "agentic"

Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)

Accepts one of the following:
"cost_effective"
"agentic"
configuration_id?: string | null

Body param: Saved extract configuration ID (mutually exclusive with configuration)

webhook_configurations?: Array<WebhookConfiguration> | null

Body param: Outbound webhook endpoints to notify on job status changes

webhook_events?: Array<"extract.pending" | "extract.success" | "extract.error" | 14 more> | null

Events to subscribe to (e.g. 'parse.success', 'extract.error'). If null, all events are delivered.

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.running"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers?: Record<string, string> | null

Custom HTTP headers sent with each webhook request (e.g. auth tokens)

webhook_output_format?: string | null

Response format sent to the webhook: 'string' (default) or 'json'

webhook_url?: string | null

URL to receive webhook POST notifications

Returns
ExtractV2Job { id, created_at, document_input_value, 9 more }

An extraction job.

id: string

Unique job identifier (job_id)

created_at: string

Creation timestamp

format: date-time
document_input_value: string

File ID or parse job ID that was extracted

project_id: string

Project this job belongs to

status: string

Current job status.

  • PENDING — queued, not yet started
  • RUNNING — actively processing
  • COMPLETED — finished successfully
  • FAILED — terminated with an error
  • CANCELLED — cancelled by user
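A sketch of branching on a returned job's status (field names follow the ExtractV2Job shape documented here):

```typescript
// Map a job's status to a short human-readable summary, surfacing
// error_message only for the FAILED state.
function summarizeJob(job: { status: string; error_message?: string | null }): string {
  switch (job.status) {
    case 'PENDING':
    case 'RUNNING':
      return 'in progress';
    case 'COMPLETED':
      return 'done';
    case 'FAILED':
      return `failed: ${job.error_message ?? 'unknown error'}`;
    case 'CANCELLED':
      return 'cancelled';
    default:
      return `unknown status ${job.status}`;
  }
}
```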
updated_at: string

Last update timestamp

format: date-time
configuration?: ExtractConfiguration { data_schema, cite_sources, confidence_scores, 9 more } | null

Extract configuration combining parse and extract settings.

data_schema: Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>

JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
cite_sources?: boolean

Include citations in results

confidence_scores?: boolean

Include confidence scores in results

extract_version?: string

Extract algorithm version. Use 'latest' or a date string.

extraction_target?: "per_doc" | "per_page" | "per_table_row"

Granularity of extraction: per_doc returns one object per document, per_page returns one object per page, and per_table_row returns one object per table row.

Accepts one of the following:
"per_doc"
"per_page"
"per_table_row"
lang?: string

ISO 639-1 language code for the document

max_pages?: number | null

Maximum number of pages to process. Omit for no limit.

minimum: 1
parse_config_id?: string | null

Saved parse configuration ID to control how the document is parsed before extraction

parse_tier?: string | null

Parse tier to use before extraction (fast, cost_effective, or agentic)

system_prompt?: string | null

Custom system prompt to guide extraction behavior

target_pages?: string | null

Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.

tier?: "cost_effective" | "agentic"

Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)

Accepts one of the following:
"cost_effective"
"agentic"
configuration_id?: string | null

Saved extract configuration ID used for this job, if any

error_message?: string | null

Error details when status is FAILED

extract_metadata?: ExtractJobMetadata { field_metadata, parse_job_id, parse_tier } | null

Extraction metadata.

field_metadata?: ExtractedFieldMetadata { document_metadata, page_metadata, row_metadata } | null

Metadata for extracted fields including document, page, and row level info.

document_metadata?: Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null> | null

Document-level metadata (citations, confidence) keyed by field name

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
page_metadata?: Array<Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>> | null

Per-page metadata when extraction_target is per_page

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
row_metadata?: Array<Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>> | null

Per-row metadata when extraction_target is per_table_row

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
parse_job_id?: string | null

Reference to the ParseJob ID used for parsing

parse_tier?: string | null

Parse tier used for parsing the document

extract_result?: Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null> | Array<Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>> | null

Extracted data conforming to the data_schema. Returns a single object for per_doc, or an array for per_page / per_table_row.

Accepts one of the following:
Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>
Record<string, unknown>
Array<unknown>
string
number
boolean
Array<Record<string, Record<string, unknown> | Array<unknown> | string | number | boolean | null>>
Record<string, unknown>
Array<unknown>
string
number
boolean
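Because `extract_result` is a single object for per_doc and an array for per_page / per_table_row, a small normalizer (not part of the SDK) that always yields an array makes downstream handling uniform:

```typescript
// extract_result may be one object, an array of objects, or null/undefined.
type ExtractResult =
  | Record<string, unknown>
  | Array<Record<string, unknown>>
  | null
  | undefined;

// Always return an array of result objects, regardless of extraction_target.
function resultsAsArray(result: ExtractResult): Array<Record<string, unknown>> {
  if (result == null) return [];
  return Array.isArray(result) ? result : [result];
}
```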
metadata?: Metadata | null

Job-level metadata.

usage?: ExtractJobUsage { num_document_tokens, num_output_tokens, num_pages_extracted } | null

Extraction usage metrics.

num_document_tokens?: number | null

Number of document tokens

num_output_tokens?: number | null

Number of output tokens

num_pages_extracted?: number | null

Number of pages extracted

Create Extract Job

import LlamaCloud from '@llamaindex/llama-cloud';

const client = new LlamaCloud({
  apiKey: process.env['LLAMA_CLOUD_API_KEY'], // This is the default and can be omitted
});

const extractV2Job = await client.extract.create({
  document_input_value: 'dfl-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
});

console.log(extractV2Job.id);
Example response:
{
  "id": "ext-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "created_at": "2019-12-27T18:11:19.117Z",
  "document_input_value": "dfl-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "project_id": "prj-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "status": "COMPLETED",
  "updated_at": "2019-12-27T18:11:19.117Z",
  "configuration": {
    "data_schema": {
      "foo": {
        "foo": "bar"
      }
    },
    "cite_sources": true,
    "confidence_scores": true,
    "extract_version": "latest",
    "extraction_target": "per_doc",
    "lang": "en",
    "max_pages": 10,
    "parse_config_id": "cfg-11111111-2222-3333-4444-555555555555",
    "parse_tier": "fast",
    "system_prompt": "Extract all monetary values in USD. If a currency is not specified, assume USD.",
    "target_pages": "1,3,5-7",
    "tier": "cost_effective"
  },
  "configuration_id": "cfg-11111111-2222-3333-4444-555555555555",
  "error_message": "error_message",
  "extract_metadata": {
    "field_metadata": {
      "document_metadata": {
        "foo": {
          "foo": "bar"
        }
      },
      "page_metadata": [
        {
          "foo": {
            "foo": "bar"
          }
        }
      ],
      "row_metadata": [
        {
          "foo": {
            "foo": "bar"
          }
        }
      ]
    },
    "parse_job_id": "parse_job_id",
    "parse_tier": "parse_tier"
  },
  "extract_result": {
    "foo": {
      "foo": "bar"
    }
  },
  "metadata": {
    "usage": {
      "num_document_tokens": 0,
      "num_output_tokens": 0,
      "num_pages_extracted": 0
    }
  }
}