
List Extract Jobs

client.extract.list(query?: ExtractListParams { configuration_id, created_at_on_or_after, created_at_on_or_before, 9 more }, options?: RequestOptions): PaginatedCursor<ExtractV2Job { id, created_at, document_input_value, 9 more }>
GET/api/v2/extract

List extraction jobs with optional filtering and pagination.

Filter by configuration_id, status, document_input_value, or creation date range. Results are returned newest-first. Use expand=configuration to include the full configuration used, and expand=extract_metadata for per-field metadata.
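As a sketch of how these filters end up on the wire, the helper below serializes a params object onto the `GET /api/v2/extract` path. The repeated-key encoding for array parameters like `expand` is an assumption for illustration; the SDK handles serialization for you.

```typescript
// Illustrative only: how ExtractListParams might serialize onto GET /api/v2/extract.
function buildExtractListUrl(
  params: Record<string, string | number | string[] | undefined>,
): string {
  const qs = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    if (value === undefined) continue;
    if (Array.isArray(value)) {
      // Assumed encoding: repeat the key once per array element.
      for (const v of value) qs.append(key, v);
    } else {
      qs.set(key, String(value));
    }
  }
  return `/api/v2/extract?${qs.toString()}`;
}

const url = buildExtractListUrl({
  status: 'COMPLETED',
  created_at_on_or_after: '2024-01-01T00:00:00Z',
  expand: ['configuration', 'extract_metadata'],
  page_size: 50,
});
console.log(url);
```

In practice you would pass the same object to `client.extract.list(query)` rather than building URLs by hand.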

Parameters
query: ExtractListParams { configuration_id, created_at_on_or_after, created_at_on_or_before, 9 more }
configuration_id?: string | null

Filter by configuration ID

created_at_on_or_after?: string | null

Include jobs created at or after this timestamp (inclusive)

format: date-time
created_at_on_or_before?: string | null

Include jobs created at or before this timestamp (inclusive)

format: date-time
document_input_type?: string | null

Filter by document input type (file_id or parse_job_id)

document_input_value?: string | null

Filter by document input value

expand?: Array<string>

Additional fields to include: configuration, extract_metadata

job_ids?: Array<string> | null

Filter by specific job IDs

organization_id?: string | null
page_size?: number | null

Number of items per page

page_token?: string | null

Token for pagination
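If you drive pagination yourself instead of using the SDK's async iterator, the loop threads `next_page_token` from each response into the next request's `page_token`. The sketch below is written against a generic page-fetching function, since the real call requires a live client:

```typescript
interface Page<T> {
  items: T[];
  next_page_token: string | null;
}

// Drain every page by feeding each response's next_page_token back
// into the following request, stopping when no token is returned.
async function listAll<T>(
  fetchPage: (pageToken?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let token: string | undefined;
  do {
    const page = await fetchPage(token);
    all.push(...page.items);
    token = page.next_page_token ?? undefined;
  } while (token);
  return all;
}
```

With the real SDK, `fetchPage` would wrap something like `client.extract.list({ page_token: pageToken, page_size: 100 })`.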

project_id?: string | null
status?: "PENDING" | "THROTTLED" | "RUNNING" | 3 more | null

Filter by status

Accepts one of the following:
"PENDING"
"THROTTLED"
"RUNNING"
"COMPLETED"
"FAILED"
"CANCELLED"
Returns
ExtractV2Job { id, created_at, document_input_value, 9 more }

An extraction job.

id: string

Unique job identifier (job_id)

created_at: string

Creation timestamp

format: date-time
document_input_value: string

File ID or parse job ID that was extracted

project_id: string

Project this job belongs to

status: string

Current job status.

  • PENDING — queued, not yet started
  • THROTTLED — accepted but rate-limited before running
  • RUNNING — actively processing
  • COMPLETED — finished successfully
  • FAILED — terminated with an error
  • CANCELLED — cancelled by user
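When polling a job, the three final states are the ones to stop on. Treating `COMPLETED`, `FAILED`, and `CANCELLED` as terminal is an assumption implied by the descriptions above:

```typescript
type ExtractJobStatus =
  | 'PENDING' | 'THROTTLED' | 'RUNNING'
  | 'COMPLETED' | 'FAILED' | 'CANCELLED';

// Statuses that will not change again; safe to stop polling on these.
const TERMINAL: ReadonlySet<ExtractJobStatus> = new Set<ExtractJobStatus>([
  'COMPLETED', 'FAILED', 'CANCELLED',
]);

function isTerminal(status: ExtractJobStatus): boolean {
  return TERMINAL.has(status);
}
```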
updated_at: string

Last update timestamp

format: date-time
configuration?: ExtractConfiguration { data_schema, cite_sources, confidence_scores, 9 more } | null

Extract configuration combining parse and extract settings.

data_schema: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>

JSON Schema defining the fields to extract. Validate with the /schema/validate endpoint first.

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
cite_sources?: boolean

Include citations in results

confidence_scores?: boolean

Include confidence scores in results

extract_version?: string

Extract algorithm version. Use 'latest' or a date string.

extraction_target?: "per_doc" | "per_page" | "per_table_row"

Granularity of extraction: per_doc returns one object per document; per_page returns one object per page; per_table_row returns one object per table row

Accepts one of the following:
"per_doc"
"per_page"
"per_table_row"
lang?: string

ISO 639-1 language code for the document

max_pages?: number | null

Maximum number of pages to process. Omit for no limit.

minimum: 1
parse_config_id?: string | null

Saved parse configuration ID to control how the document is parsed before extraction

parse_tier?: string | null

Parse tier to use before extraction (fast, cost_effective, or agentic)

system_prompt?: string | null

Custom system prompt to guide extraction behavior

target_pages?: string | null

Comma-separated page numbers or ranges to process (1-based). Omit to process all pages.
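The documented format for `target_pages` (e.g. `"1,3,5-7"`) expands as sketched below. This mirrors the format described above; server-side parsing of edge cases may differ:

```typescript
// Expand a target_pages spec like "1,3,5-7" into 1-based page numbers.
function parseTargetPages(spec: string): number[] {
  const pages: number[] = [];
  for (const part of spec.split(',')) {
    const [lo, hi] = part.trim().split('-').map(Number);
    if (hi === undefined) {
      pages.push(lo); // single page, e.g. "3"
    } else {
      for (let p = lo; p <= hi; p++) pages.push(p); // range, e.g. "5-7"
    }
  }
  return pages;
}

console.log(parseTargetPages('1,3,5-7')); // [1, 3, 5, 6, 7]
```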

tier?: "cost_effective" | "agentic"

Extract tier: cost_effective (5 credits/page) or agentic (15 credits/page)

Accepts one of the following:
"cost_effective"
"agentic"
configuration_id?: string | null

Saved extract configuration ID used for this job, if any

error_message?: string | null

Error details when status is FAILED

extract_metadata?: ExtractJobMetadata { field_metadata, parse_job_id, parse_tier } | null

Extraction metadata.

field_metadata?: ExtractedFieldMetadata { document_metadata, page_metadata, row_metadata } | null

Metadata for extracted fields including document, page, and row level info.

document_metadata?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | null

Document-level metadata (citations, confidence) keyed by field name

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
page_metadata?: Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>> | null

Per-page metadata when extraction_target is per_page

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
row_metadata?: Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>> | null

Per-row metadata when extraction_target is per_table_row

Accepts one of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
parse_job_id?: string | null

Reference to the ParseJob ID used for parsing

parse_tier?: string | null

Parse tier used for parsing the document

extract_result?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>> | null

Extracted data conforming to the data_schema. Returns a single object for per_doc, or an array for per_page / per_table_row.

Accepts one of the following:
Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>
Record<string, unknown>
Array<unknown>
string
number
boolean
Array<Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null>>
Record<string, unknown>
Array<unknown>
string
number
boolean
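Because `extract_result` is a single object for per_doc but an array for per_page and per_table_row, downstream code often wants one shape. A minimal normalizing sketch:

```typescript
type ExtractResult =
  | Record<string, unknown>
  | Array<Record<string, unknown>>
  | null;

// Normalize extract_result to an array of records regardless of
// extraction_target, so callers handle a single shape.
function toRecords(result: ExtractResult): Array<Record<string, unknown>> {
  if (result === null) return [];
  return Array.isArray(result) ? result : [result];
}
```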
metadata?: Metadata | null

Job-level metadata.

usage?: ExtractJobUsage { num_document_tokens, num_output_tokens, num_pages_extracted } | null

Extraction usage metrics.

num_document_tokens?: number | null

Number of document tokens

num_output_tokens?: number | null

Number of output tokens

num_pages_extracted?: number | null

Number of pages extracted

List Extract Jobs

import LlamaCloud from '@llamaindex/llama-cloud';

const client = new LlamaCloud({
  apiKey: process.env['LLAMA_CLOUD_API_KEY'], // This is the default and can be omitted
});

// Automatically fetches more pages as needed.
for await (const extractV2Job of client.extract.list()) {
  console.log(extractV2Job.id);
}
{
  "items": [
    {
      "id": "ext-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "created_at": "2019-12-27T18:11:19.117Z",
      "document_input_value": "dfl-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "project_id": "prj-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "status": "COMPLETED",
      "updated_at": "2019-12-27T18:11:19.117Z",
      "configuration": {
        "data_schema": {
          "foo": {
            "foo": "bar"
          }
        },
        "cite_sources": true,
        "confidence_scores": true,
        "extract_version": "latest",
        "extraction_target": "per_doc",
        "lang": "en",
        "max_pages": 10,
        "parse_config_id": "cfg-11111111-2222-3333-4444-555555555555",
        "parse_tier": "fast",
        "system_prompt": "Extract all monetary values in USD. If a currency is not specified, assume USD.",
        "target_pages": "1,3,5-7",
        "tier": "cost_effective"
      },
      "configuration_id": "cfg-11111111-2222-3333-4444-555555555555",
      "error_message": "error_message",
      "extract_metadata": {
        "field_metadata": {
          "document_metadata": {
            "foo": {
              "foo": "bar"
            }
          },
          "page_metadata": [
            {
              "foo": {
                "foo": "bar"
              }
            }
          ],
          "row_metadata": [
            {
              "foo": {
                "foo": "bar"
              }
            }
          ]
        },
        "parse_job_id": "parse_job_id",
        "parse_tier": "parse_tier"
      },
      "extract_result": {
        "foo": {
          "foo": "bar"
        }
      },
      "metadata": {
        "usage": {
          "num_document_tokens": 0,
          "num_output_tokens": 0,
          "num_pages_extracted": 0
        }
      }
    }
  ],
  "next_page_token": "next_page_token",
  "total_size": 0
}