Monitor the status of Stream API operations

This is for:

Developer

When you send content to a Catalog source using the Stream API, the API’s initial successful response confirms that the data was received, but it doesn’t guarantee that all the items will be successfully processed and indexed.

While you can manually inspect an operation’s status in the Log Browser, this guide describes the API-driven process for programmatically tracking your updates in automated workflows.

Monitoring the status of a stream operation involves checking two distinct stages of the indexing pipeline:

  1. Verify that the entire batch of items was accepted.

  2. Verify that each item was successfully updated in the index.

The examples in this article use the following endpoint from the Coveo Logs API:

POST https://api.cloud.coveo.com/logs/v1/organizations/<ORGANIZATION_ID>?from=<START_TIME>&to=<END_TIME> HTTP/1.1

Where:

  • <ORGANIZATION_ID> is the unique identifier of your Coveo organization. To learn how to find the organization ID, see Find your organization ID.

  • <START_TIME> and <END_TIME> define the time range for the logs you want to retrieve. Use the W3C date and time format (for example, 2025-10-01T00:00:00Z).
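For illustration, here’s a minimal sketch of how you might build this URL for a recent time window. The `build_logs_url` helper and its parameters are hypothetical; `org_id` stands in for your actual organization ID:

```python
from datetime import datetime, timedelta, timezone

def build_logs_url(org_id: str, hours: int = 1) -> str:
    """Build a Logs API URL covering the last `hours` hours,
    using the W3C date and time format for the time range."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (
        "https://api.cloud.coveo.com/logs/v1/organizations/"
        f"{org_id}?from={start.strftime(fmt)}&to={end.strftime(fmt)}"
    )
```

Keep the window as narrow as practical; a tight `from`/`to` range reduces the number of log entries you have to scan through.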

Step 1: Verify batch acceptance

The first stage of verification is to confirm that the JSON payload containing your batch of items was successfully received and considered valid. This check applies to all types of stream operations, including both full and partial updates. Each operation is assigned a unique orderingId, which is returned in the response to your Stream API request.

To check the status of a batch operation

  1. Make a POST request to the Logs API with the following JSON body.

    {
      "tasks": [
        "STREAMING_EXTENSION" 1
      ],
      "operations": [
        "BATCH_FILE" 2
      ],
      "sourcesIds": [
        "<SOURCE_ID>" 3
      ],
      "results": [
        "COMPLETED", 4
        "WARNING"
      ]
    }

    This request filters for logs:

    1 Associated with the STREAMING_EXTENSION stage of the indexing pipeline.
    2 For the BATCH_FILE operation, which represents the submission and initial validation of a batch of items.
    3 Scoped to your specific source (<SOURCE_ID>).
    4 With a result status of either COMPLETED or WARNING.
  2. The response contains all batch operations that match the filter criteria within the specified time range. To track a specific operation, find the log entry where the meta.orderingid field matches the orderingId returned in the response to your Stream API request.

    • If the result is COMPLETED, the batch was accepted successfully.

    • If the result is WARNING, some operations were invalid. The meta.error field provides details about the validation error to help you identify and correct the issue. For example:

      {
        "guid": "{GUID}",
        "id": "{DOCUMENT_ID}",
        "task": "STREAMING_EXTENSION",
        "operation": "BATCH_FILE",
        "result": "WARNING",
        "meta": {
          "orderingid": 1755399334464,
          "mode": "StreamChunk",
          "error": "The document AddOrUpdate was skipped due to the new document being '424.89 KB' over the limit of '3 MB'."
        }
      }
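The lookup in step 2 can be sketched as follows. This is a minimal sketch that assumes the Logs API response has already been parsed into a list of entries shaped like the example above; `find_batch_status` is a hypothetical helper, and the sample error message is abridged:

```python
def find_batch_status(log_entries, ordering_id):
    """Return (result, error) for the batch operation whose
    meta.orderingid matches `ordering_id`, or None if no entry matches."""
    for entry in log_entries:
        if entry.get("meta", {}).get("orderingid") == ordering_id:
            return entry["result"], entry["meta"].get("error")
    return None

# Sample entry mirroring the WARNING example above (error message abridged)
entries = [
    {
        "task": "STREAMING_EXTENSION",
        "operation": "BATCH_FILE",
        "result": "WARNING",
        "meta": {
            "orderingid": 1755399334464,
            "mode": "StreamChunk",
            "error": "The document AddOrUpdate was skipped [...]",
        },
    }
]

status = find_batch_status(entries, 1755399334464)
# status == ("WARNING", "The document AddOrUpdate was skipped [...]")
```

If `find_batch_status` returns None, the operation hasn’t been logged yet; because indexing is asynchronous, you may need to poll with a delay before the entry appears.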

Step 2: Verify document indexing

After confirming that your batch was accepted, the second stage is to verify that each item was successfully applied and updated in the index. Even if a batch is valid, individual item updates can fail during this final stage. This can happen for several reasons, such as:

  • An item exceeded the maximum size limit after being resolved.

  • A partial update was sent for an item that doesn’t exist in the source.

  • An item is missing required metadata.

To check for indexing failures

  1. Make a POST request to the Logs API with the following JSON body:

    {
      "tasks": [
        "STREAMING_EXTENSION" 1
      ],
      "operations": [
        "UPDATE" 2
      ],
      "sourcesIds": [
        "<SOURCE_ID>" 3
      ],
      "results": [
        "WARNING" 4
      ]
    }

    This request filters for logs:

    1 Associated with the STREAMING_EXTENSION stage.
    2 For the UPDATE operation, which represents the final processing of an individual item in the index.
    3 Scoped to your specific source (<SOURCE_ID>).
    4 With a result status of WARNING. By querying only for warnings, you can assume that the absence of a result means success.
  2. Review the response:

    • If the response is an empty array, no warnings were generated, and all items in your batch were successfully indexed.

    • If the response contains log entries, each entry represents an item that failed to update, and the meta.error field explains the reason for the failure. For example:

      {
        "guid": "{GUID}",
        "id": "{DOCUMENT_ID}",
        "task": "STREAMING_EXTENSION",
        "operation": "UPDATE",
        "result": "WARNING",
        "meta": {
          "orderingid": 0, 1
          "error": "The document could not be modified, as it is missing a base value [...]" 2
        }
      }
      1 The orderingid is 0 if the item didn’t exist prior to the update; otherwise, it reflects the orderingId of the last successful update for that item.
      2 The meta.error field provides details about why the update failed, helping you identify and correct the issue.
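As a sketch of this check end to end: build the filter body, POST it to the Logs API, and treat an empty array as success. This assumes Python’s standard library and a bearer token with the privileges required to read logs; `build_update_warning_filter` and `fetch_logs` are hypothetical helpers, and `LOGS_URL`, `ACCESS_TOKEN`, and `SOURCE_ID` are placeholders for your own values:

```python
import json
import urllib.request

def build_update_warning_filter(source_id: str) -> dict:
    """Filter body matching step 1 above: UPDATE operations
    with a WARNING result, scoped to one source."""
    return {
        "tasks": ["STREAMING_EXTENSION"],
        "operations": ["UPDATE"],
        "sourcesIds": [source_id],
        "results": ["WARNING"],
    }

def fetch_logs(logs_url: str, token: str, body: dict):
    """POST the filter body to the Logs API and return the parsed response."""
    req = urllib.request.Request(
        logs_url,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (placeholders for your own values):
# warnings = fetch_logs(LOGS_URL, ACCESS_TOKEN,
#                       build_update_warning_filter(SOURCE_ID))
# if not warnings:
#     print("All items were successfully indexed")
# else:
#     for entry in warnings:
#         print(entry["meta"]["error"])
```

Because the filter only requests WARNING results, an empty response is the success signal; you don’t need to enumerate every successfully indexed item.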