Monitor the status of Stream API operations
When you send content to a Catalog source using the Stream API, the API’s initial successful response confirms that the data was received, but it doesn’t guarantee that all the items will be successfully processed, updated, or indexed.
While you can manually inspect the status of an operation in the Log Browser of the Coveo Administration Console, this guide describes the API-driven process. Use this method to programmatically verify the status of your stream operations in automated workflows, such as completion notifications or monitoring tools.
This article covers how to monitor the status of both full and partial catalog data updates sent via the Stream API. The examples use the following endpoint from the Coveo Logs API:

```http
POST https://api.cloud.coveo.com/logs/v1/organizations/<ORGANIZATION_ID>?from=<START_TIME>&to=<END_TIME> HTTP/1.1
```
Where:

- `<ORGANIZATION_ID>` is the unique identifier of your Coveo organization. To learn how to find the organization ID, see Find your organization ID.
- `<START_TIME>` and `<END_TIME>` define the time range for the logs you want to retrieve. Use the W3C date and time format (for example, `2025-10-01T00:00:00Z`).
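The endpoint above can be assembled programmatically. The following is a minimal sketch using only the Python standard library; `build_logs_url` is a hypothetical helper name, and `myorg` is a placeholder organization ID.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

def build_logs_url(org_id: str, start: datetime, end: datetime) -> str:
    """Build the Logs API URL for a given organization and time range.

    Timestamps are serialized in the W3C date and time format
    (for example, 2025-10-01T00:00:00Z), as the endpoint expects.
    """
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # literal "Z" suffix for UTC
    query = urlencode({
        "from": start.astimezone(timezone.utc).strftime(fmt),
        "to": end.astimezone(timezone.utc).strftime(fmt),
    })
    return f"https://api.cloud.coveo.com/logs/v1/organizations/{org_id}?{query}"

# Example: logs from the last 24 hours for a hypothetical organization.
end = datetime(2025, 10, 2, tzinfo=timezone.utc)
start = end - timedelta(days=1)
print(build_logs_url("myorg", start, end))
```

Note that `urlencode` percent-encodes the colons in the timestamps, which is equivalent to the literal form shown above.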
Inspect full catalog data updates
When sending a full catalog update, which is used to refresh individual items or replace the entire catalog data, you typically send a batch of items in a single request.
This data can include new items to add, existing items to update, or items to delete.
Each operation is assigned a unique `orderingId`, which is returned in the response to the request that sent the file container to your source.
You can monitor the status of the items being processed by filtering the logs for specific statuses related to the batch operation.
To check the status of a batch operation

1. Make a `POST` request to the Logs API with the following JSON body:

   ```json
   {
     "tasks": ["STREAMING_EXTENSION"],
     "operations": ["BATCH_FILE"],
     "sourcesIds": ["<SOURCE_ID>"],
     "results": ["COMPLETED", "WARNING"]
   }
   ```

   This request filters for logs:

   - Associated with the `STREAMING_EXTENSION` stage of the indexing pipeline.
   - For the `BATCH_FILE` operation, which represents the submission of a batch of items.
   - Scoped to your specific source (`<SOURCE_ID>`).
   - With a result status of either `COMPLETED` or `WARNING`.
2. The response contains all batch operations that match the filter criteria and that occurred within the specified time range. To track the status of a specific operation, find the log entry whose `meta.orderingid` field matches the `orderingId` from your stream request.

   - If the `result` is `COMPLETED`, the batch was accepted successfully.
   - If the `result` is `WARNING`, some operations were invalid. The `meta.error` field provides details about the validation error to help you identify and correct the issue. For example:

     ```json
     {
       "guid": "{GUID}",
       "id": "{DOCUMENT_ID}",
       "task": "STREAMING_EXTENSION",
       "operation": "BATCH_FILE",
       "result": "WARNING",
       "meta": {
         "orderingid": 1755399334464,
         "mode": "StreamChunk",
         "error": "The document AddOrUpdate was skipped due to the new document being '424.89 KB' over the limit of '3 MB'."
       }
     }
     ```
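The lookup in step 2 can be automated. The following is a minimal sketch that assumes the Logs API response has already been parsed as JSON into a list of entries shaped like the example above; `find_batch_status` is a hypothetical helper name, and the sample entries and ordering IDs are illustrative.

```python
def find_batch_status(log_entries, ordering_id):
    """Return the log entry whose meta.orderingid matches the orderingId
    from the original stream request, or None if no entry matches."""
    for entry in log_entries:
        if entry.get("meta", {}).get("orderingid") == ordering_id:
            return entry
    return None

# Sample entries mimicking the shape of the Logs API response.
entries = [
    {"operation": "BATCH_FILE", "result": "COMPLETED",
     "meta": {"orderingid": 1755399334464, "mode": "StreamChunk"}},
    {"operation": "BATCH_FILE", "result": "WARNING",
     "meta": {"orderingid": 1755399999999,
              "error": "The document AddOrUpdate was skipped [...]"}},
]

match = find_batch_status(entries, 1755399334464)
if match is None:
    print("No log entry yet; the batch may still be processing.")
elif match["result"] == "COMPLETED":
    print("Batch accepted successfully.")
else:  # WARNING: inspect meta.error for the validation failure
    print(f"Some operations were invalid: {match['meta'].get('error')}")
```

A `None` result simply means no matching log entry exists in the queried time range, which can happen if the batch is still being processed or the range is wrong.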
Inspect partial catalog data updates
When sending partial catalog data updates to modify specific fields of existing items, a successful API response indicates that the batch of updates was received and accepted for processing.
Even if a batch is accepted, individual item updates can fail during the final indexing stage. This can happen for several reasons, such as:
- A partial update was valid, but after being resolved, the full item exceeded the maximum size limit.
- A partial update was sent for an item that doesn't exist in the source.
- An item is missing required metadata.
To check for indexing failures

1. Make a `POST` request to the Logs API with the following JSON body:

   ```json
   {
     "tasks": ["STREAMING_EXTENSION"],
     "operations": ["UPDATE"],
     "sourcesIds": ["<SOURCE_ID>"],
     "results": ["WARNING"]
   }
   ```

   This request filters for logs:

   - Associated with the `STREAMING_EXTENSION` task.
   - For the `UPDATE` operation, which represents the updating of individual items.
   - Scoped to your specific source (`<SOURCE_ID>`).
   - With a result status of `WARNING`. Because the query returns only warnings, the absence of a log entry for an item means its update succeeded.
2. Review the response:

   - If the response is an empty array, no warnings were generated: all your accepted updates from the specified time period were successfully applied to the index.
   - If the response contains log entries, each entry represents an item that failed to update. The `meta.error` field explains the reason for the failure. For example:

     ```json
     {
       "guid": "{GUID}",
       "id": "{DOCUMENT_ID}",
       "task": "STREAMING_EXTENSION",
       "operation": "UPDATE",
       "result": "WARNING",
       "meta": {
         "orderingid": 0,
         "error": "The document could not be modified, as it is missing a base value [...]"
       }
     }
     ```

     The `orderingid` is `0` if the item didn't exist prior to the update; otherwise, it reflects the `orderingId` of the last successful update for that item. The `meta.error` field provides details about why the update failed, helping you identify and correct the issue.
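The interpretation rules above can be sketched in code. The following assumes the response has already been parsed as JSON into a list of entries; `summarize_update_warnings` is a hypothetical helper name, and the sample document ID is illustrative.

```python
def summarize_update_warnings(log_entries):
    """Interpret WARNING entries returned by the UPDATE-operation query.

    An empty response means every accepted partial update was applied.
    Otherwise, each entry is a failed item; per the Logs API semantics
    described above, orderingid 0 indicates the item didn't exist in the
    source prior to the update.
    """
    if not log_entries:
        return "All accepted updates were applied successfully."
    lines = []
    for entry in log_entries:
        meta = entry.get("meta", {})
        reason = ("item not found in source" if meta.get("orderingid") == 0
                  else "update failed after a previous successful update")
        lines.append(f"{entry.get('id')}: {reason}: {meta.get('error')}")
    return "\n".join(lines)

# Example with one failed item, shaped like the log entry above.
warnings = [{"id": "doc-1", "result": "WARNING",
             "meta": {"orderingid": 0,
                      "error": "The document could not be modified [...]"}}]
print(summarize_update_warnings(warnings))
```

In a monitoring workflow, a non-empty summary could feed a notification or alert, while the empty-response case can be logged and ignored.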