---
title: Monitor the status of Stream API operations
slug: p8tb2371
canonical_url: https://docs.coveo.com/en/p8tb2371/
collection: coveo-for-commerce
source_format: adoc
---

# Monitor the status of Stream API operations

When you send content to a [Catalog source](https://docs.coveo.com/en/l5if0244/) using the Stream API, the API's initial successful response confirms that the data was received, but it doesn't guarantee that all the [items](https://docs.coveo.com/en/pa8f6515/) will be successfully processed and indexed.

While you can manually inspect an operation's status in the [**Log Browser**](https://platform.cloud.coveo.com/admin/#/orgid/logs/browser/) ([platform-ca](https://platform-ca.cloud.coveo.com/admin/#/orgid/logs/browser/) | [platform-eu](https://platform-eu.cloud.coveo.com/admin/#/orgid/logs/browser/) | [platform-au](https://platform-au.cloud.coveo.com/admin/#/orgid/logs/browser/)), this guide describes the API-driven process for programmatically tracking your updates in automated workflows.

For broader monitoring of Stream API indexing across your Coveo organization, see [Monitor system performance](https://docs.coveo.com/en/pbib2006/).

Monitoring the status of any stream operation is a two-step process that involves checking two distinct stages of the indexing pipeline:

1. Verify that the entire batch of items was accepted.
2. Verify that each item was successfully updated in the index.

The examples in this article use the following endpoint from the Coveo [Logs API](https://docs.coveo.com/en/16/api-reference/source-logs-api#tag/Logs/paths/~1organizations~1%7BorganizationId%7D/post):

```http
POST https://api.cloud.coveo.com/logs/v1/organizations/<organizationId>?from=<from>&to=<to> HTTP/1.1
```

Where:

* `<organizationId>` is the unique identifier of your Coveo organization. To learn how to find the organization ID, see [Find your organization ID](https://docs.coveo.com/en/n1ce5273/).
* `<from>` and `<to>` define the time range for the logs you want to retrieve.
Use the [W3C date and time format](https://www.w3.org/TR/NOTE-datetime) (for example, `2025-10-01T00:00:00Z`).

## Step 1: Verify batch acceptance

The first stage of verification is to confirm that the JSON payload containing your batch of items was successfully received and considered valid. This check applies to all types of stream operations, including both [full](https://docs.coveo.com/en/p4eb0129/) and [partial](https://docs.coveo.com/en/p4eb0515/) updates.

Each operation is assigned a unique `orderingId`, which is returned in the response to your Stream API request.

To check the status of a batch operation

1. Make a POST request to the Logs API with the following JSON body.

    ```json
    {
      "tasks": [
        "STREAMING_EXTENSION" <1>
      ],
      "operations": [
        "BATCH_FILE" <2>
      ],
      "sourcesIds": [
        "<sourceId>" <3>
      ],
      "results": [
        "COMPLETED", <4>
        "WARNING"
      ]
    }
    ```

    This request filters for logs:

    <1> Associated with the [`STREAMING_EXTENSION`](https://docs.coveo.com/en/1893#streaming-extension) stage of the indexing pipeline.
    <2> For the `BATCH_FILE` operation, which represents the submission and initial validation of a batch of items.
    <3> Scoped to your specific source (`<sourceId>`).
    <4> With a result status of either `COMPLETED` or `WARNING`.

2. The response contains all batch operations that match the filter criteria within the specified time range. To track a specific operation, find the log entry whose `meta.orderingid` field matches the `orderingId` from your request to the Stream API.

    * If the `result` is `COMPLETED`, the batch was accepted successfully.
    * If the `result` is `WARNING`, some operations were invalid. The `meta.error` field provides details about the validation error to help you identify and correct the issue.
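In an automated workflow, this lookup can be scripted. The following minimal Python sketch is illustrative only (the helper name and sample entries are not part of any Coveo SDK); it assumes you've already fetched the log entries from the Logs API and now want to match them against the `orderingId` returned by your Stream API call:

```python
# Minimal sketch: locate a specific batch operation in an already-fetched
# Logs API response. Entry shapes mirror the log examples in this article;
# the helper itself is hypothetical, not a Coveo SDK function.

def find_batch_status(log_entries, ordering_id):
    """Return (result, error) for the BATCH_FILE entry matching ordering_id,
    or (None, None) if the operation hasn't been logged yet."""
    for entry in log_entries:
        if (entry.get("operation") == "BATCH_FILE"
                and entry.get("meta", {}).get("orderingid") == ordering_id):
            return entry["result"], entry["meta"].get("error")
    return None, None

# Sample entries shaped like the Logs API results described above.
logs = [
    {"operation": "BATCH_FILE", "result": "COMPLETED",
     "meta": {"orderingid": 1755399334463}},
    {"operation": "BATCH_FILE", "result": "WARNING",
     "meta": {"orderingid": 1755399334464,
              "error": "The document AddOrUpdate was skipped due to the new "
                       "document being over the size limit."}},
]

result, error = find_batch_status(logs, 1755399334464)
print(result)  # WARNING
```

A `(None, None)` result simply means the operation hasn't appeared in the logs yet; since indexing is asynchronous, a real workflow would typically poll the Logs API until the matching entry shows up.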
The following is an example of a `WARNING` log entry:

```json
{
  "guid": "{GUID}",
  "id": "{DOCUMENT_ID}",
  "task": "STREAMING_EXTENSION",
  "operation": "BATCH_FILE",
  "result": "WARNING",
  "meta": {
    "orderingid": 1755399334464,
    "mode": "StreamChunk",
    "error": "The document AddOrUpdate was skipped due to the new document being '424.89 KB' over the limit of '3 MB'."
  }
}
```

## Step 2: Verify document indexing

After confirming that your batch was accepted, the second stage is to verify that each item was successfully applied and updated in the index. Even if a batch is valid, individual item updates can fail during this final stage. This can happen for several reasons, such as:

* An item exceeded the maximum size limit after being resolved.
* A partial update was sent for an item that doesn't exist in the source.
* An item is missing required metadata.

To check for indexing failures

1. Make a POST request to the Logs API with the following JSON body:

    ```json
    {
      "tasks": [
        "STREAMING_EXTENSION" <1>
      ],
      "operations": [
        "UPDATE" <2>
      ],
      "sourcesIds": [
        "<sourceId>" <3>
      ],
      "results": [
        "WARNING" <4>
      ]
    }
    ```

    This request filters for logs:

    <1> Associated with the `STREAMING_EXTENSION` stage.
    <2> For the `UPDATE` operation, which represents the final processing of an individual item in the index.
    <3> Scoped to your specific source (`<sourceId>`).
    <4> With a result status of `WARNING`. By querying only for warnings, you can assume that the absence of a result means success.

2. Review the response:

    * If the response is an empty array, no warnings were generated, and all items in your batch were successfully indexed.
    * If the response contains log entries, each entry represents an item that failed to update. The `meta.error` field explains the reason for the failure.
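Both the query body and the failure triage can be sketched in a few lines of Python. The request body below reproduces the JSON shown earlier; `build_warning_query()`, `failed_updates()`, and the sample document ID are hypothetical names for illustration, and the code operates on an already-fetched response rather than making a live call:

```python
# Sketch: build the Logs API filter for per-item indexing failures and
# collect the failing document IDs from a response. Helper names and the
# sample document ID are illustrative, not part of any Coveo SDK.

def build_warning_query(source_id):
    """Request body filtering for UPDATE warnings on one source."""
    return {
        "tasks": ["STREAMING_EXTENSION"],
        "operations": ["UPDATE"],
        "sourcesIds": [source_id],
        "results": ["WARNING"],
    }

def failed_updates(log_entries):
    """Map each failed item's document ID to its error message.
    An empty result means every item in the batch was indexed."""
    return {e["id"]: e["meta"].get("error", "") for e in log_entries}

sample_response = [
    {"id": "product://001", "operation": "UPDATE", "result": "WARNING",
     "meta": {"orderingid": 0,
              "error": "The document could not be modified, as it is "
                       "missing a base value"}},
]
print(failed_updates(sample_response))
```

A monitoring script would send `build_warning_query(...)` to the Logs API endpoint described at the top of this article, then alert on any non-empty `failed_updates(...)` result.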
The following is an example of a `WARNING` entry for a failed update:

```json
{
  "guid": "{GUID}",
  "id": "{DOCUMENT_ID}",
  "task": "STREAMING_EXTENSION",
  "operation": "UPDATE",
  "result": "WARNING",
  "meta": {
    "orderingid": 0, <1>
    "error": "The document could not be modified, as it is missing a base value [...]" <2>
  }
}
```

<1> The `orderingid` is either `0` if the item didn't exist prior to the update, or the `orderingId` of the last successful update for that item.
<2> The `meta.error` field provides details about why the update failed, helping you identify and correct the issue.
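A monitoring script can use this `orderingid` distinction to separate rejected new items (for example, a partial update sent for an item that was never indexed) from failed updates to existing items. A minimal sketch, with an illustrative helper name:

```python
# Sketch: classify a failed UPDATE log entry by its meta.orderingid,
# following the callout semantics above. Helper name is illustrative.

def classify_failure(entry):
    oid = entry["meta"]["orderingid"]
    if oid == 0:
        return "item did not exist before this update"
    return f"existing item; last successful update had orderingId {oid}"

print(classify_failure({"meta": {"orderingid": 0}}))
# → item did not exist before this update
```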