Query pipeline performance

The Overview tab on the Query Pipelines (platform-ca | platform-eu | platform-au) page in the Coveo Administration Console provides a seven-day snapshot of your query pipeline's performance. This snapshot is created using key metrics gathered from Coveo Usage Analytics (Coveo UA). The goal of a well-tuned query pipeline is to return relevant results to users' queries. Analyzing these metrics on a regular basis helps you assess whether the pipeline is meeting its relevance and performance goals.

This tab includes the Relevance and Performance subtabs. They each provide different metric charts that help you assess the pipeline’s current effectiveness in terms of user engagement and efficiency. This is especially useful when you want to evaluate the impact of recent changes to the pipeline, such as adding or editing rules or Coveo Machine Learning (Coveo ML) model associations.

Example

While reviewing the Overview tab of your query pipeline, you notice that the clickthrough rate is lower than expected. Given a recent influx of newly indexed items, you suspect the pipeline isn’t returning the most relevant results for certain queries.

To address this, you create a result ranking rule that boosts items added after a specific date. Within a few days of adding this new rule, you notice that the clickthrough rate begins to rise, suggesting that your changes helped improve user engagement.

"Relevance" subtab

The Relevance subtab contains the Average click rank, Clickthrough rate, Total searches, and Searches with clicks metric charts. It also contains the How to optimize relevance? section, which provides suggestions for improving the relevance of your results.

Relevance subtab of the query pipeline Overview tab

1 Average click rank: This chart displays the Average Click Rank (ACR), which measures the average position of the clicked items in the search results. The lower the average click rank, the more relevant the results are, since it indicates that users are clicking results that appear higher in the list.
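
To make the calculation concrete, here's a minimal sketch in TypeScript. It's not the actual Coveo UA implementation; the function name and input are illustrative only. It simply averages the 1-based ranks of clicked results.

// Minimal sketch: average the 1-based rank of each clicked result.
// `clickRanks` is a hypothetical list of ranks taken from click events.
function averageClickRank(clickRanks: number[]): number {
  if (clickRanks.length === 0) {
    return 0; // no clicks recorded
  }
  const sum = clickRanks.reduce((total, rank) => total + rank, 0);
  return sum / clickRanks.length;
}

// Clicks on results ranked 1, 3, and 2 give an average click rank of 2.
console.log(averageClickRank([1, 3, 2])); // 2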

2 Clickthrough rate: This chart displays the clickthrough rate, which measures the percentage of searches in which users clicked one or more results. This metric is calculated by dividing the number of search events that were followed by one or more click events by the total number of search events. It's a good indicator of how engaging or relevant your results are.
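
As an illustration, the following TypeScript sketch (again, not the actual Coveo UA implementation; the names are illustrative) computes the clickthrough rate from the two counts described above.

// Minimal sketch: clickthrough rate as the percentage of search events
// that were followed by one or more click events.
function clickthroughRate(searchesWithClicks: number, totalSearches: number): number {
  if (totalSearches === 0) {
    return 0; // avoid dividing by zero when there are no searches
  }
  return (searchesWithClicks / totalSearches) * 100;
}

// 450 of 600 searches led to at least one click: a 75% clickthrough rate.
console.log(clickthroughRate(450, 600)); // 75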

3 Total searches and Searches with clicks: This combined chart compares the total number of searches to the total number of searches that resulted in clicks. It helps you further understand the Clickthrough rate metric since it provides a visual breakdown of the number of searches that resulted in clicks versus those that did not.

4 How to optimize relevance?: This section provides suggestions on how you can improve the relevance of your results with ML models. It only appears if your query pipeline isn't associated with any Automatic Relevance Tuning (ART) or Query Suggestion (QS) models. This section shows you which existing models can be associated with the query pipeline. You can either associate an existing model with the query pipeline or create a new one.

Tip

You can dismiss the model association alert for a given model type by clicking the minus button next to the suggested model type.

"Performance" subtab

The Performance subtab contains the following metrics: Total queries and Query response time.

Performance subtab of the query pipeline Overview tab

1 Total queries: The total number of queries received from both user searches and automated requests that trigger a query. This metric is useful for understanding the volume of queries your query pipeline handles, and it includes all queries, whether or not they returned results.

2 Query response time: The average time your search interface takes to return results. This metric is useful for understanding the speed and efficiency of your query pipeline. A lower query response time indicates a more efficient pipeline, while a higher response time could affect other metrics, such as total queries.

What’s next?

To evaluate your query pipeline's performance over a longer period after changing rules or model associations, you can use the A/B Test feature to compare different configurations.