Review Coveo Machine Learning model information
The Information tab of a model allows members with the required privileges to understand the learning process of a specific Coveo Machine Learning (Coveo ML) model. You can use this tab to review model information, such as the number of items known per search hub and samples of the top user queries.
To access the Information tab, access the Models (platform-ca | platform-eu | platform-au) page of the Coveo Administration Console, click the desired model, and then click Open in the Action bar.
"Error" section
If the status of your model is Degraded or Failed, an Error section is displayed under the Information tab. This section contains additional information to help you troubleshoot your Coveo ML model.
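If you manage many models, you may prefer to flag degraded or failed models programmatically rather than opening each Information tab. The sketch below is a hedged illustration: the `/rest/organizations/{orgId}/machinelearning/models` endpoint path, the `status` values, and the response shape are assumptions to verify against the Coveo Platform API documentation for your organization.

```python
import json
import urllib.request

PLATFORM = "https://platform.cloud.coveo.com"  # or the platform-eu / platform-au host

def models_needing_attention(models):
    """Return the models whose status suggests the Error section is displayed."""
    return [m for m in models if m.get("status", "").upper() in {"DEGRADED", "FAILED"}]

def fetch_models(org_id, token):
    # Assumed endpoint and response shape; confirm against the Coveo Platform API reference.
    req = urllib.request.Request(
        f"{PLATFORM}/rest/organizations/{org_id}/machinelearning/models",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# Illustrative data only; a real list would come from fetch_models(org_id, token).
sample = [{"id": "art-1", "status": "ONLINE"}, {"id": "qs-2", "status": "FAILED"}]
print([m["id"] for m in models_needing_attention(sample)])  # → ['qs-2']
```

Keeping the filtering in a pure helper (`models_needing_attention`) makes it testable without network access; only `fetch_models` touches the platform.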

Filter information by language
You can review language-specific information and statistics, such as the number of candidates and candidate examples, to ensure that the model is behaving as expected (see Reference).
On the subpage of your model, under Language, click the drop-down menu, and then select the desired language.
Reference
On the subpage of your model, you can review model candidates, fields, associated pipelines, and statistics, depending on the model type.
Automatic Relevance Tuning (ART)
ART "General" section
Information | Definition
---|---
Model type | The type of model the user is inspecting.
Model ID | The unique identifier of the model.
Content ID keys | The field used by the model to identify index items (e.g., …).

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
ART "Associated Pipelines" section
The section lists the query pipelines associated with the model.
Next to each pipeline card, you can click the options menu, and then select one of the following options:

- Edit association
- Dissociate

Depending on your selection:

- If you selected Edit association, on the Edit a Model Association subpage, make the desired changes, and then click Save (see ART advanced configuration options).
- If you selected Dissociate, in the Dissociate From Pipeline dialog, click Dissociate model.
ART "Model Building Statistics" section
Statistic | Definition
---|---
Search events | The total number of search events used in the model creation.
Click events | The total number of click events used in the model creation.
Visits | The total number of visits used in the model creation.
Learned queries | The number of unique queries for which the model can recommend items.
ART "Language" section
When reviewing this section, use the drop-down menu to filter information by the languages for which the model can make recommendations.
Statistic | Definition
---|---
Learned queries | The number of unique queries for which the model can recommend items, per language.
Top queries | The sample of the top queries (maximum 10) for which the model could recommend items.
Known words | The number of words known by the model.
Items per filter value | The number of items that can be recommended, for each filter (e.g., country, region, hub, interface, tab) known by the model, for a query.
Stop words | The number of words removed from user queries before recommending items.
Content Recommendations (CR)
CR "General" section
Information | Definition
---|---
Model type | The type of model the user is inspecting.
Model ID | The unique identifier of the model.
Content ID keys | The field used by the model to identify index items (e.g., …).
Learned recommendations | The number of unique items that the model can recommend.
Learned recommendations per language | The number of items that can be recommended, per language.
Possible recommendations per context key | The sample of the top queries for which the model could recommend items, for each listed context key.

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
CR "Associated Pipelines" section
The section lists the query pipelines associated with the model.
Next to each pipeline card, you can click the options menu, and then select one of the following options:

- Edit association
- Dissociate

Depending on your selection:

- If you selected Edit association, on the Edit a Model Association subpage, make changes to the applied Condition, and then click Save.
- If you selected Dissociate, in the Dissociate From Pipeline dialog, click Dissociate model.
CR "Model Building Statistics" section
Statistic | Definition
---|---
Search events | The total number of search events used in the model creation.
Click events | The total number of click events used in the model creation.
View events | The total number of view events used in the model creation.
Query Suggestions (QS)
QS "General" section
Information | Definition
---|---
Model type | The type of model the user is inspecting.
Model ID | The unique identifier of the model.
Suggestions per filter | The number of queries that can be suggested for each filter (e.g., country, region, hub, interface, tab) known by the model.
Context keys | The context keys that the model can use to provide personalized query suggestions.

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
QS "Associated Pipelines" section
The section lists the query pipelines associated with the model.
Next to each pipeline card, you can click the options menu, and then select one of the following options:

- Edit association
- Dissociate

Depending on your selection:

- If you selected Edit association, on the Edit a Model Association subpage, make changes to the applied Condition, and then click Save.
- If you selected Dissociate, in the Dissociate From Pipeline dialog, click Dissociate model.
QS "Model Building Statistics" section
Statistic | Definition
---|---
Search events | The total number of search events used in the model creation.
Click events | The total number of click events used in the model creation.
Learned suggestions | The number of unique queries that the model can suggest.
QS "Language" section
When reviewing this section, use the drop-down menu to filter information by the languages for which the model can make recommendations.
Statistic | Definition
---|---
 | The minimum number of clicks on a query suggestion that’s required for a candidate to remain in the model. This minimum is determined automatically depending on the language and the query count (see the reference table in Reviewing Coveo Machine Learning Query Suggestion datasets).
Learned suggestions | The number of unique queries that the model can suggest.
Top suggestions | The sample of the top queries (maximum 10) that the model could suggest.
Dynamic Navigation Experience (DNE)
DNE "General" section
Information | Definition
---|---
Model type | The type of model the user is inspecting.
Model ID | The unique identifier of the model.

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
DNE "Associated Pipelines" section
The section lists the query pipelines associated with the model.
Next to each pipeline card, you can click the options menu, and then select one of the following options:

- Edit association
- Dissociate

Depending on your selection:

- If you selected Edit association, on the Edit a Model Association subpage, make the desired changes, and then click Save (see DNE advanced configuration options).
- If you selected Dissociate, in the Dissociate From Pipeline dialog, click Dissociate model.
DNE "Model Building Statistics" section
Statistic | Definition
---|---
Search events | The total number of search events used in the model creation.
Click events | The total number of click events used in the model creation.
Visits | The total number of visits used in the model creation.
Facet selection events | The total number of facet selection events used in the model creation.
Learned queries | The number of unique queries for which the model can recommend items.
DNE "Language" section
When reviewing this section, use the drop-down menu to filter information by the languages for which the model can make recommendations.
Information | Definition
---|---
Learned queries | The number of unique queries per language for which the model can recommend items.
Top facets | The sample of the top facets per language for which the model can automatically select and reorder values.
Facets per filter value | The number of items that can be recommended per language, for each filter (e.g., country, region, hub, interface, tab) known by the model.
DNE "Facet Autoselect" section
You can review information about the behavior of the Facet Autoselect feature for a specific DNE model.
1. On the Models (platform-ca | platform-eu | platform-au) page, click the DNE model for which you want to review information about the Facet Autoselect feature, and then click Open in the Action bar.
2. On the page that opens, select the Configuration tab.
3. In the Facet Autoselect section, you can review the following information:
   - Whether the Facet Autoselect feature is enabled for the model that you’re inspecting.
   - The Facets (fields) to which the automatic selection of facet values applies.
   - The Sources in which the items matching the selected Facets are taken into account by the Facet Autoselect feature.
Product Recommendations (PR)
PR "General" section
Information | Definition
---|---
Model type | The type of model the user is inspecting.
Model ID | The unique identifier of the model, used to troubleshoot issues (if any).
Content ID keys | The field used by the model to identify index items (e.g., …).

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
PR "Associated Pipelines" section
The section lists the query pipelines associated with the model.
Next to each pipeline card, you can click the options menu, and then select one of the following options:

- Edit association
- Dissociate

Depending on your selection:

- If you selected Edit association, on the Edit a Model Association subpage, make changes to the applied Condition, and then click Save (see PR strategies options).
- If you selected Dissociate, in the Dissociate From Pipeline dialog, click Dissociate model.
PR "Model Building Statistics" section
This section shows the total number of event types used to build the model. The higher the numbers are for each event type, the better.
"General events" section
Statistic | Definition
---|---
Search events | The total number of search events used in the model creation.
Click events | The total number of click events used in the model creation.
View events | The total number of view events used in the model creation.
Custom events | The total number of custom events used in the model creation.
"Commerce events" section
Statistic | Definition
---|---
Product details views | The total number of … events used in the model creation.
Purchased products | The total number of … events used in the model creation.
Product quick views | The total number of … events used in the model creation.
PR "Strategy statistics" section
This section shows key statistics for all strategies used by the model.
To see key statistics for a specific strategy, under Strategy statistics, in the drop-down menu, select the desired strategy.
Cart recommender
If you selected the Cart recommender strategy, the following information is available:
Statistic | Definition
---|---
Most recommended SKUs | Sample lists of product SKUs contained in the top shopping carts that the model can recommend.
Recommended products | The number of items that the model can recommend.
Other PR strategies
If you selected the User recommender, Frequently viewed together, Frequently bought together, Popular items (viewed), or Popular items (bought) strategy, the following information is available:
Statistic | Definition
---|---
Most recommended SKUs | A sample of the top product SKUs known by the model.
Recommended products | The number of items that the model can recommend.
Smart Snippets
Smart Snippets "Snippets" section
Information | Definition
---|---
Items with snippets | Among the items scoped to train the model, the percentage of items for which the model was able to extract snippets.
Snippets available to display | The total number of snippets extracted by the model that can be displayed in search interfaces.
HTML headers in your items | Among the items scoped to train the model, the total number of headers that the model can use to identify questions.
Snippets per item (Average) | The average number of snippets per item, among items for which snippets were extracted.
Snippets per item (Min) | The number of snippets contained in the item that generated the least snippets.
Snippets per item (Max) | The number of snippets contained in the item that generated the most snippets.
Average words per snippet | The average snippet length.
Smart Snippets "Items without snippets" section
This section shows the number of items that don’t contain snippets among the items that were made available to train the model.
The Proportion of items without snippets section shows the percentage of items that the model can access but for which it was unable to extract any snippets.
See the troubleshooting tips section for information on how you can improve those numbers.

Smart Snippets "Items included" section
This section provides additional information about the number of items used by the model along with the reasons some of them ended up not being used by the model.

By default, the table shows statistics for all sources that were selected during the model configuration process and contain at least one item. However, you can use the picker at the top of the section to target the statistics for the items contained in a specific source.
The Items included section displays the following information:
Information | Definition
---|---
Selected for the model | The total number and proportion of items that the model can use.
No HTML tags or JSON-LD | The number and proportion of items in which the model couldn’t find JSON-LD or HTML tags.
Missing an ID | The number and proportion of items that don’t use the … field.
Duplicates | The number and proportion of items that are duplicates of items already used by the model and that were filtered out from the model training.
Used by the model | The number and proportion of items that are used by the model, out of the total items that the model can use.
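The counts and proportions in this table can be cross-checked with simple arithmetic. A minimal sketch; the reason labels and counts below are illustrative, not taken from a real model:

```python
def exclusion_report(selected, excluded_by_reason):
    """Return (count, percentage) per exclusion reason, plus the items actually used."""
    used = selected - sum(excluded_by_reason.values())
    report = {reason: (n, round(100 * n / selected, 1))
              for reason, n in excluded_by_reason.items()}
    report["Used by the model"] = (used, round(100 * used / selected, 1))
    return report

# Hypothetical counts mirroring the table's rows
report = exclusion_report(1000, {
    "No HTML tags or JSON-LD": 120,
    "Missing an ID": 30,
    "Duplicates": 50,
})
print(report["Used by the model"])  # → (800, 80.0)
```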
Smart Snippets "General" section
Information | Definition
---|---
Model type | The type of model algorithm (i.e., Smart Snippets).
Model ID | The unique identifier of the model.

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
Smart Snippets "Associated Pipelines" section
The section lists the query pipelines associated with the model.
Next to each pipeline card, you can click the options menu, and then select one of the following options:

- Edit association
- Dissociate

Depending on your selection:

- If you selected Edit association, on the Edit a Model Association subpage, make changes to the applied Condition, and then click Save.
- If you selected Dissociate, in the Dissociate From Pipeline dialog, click Dissociate model.
Smart Snippets troubleshooting tips
The Items without snippets and Items included sections indicate the reasons why some of the items you scoped to train the model were excluded from the training process.
This section provides tips to help you troubleshoot the content you selected to train the model and improve your model’s performance.
To improve the numbers listed in the Items without snippets and Items included sections, we recommend that you verify the following in the items selected to train the model:
If you use JSON-LD
- When using JSON-LD to format your content for the Smart Snippets model, we recommend that you inspect the fields in which the content formatted in JSON-LD appears. Compare your content with Google’s standards to make sure the correct format is used. You can use the Content Browser (platform-ca | platform-eu | platform-au) to review the properties of an item.
- If you’re using an indexing pipeline extension (IPE) to generate JSON-LD in the items used to train the model, we recommend that you verify the extension’s configuration and its output on the items you select to train the model. The Log Browser (platform-ca | platform-eu | platform-au) provides useful information about the items impacted by an IPE.

Leading practice
If you have to modify the items that are scoped for model training, we recommend that you create a new model once the source is rebuilt with the new version of the items. This ensures that the model only uses the most recent version of the items.
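For reference, schema.org FAQ markup pairs each question with an accepted answer. The sketch below generates that general shape with Python; it's a generic illustration of JSON-LD structure, not the exact schema any given Smart Snippets model requires, so validate real pages against Google's structured data documentation.

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

page = faq_jsonld([("How do I reset my password?",
                    "Click 'Forgot password' on the sign-in page.")])
print(json.dumps(page, indent=2))
```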
If you use raw HTML
When relying on raw HTML to format your content for the Smart Snippets model, we recommend that you verify the following:
- If you use CSS exclusions in your model, review the list of CSS exclusions and verify whether they block the use of relevant HTML content from your items.
- If you specified fields to target the content to be used when creating the model, ensure that these fields aren’t empty.
- To ensure that the model can accurately establish relationships between questions and answers, it’s important to properly format the HTML content. Questions should be formatted using HTML headers (`<h>` tags), with answers appearing immediately below the corresponding header in the HTML code of the page. Smart Snippets models are better at parsing answers when they appear in HTML paragraphs (`<p>` tags).

Leading practice
If you have to modify the items that are scoped for model training, we recommend that you create a new model once the source is rebuilt with the new version of the items. This ensures that the model only uses the most recent version of the items.
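The header-then-paragraph convention described above can be checked mechanically before reindexing. A minimal sketch using Python's standard html.parser; the sample page is illustrative:

```python
from html.parser import HTMLParser

class QAStructureChecker(HTMLParser):
    """Pair each <h1>-<h6> question with the first <p> that follows it."""
    def __init__(self):
        super().__init__()
        self.pairs = []          # [question_text, answer_text] per header found
        self._current = None     # "h" while inside a header, "p" while inside an answer

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.pairs.append(["", ""])
            self._current = "h"
        elif tag == "p" and self.pairs and not self.pairs[-1][1]:
            self._current = "p"

    def handle_endtag(self, tag):
        self._current = None

    def handle_data(self, data):
        if self._current == "h":
            self.pairs[-1][0] += data
        elif self._current == "p":
            self.pairs[-1][1] += data

checker = QAStructureChecker()
checker.feed("<h2>What is DNE?</h2><p>Dynamic Navigation Experience.</p>")
print(checker.pairs)  # → [['What is DNE?', 'Dynamic Navigation Experience.']]
```

A header whose `pairs` entry ends up with an empty answer is a question the model would likely struggle to pair with a snippet.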
Case Classification (CC)
CC "General" section
Information | Definition
---|---
Model type | The type of model the user is inspecting.
Model ID | The unique identifier of the model.
Model version | The version of the model, followed by the date of the model’s last update in UNIX timestamp format.
Engine version | The version number of the learning algorithm that was used to build the model.
Content ID keys | The field used by the model to identify index items (e.g., …).

Note
If some of the information listed in the table doesn’t appear in the General section, it’s because this information isn’t available for your model.
CC "Model Performance" section
The Model performance section indicates the model’s capacity to predict values for specific index fields.
To achieve this, the model applies classifications to the test data set, and then compares the results with the classifications that were manually assigned by previous users. Since the model learned from manual classifications on the training data set during its training phase, it knows which classifications were right for a given case. Because the test data set was ignored during that phase, attempting classifications on it shows how well the model generalizes beyond the cases it trained on.
In the Model performance section, this information is displayed in the Top prediction is correct and Correct prediction in top 3 columns.

The Top prediction is correct column indicates the percentage of the time that the model’s top value prediction matched the manually assigned value. The Correct prediction in top 3 column indicates the percentage of the time that one of the model’s top 3 value predictions matched the manually assigned value.
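These two columns correspond to what is commonly called top-1 and top-3 accuracy. A minimal sketch of the computation, with illustrative ranked-prediction lists (the field values are hypothetical):

```python
def top_k_accuracy(predictions, truths, k):
    """predictions: list of ranked value lists; truths: list of correct values."""
    hits = sum(1 for ranked, truth in zip(predictions, truths) if truth in ranked[:k])
    return hits / len(truths)

preds = [["billing", "shipping", "returns"],
         ["shipping", "billing", "returns"],
         ["returns", "shipping", "billing"],
         ["billing", "returns", "shipping"]]
truths = ["billing", "billing", "billing", "other"]

print(top_k_accuracy(preds, truths, 1))  # Top prediction is correct → 0.25
print(top_k_accuracy(preds, truths, 3))  # Correct prediction in top 3 → 0.75
```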
CC "Data Sets Distribution" section
During its training phase, the model splits all available cases into two data sets: the training data set and the test data set.
The model’s training data set represents 90% of all the cases that were selected when configuring the model. The model builds on this segment to learn from the classifications that were manually applied by previous users.
The model’s test data set represents the remaining 10% of the cases that were selected when configuring the model. Unlike the training data set, the model doesn’t use the information contained in these cases to train itself. Instead, this data set is used to evaluate the model’s performance.
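The 90/10 partition described above can be sketched as follows. How Coveo actually samples the split isn't documented here, so treat this purely as an illustration of the proportions:

```python
import random

def split_cases(cases, train_fraction=0.9, seed=0):
    """Shuffle the cases, then split them into training and test data sets."""
    shuffled = list(cases)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

training_set, test_set = split_cases(range(1000))
print(len(training_set), len(test_set))  # → 900 100
```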
The Data sets distribution section lists the index fields for which the model learned classifications.
This information is displayed in the Training data set and Test data set columns.

The Training data set column indicates the number of cases containing a specific field that was used to train the model.
The Test data set column indicates the number of cases that contain this specific field in the model’s test data set.
Each row of this table can be expanded to obtain further information about the values that the model can predict for a specific index field.
When expanding a specific row, the Sample value distribution column lists the values that can be predicted for a specific field (e.g., `product_type`), along with the number of times a specific value was used to classify available cases in both the Training data set and the Test data set.
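The per-value counts displayed when expanding a row amount to frequency counts over each data set. A minimal sketch with hypothetical `product_type` values:

```python
from collections import Counter

# Hypothetical manually assigned values for a product_type field
training_values = ["laptop", "laptop", "phone", "tablet", "phone", "laptop"]
test_values = ["phone", "laptop"]

sample_value_distribution = {
    "Training data set": Counter(training_values),
    "Test data set": Counter(test_values),
}
print(sample_value_distribution["Training data set"].most_common())
# → [('laptop', 3), ('phone', 2), ('tablet', 1)]
```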

Required privileges
By default, members with the required privileges can view and edit elements of the Models (platform-ca | platform-eu | platform-au) page.
The following table indicates the privileges required to use elements of the Models page and associated panels (see Manage privileges and Privilege reference).
Action | Service - Domain | Required access level
---|---|---
View models | Machine Learning - Models | View
Edit models | Organization - Organization | View
 | Machine Learning - Models | Edit