Test Coveo Machine Learning Models
When you have the required privileges, you can use the Model Testing page of the Administration Console to compare two Coveo Machine Learning (Coveo ML) models of the same type, or to compare an Automatic Relevance Tuning (ART) model with the default index ranking, by performing queries and reviewing the returned results.
To take advantage of the Model Testing page, your Coveo organization must contain at least one of the following:
Two active query suggestions (QS) models
One active ART model
Test Machine Learning Models
Access the Model Testing page.
In the left-hand Model drop-down menu, select an active machine learning model. If the model you want to test is grayed out and unresponsive, the model isn't in an Active state. See “Status” Column for more information on model statuses.
In the left-hand Pipeline drop-down menu, optionally replace the Empty query pipeline with the pipeline whose combination with the selected model you want to test (if the pipeline and model aren't already associated).
In the right-hand drop-down menus, select the pipeline-model combination that you want to compare with the one previously selected.
In the Model drop-down menu, depending on the first selected model:
If you selected an ART model, select another active ART model or the Default results (index ranking only) option. The Default results (index ranking only) section returns results based on the default ranking score only (no query pipeline rules are applied).
If you selected a QS model, select another active QS model.
(Optional) Click Edit to show additional parameters, and then modify the default values (see Additional Parameters reference).
The additional parameters impact only the results returned by the models; they don't affect the results returned by the index when the Default results (index ranking only) option is selected.
Advanced parameter values can impact model conditions and the training dataset.
For example, if model A is only applied to the community search hub, and you select the case creation page in the Origin 1 (Page/Hub) drop-down menu, model A wouldn’t return results.
In the search box, enter a test query, and then press Enter or click the search button.
(For ART model testing only) To review the ranking weights of all returned search results, select the Detailed view check box, or click a result card to review the ranking weights of a particular result (see Detailed View Reference).
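Behind the scenes, each side of the comparison amounts to a query against the search index with the selected pipeline and parameters. The sketch below maps the Model Testing options to the standard Coveo Search API query parameters (`q`, `pipeline`, `searchHub`, `tab`, `locale`); the function name and sample values are illustrative, and the page's internal calls may differ.

```python
# Sketch of the query parameters the Model Testing page effectively sends
# for one side of the comparison. Values are illustrative only.

def build_test_query(q, pipeline, search_hub, tab, locale="en"):
    """Assemble Search API parameters mirroring the Model Testing options."""
    return {
        "q": q,                  # the test query entered in the search box
        "pipeline": pipeline,    # the Pipeline drop-down selection
        "searchHub": search_hub, # Origin 1 (Page/Hub)
        "tab": tab,              # Origin 2 (Tab/Interface)
        "locale": locale,        # Language
    }

# The page runs one such query per side and shows the two result
# lists side by side; here both sides use the same pipeline.
left = build_test_query("return policy", "Community", "community", "All")
right = build_test_query("return policy", "Community", "community", "All")
```

Keeping every parameter identical on both sides, as above, isolates the effect of the model difference itself.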
Under Language, select the language in which the tested models will recommend results. The default value is English.
When the selected models are built with data in many languages, only the languages shared by both selected models are selectable (if any).
Origin 1 (Page/Hub)
Under Origin 1 (Page/Hub), select the search hub or page from which the tested models will recommend results, or select None if you don’t want to filter on a specific search hub.
Origin 2 (Tab/Interface)
Under Origin 2 (Tab/Interface), select the search tab or interface from which the tested models will recommend results, or select None if you don’t want to filter on a specific tab or interface.
In the Advanced Query box, optionally enter an advanced query expression including special field operators to further narrow the search results recommended by the tested models (see Advanced Field Queries).
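For illustration, a field query such as the following restricts the tested models' results to PDF items from a given source. The field names and source name are hypothetical examples; any field you reference must exist in your index.

```
@filetype==pdf @source=="Product Documentation"
```

Field expressions separated by spaces are combined with an implicit AND.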
When you’re a member of the Administrators [built-in group](/en/1980/#built-in-groups) (having the View all content privilege enabled), by default you only see search results for source items that you’re authorized to see. This means that you may not see some items of a source, or even entire sources.
As a member of the Administrators group, you can temporarily bypass these item permissions by selecting the View all content check box, which is useful to troubleshoot search issues.
When this option is selected, you can visualize the following information:
The total weight of the selected item.
Refined Query Terms Weight
The score given by the ART model to the most relevant queried keywords.
The score proportion given by ranking factors, such as the item's last modification date and location, that the index uses to evaluate the relevance score of each search result for each query (see About Ranking).
The score proportion given by the ART model based on the ranking modifier set in the model configuration, and the end-user query and search result click behavior.
The score proportion given by language computing, which evaluates the presence of query terms in item titles and descriptions.
| Action | Service - Domain | Required access level |
|--------|------------------|-----------------------|
| Test Coveo ML models | Machine Learning - Models | View |
| Test Coveo ML models | Search - Execute queries | Allowed |