Testing Coveo Machine Learning Models

When you have the required privileges, you can use the Model Testing page of the Administration Console to compare two Coveo Machine Learning (Coveo ML) models of the same type, or to compare an Automatic Relevance Tuning (ART) model with the default ranking, by performing queries and reviewing the returned results.

Prerequisites

To take advantage of the Model Testing page, your Coveo organization must contain at least:

  • Two active QS models

    OR

  • One active ART model

Test Machine Learning Models

  1. Access the Model Testing page.

  2. In the left-hand Model drop-down menu, select an active machine learning model. If the model you want to test is grayed out and unresponsive, the model isn’t in an Active state. See “Status” Column for more information on model statuses.

  3. In the left-hand Pipeline drop-down menu, optionally replace the Empty query pipeline with the pipeline whose results you want to test in combination with the selected model, as if the two were associated (if they aren’t already).

  4. In the right-hand drop-down menus, select the pipeline-model combination that you want to compare with the one previously selected.

    In the Model drop-down menu, depending on the first selected model:

    • If you selected an ART model, select another active ART model or the Default results (index ranking only) option.

      The Default results (index ranking only) option returns results based on the default ranking score only (no query pipeline rules are applied).

    • If you selected a QS model, select another active QS model.

  5. (Optional) Click Edit to show additional parameters, and then modify the default values (see Additional Parameters reference).

    • The additional parameters only affect the results returned by the models; they don’t affect the results returned by the index when the Default results (index ranking only) option is selected.

    • The additional parameter values can affect model conditions and the training dataset.

      For example, if model A is only applied to the community search hub, and you select the case creation page in the Origin 1 (Page/Hub) drop-down menu, model A wouldn’t return results.

  6. In the search box, enter a test query, and then press Enter or click the search button.

  7. (For ART model testing only) To review the ranking weights of all returned search results, select the Detailed view check box, or click a result card to review the ranking weights of that particular result (see Detailed View Reference).

  • In your browser address bar, copy the URL and share the link with any colleague who has the required privileges to test models within your organization.

    The shared link contains the tested models, the specified query parameters (Language, origin level 1 (Page/Hub), origin level 2 (Tab/Interface), and advanced query (aq)), and the detailed view state (activated or not) during the comparison.
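As a rough sketch, a link of this kind can be assembled from those parameters. The query-string keys and base URL below are hypothetical placeholders, not the actual ones the Model Testing page generates:

```python
from urllib.parse import urlencode

def build_share_url(base_url, language, origin1, origin2, aq, detailed_view):
    # Parameter names are illustrative placeholders, not the actual
    # query-string keys the Model Testing page uses.
    params = {
        "language": language,
        "originLevel1": origin1,        # Origin 1 (Page/Hub)
        "originLevel2": origin2,        # Origin 2 (Tab/Interface)
        "aq": aq,                       # advanced query expression
        "detailedView": str(detailed_view).lower(),
    }
    return f"{base_url}?{urlencode(params)}"

url = build_share_url(
    "https://platform.cloud.coveo.com/admin/model-testing",
    "English", "CommunityHub", "All", "(@audience==Developer)", True,
)
```

Anyone opening such a link with the required privileges would land on the comparison with the same models, parameters, and view state preselected.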

  • You can use the browser back and forward buttons to navigate between tested queries.

Leading Practices

Review Suggested Results for Empty Queries

  • When testing ART models, launch an empty query and review the top five suggested items.

  • When testing QS models, launch an empty query and review the top five suggested queries.

Do Not Modify a Model Configuration in the Production Query Pipeline

  • You can duplicate a machine learning model, associate it with another query pipeline, and then tweak a model parameter value to test whether it improves relevance.

  • When testing a change in a model configuration, if you’re satisfied with the results, associate the model copy containing the change with the production pipeline, and then dissociate the original model.

  • When comparing an ART model with the default ranking, if you’re satisfied with the model relevance, associate the ART model with the production pipeline (see Associate a Model With a Query Pipeline).

Reference

Additional Parameters

Organization Version Index

Under Organization version index, select the index from which the tested models will recommend results. The default value is Coveo Cloud V2.

The Organization version index parameter is only available for Coveo organizations with indexes in both Coveo Cloud versions (V1 and V2).

Language

Under Language, select the language in which the tested models will recommend results. The default value is English.

When the selected models are built with data in many languages, only the languages shared by both selected models are selectable (if any).

Origin 1 (Page/Hub)

Under Origin 1 (Page/Hub), select the search hub or page from which the tested models will recommend results, or select All of them.

  • For ART models, only the hubs and pages that the model supports for the selected language are shown in the drop-down menu (see ART “Language” Section). However, for QS models, you must ensure the hubs and pages you select are supported for the selected language to receive suggestions (see QS “Language” Section).

  • When the selected models are built with data from many hubs or pages, only the hubs and pages shared by both selected models are selectable (if any).

Origin 2 (Tab/Interface)

Under Origin 2 (Tab/Interface), select the search tab or interface from which the tested models will recommend results, or select All of them.

  • For ART models, only the tabs and interfaces that the model supports for the selected language are shown in the drop-down menu (see ART “Language” Section). However, for QS models, you must ensure the tabs and interfaces you select are supported for the selected language to receive suggestions (see QS “Language” Section).

  • When the selected models are built with data from many tabs or interfaces, only the tabs and interfaces shared by both selected models are selectable (if any).

Advanced Query

In the Advanced Query box, optionally enter an advanced query expression including special field operators to further narrow the search results recommended by the tested models (see Advanced Field Queries).

(@audience==Developer)
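To illustrate what an expression like the one above does, here is a minimal sketch of an exact-match field filter. This only mimics the narrowing behavior on an in-memory list; the actual index query language is far richer:

```python
def apply_advanced_query(items, field, value):
    # Illustrative only: narrow a result set the way an advanced field
    # query such as (@audience==Developer) would, via exact field match.
    return [item for item in items if item.get(field) == value]

results = [
    {"title": "API reference", "audience": "Developer"},
    {"title": "Buyer FAQ", "audience": "Customer"},
]
narrowed = apply_advanced_query(results, "audience", "Developer")
# Only the item whose audience field equals "Developer" remains.
```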

Content Settings

Even when you’re a member of the Administrators built-in group (which has the View all content privilege enabled), by default you only see search results for source items that you’re authorized to see. This means that you may not see some items of a source, or even entire sources.

As a member of the Administrators group, you can temporarily bypass these item permissions by selecting the View all content check box, which allows you to troubleshoot search issues.

Detailed View

When this option is selected, you can review the following information:

Ranking Weights

The total weight of the selected item.

Refined Query Terms Weight

The score given by the ART model to the most relevant queried keywords.

Index Score

The proportion of the score given by index ranking factors, such as the item’s last modification date and location, which the index uses to evaluate the relevance score of each search result for each query (see Understanding Ranking).

Machine Learning

The proportion of the score given by the ART model, based on the ranking modifier set in the model configuration and on end-user query and search result click behavior.

Keyword Weight

The proportion of the score given by language processing, which evaluates the presence of query terms in item titles and descriptions.
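Assuming, purely for illustration, that the total weight is the sum of these components, the proportions shown in the detailed view could be computed like this (the actual way Coveo combines the factors isn’t documented here):

```python
def weight_breakdown(index_score, machine_learning, keyword_weight):
    # Hypothetical: treat the total weight as the sum of the components
    # and express each one as a proportion of that total.
    total = index_score + machine_learning + keyword_weight
    return {
        "total": total,
        "index_score": index_score / total,
        "machine_learning": machine_learning / total,
        "keyword_weight": keyword_weight / total,
    }

breakdown = weight_breakdown(600, 300, 100)
# A result dominated by index factors (0.6) with a 0.3 ML contribution.
```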

Required Privileges

By default, members of the Administrators and Relevance Managers built-in groups can test Coveo ML models using the Model Testing page.

The following table indicates the required privileges to test Machine Learning models (see Privilege Management and Privilege Reference).

Action      | Service - Domain          | Required access level
Test models | Machine Learning - Models | View
            | Search - Execute queries  | Allowed