Testing Coveo Machine Learning Models

When you have the required privileges, you can use the Model Testing page of the administration console to compare two Coveo Machine Learning (Coveo ML) models of the same type, or to compare an Automatic Relevance Tuning (ART) model with the default ranking, by performing queries and reviewing the returned results.

Prerequisites

To take advantage of the Model Testing page, your Coveo Cloud organization must contain at least:

  • Two active query suggestions (QS) models

    OR

  • One active ART model

Test Machine Learning Models

  1. Access the Model Testing page.

  2. In the left-hand Model drop-down menu, select an active machine learning model.

  3. In the left-hand Pipeline drop-down menu, optionally replace the Empty query pipeline with the pipeline whose results you want to test in combination with the selected model, as if the pipeline and model were associated (if they aren't already).

  4. In the right-hand drop-down menus, select the pipeline-model combination that you want to compare with the one previously selected.

    In the Model drop-down menu, depending on the first selected model:

    • If you selected an ART model, select another active ART model or the Default results (index ranking only) option.

      The Default results (index ranking only) section returns results based on the default ranking score only (no query pipeline rules are applied).

    • If you selected a QS model, select another active QS model.

  5. (Optional) Click the Edit button to show additional parameters, and then modify the default values (see Additional Parameters reference).

    • The additional parameters only affect the results returned by the models; they have no effect on the results returned by the index when the Default results (index ranking only) option is selected.

    • The additional parameter values can affect whether a model's conditions are met and which training data applies.

      For example, if model A only applies to the community search hub and you select the case creation page in the Origin 1 (Page/Hub) drop-down menu, model A won't return results.

  6. In the search box, enter a test query, and then press Enter or click the search icon.

  7. (For ART model testing only) To review the ranking weights of all returned search results, select the Detailed view check box, or click a result card to review the ranking weights of that particular result (see Detailed View reference).

  • Copy the URL from your browser address bar and share the link with any colleagues who have the required privileges to test models within your organization.

    The shared link contains the tested models, the specified query parameters (Language, origin level 1 (Page/Hub), origin level 2 (Tab/Interface), and advanced query (aq)), and whether the detailed view was activated during the comparison. The sketch after this list shows how these parameters could also be passed when querying the Search API directly.

  • You can use the browser back and forward buttons to navigate between tested queries.
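
The following is a minimal sketch, not part of the Model Testing page, of how a comparable query could be issued directly through the Coveo Search API to double-check what a pipeline-model combination returns. The API key, organization ID, pipeline name, and field values below are hypothetical placeholders, and the parameter names are assumptions to verify against the Search API documentation for your organization.

    # Sketch: reproduce a Model Testing query through the Search API.
    # All identifiers below are hypothetical placeholders.
    import requests

    SEARCH_ENDPOINT = "https://platform.cloud.coveo.com/rest/search/v2"
    API_KEY = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # key allowed to execute queries
    ORG_ID = "myorganizationid"

    payload = {
        "q": "reset password",           # the test query entered in the search box
        "aq": "(@audience==Developer)",  # Advanced Query (aq)
        "searchHub": "CommunityHub",     # Origin 1 (Page/Hub)
        "tab": "All",                    # Origin 2 (Tab/Interface)
        "locale": "en",                  # Language
        "pipeline": "Community Search",  # pipeline associated with the tested model
    }

    response = requests.post(
        SEARCH_ENDPOINT,
        params={"organizationId": ORG_ID},
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()

    # Print the first few result titles to compare with the Model Testing page output.
    for result in response.json().get("results", [])[:5]:
        print(result.get("title"))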

Leading Practices

Review Suggested Results for Empty Queries

  • When testing ART models, launch an empty query and review the five most suggested items.

  • When testing QS models, launch an empty query and review the five most suggested queries.

Do Not Modify a Model Configuration in the Production Query Pipeline

Reference

Additional Parameters

Organization Version Index

Under Organization version index, select the index from which the tested models will recommend results. The default value is Coveo Cloud V2.

The Organization version index parameter is only available for Coveo Cloud organizations with indexes in both Coveo Cloud versions (V1 and V2).

Language

Under Language, select the language in which the tested models will recommend results. The default value is English.

When the selected models are built with data in many languages, only the languages shared by both selected models are selectable (if any).

Origin 1 (Page/Hub)

Under Origin 1 (Page/Hub), select the search hub or page from which the tested models will recommend results, or select All of them.

  • For ART models, only the hubs and pages that the model supports for the selected language are shown in the drop-down menu (see ART “Language” Section). However, for QS models, you must ensure the hubs and pages you select are supported for the selected language to receive suggestions (see QS “Language” Section).

  • When the selected models are built with data from many hubs or pages, only the hubs and pages shared by both selected models are selectable (if any).

Origin 2 (Tab/Interface)

Under Origin 2 (Tab/Interface), select the search tab or interface from which the tested models will recommend results, or select All of them.

  • For ART models, only the tabs and interfaces that the model supports for the selected language are shown in the drop-down menu (see ART “Language” Section). However, for QS models, you must ensure the tab or interface you select is supported for the selected language to receive suggestions (see QS “Language” Section).

  • When the selected models are built with data from many tabs or interfaces, only the tabs and interfaces shared by both selected models are selectable (if any).

Advanced Query

In the Advanced Query box, optionally enter an advanced query expression including special field operators to further narrow the search results recommended by the tested models (see Advanced Field Queries).

(@audience==Developer)
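
For instance, assuming hypothetical @audience, @source, and @year fields exist in your index, you could combine several field expressions to narrow the recommended results further:

(@audience==Developer) (@source=="Product Documentation") (@year>=2020)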

Detailed View

When this option is selected, you can visualize the following information:

Ranking Weights

The total weight of the selected item.

Refined Query Terms Weight

The score given by the ART model to the most relevant queried keywords.

Index Score

The proportion of the score given by the ranking factors, such as the item last modification date and the item location, that the index uses to evaluate the relevance score of each search result for each query (see Understanding Ranking).

Machine Learning

The proportion of the score given by the ART model, based on the ranking modifier set in the model configuration and on end-user query and search result click behavior.

Keyword Weight

The score proportion given by language computing, which evaluates the presence of query terms in item titles and descriptions.

Required Privileges

By default, members of the Administrators and Relevance Managers built-in groups can test Coveo ML models using the Model Testing page.

The following table indicates the required privileges to test Machine Learning models (see Privilege Management and Privilege Reference).

Action        | Service - Domain           | Required access level
Test models   | Machine Learning - Models  | View
              | Search - Execute queries   | Allowed