Training ART Models by Linking Queries to Results

When leveraging Coveo Machine Learning (Coveo ML), manually adjusting the relevance of search results (e.g., creating thesaurus rules) becomes less necessary. Indeed, simply by analyzing the usage analytics data of a specific search interface, Coveo ML Automatic Relevance Tuning (ART) models learn to recommend the same results for different queries that contain synonyms. However, you can accelerate the learning process of an ART model by pointing out synonyms as well as the words behind acronyms.

  • In the worst-case scenario, the help you provide lasts only as long as the data used to train the Coveo ML model (see Data Period). In the best-case scenario, your search interface users repeat the actions you took, continually reinforcing the training.

    For example, you teach the ART model that BO means business optimization, and your model is based on three months of data. In 90 days, you may have to teach the model the same association again if, in the meantime, no users searching for BO and business optimization clicked the same results.

  • ART models do not understand synonyms; rather, they learn links between queries and clicked results. In the example above, the ART model learns the link between BO, business optimization, and the clicked results, thus removing the need for a thesaurus rule.
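The learning mechanism described above can be illustrated with a toy sketch. This is not Coveo's actual ART algorithm (the class, method names, and scoring are invented for illustration); it only shows the principle that repeated clicks link different queries to the same result, without the model ever knowing the terms are synonyms:

```python
from collections import defaultdict

class ToyQueryClickModel:
    """Illustration only: associate queries with results purely from click counts."""

    def __init__(self):
        # query text -> {result id -> number of clicks}
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, query, result_id):
        """A user searched `query` and clicked `result_id`."""
        self.clicks[query.lower()][result_id] += 1

    def recommend(self, query):
        """Return result ids for `query`, most-clicked first."""
        counts = self.clicks.get(query.lower(), {})
        return sorted(counts, key=counts.get, reverse=True)

model = ToyQueryClickModel()
# Users searching "BO" and "business optimization" click the same article:
for q in ["BO", "business optimization", "BO"]:
    model.record_click(q, "article-42")

print(model.recommend("business optimization"))  # ['article-42']
```

Because both query strings accumulate clicks on the same result, both now surface it, which is why a thesaurus rule becomes unnecessary once enough click data exists.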

To help train Coveo Machine Learning ART models by linking queries to results

  1. Ensure that the Coveo ML ART feature is configured and enabled on your search interface (see Adding and Managing Coveo Machine Learning Models).

  2. In your search interface, in the search box, enter the acronym or a term, and then click an appropriate result.

  3. Enter the words behind the acronym (or a synonym of the term) you entered in the previous step, and then click the same result.

  4. Repeat the procedure for each acronym or synonym.

  5. If the search interface is public, access it anonymously and repeat steps 2 to 4; otherwise, ask colleagues to access it and perform those steps.

  6. Once the model is trained, on the Content Browser page, verify that the model returns the expected results (see Inspect Items With the Content Browser). Repeat the procedure if needed.

    After the next (scheduled) model update, your model will recommend results based on your training.