---
title: About search result ranking
slug: '1624'
canonical_url: https://docs.coveo.com/en/1624/
collection: searching-with-coveo
source_format: adoc
---

# About search result ranking

_Result ranking_ is the process during which the [index](https://docs.coveo.com/en/204/) evaluates a distinct ranking score for each [item](https://docs.coveo.com/en/210/) that matches a [query](https://docs.coveo.com/en/231/), and then sorts the results from most to least relevant (that is, in descending score order).

Coveo ranks search results by calculating a relevance score based on a series of ranking factors. The score spans from minus infinity to infinity. The higher the score, the higher the result appears in the result list.

## Relevance score

The relevance score is a combination of the index ranking algorithm in action during the [index ranking phases](#index-ranking-phases) and other relevance modifiers, such as [query ranking expressions (QREs)](https://docs.coveo.com/en/1472/) and query [ranking functions](https://docs.coveo.com/en/1448/). Members with the [required privileges](https://docs.coveo.com/en/3151/) can modify the relative weight of some index ranking factors by [adding query pipeline ranking weight rules](https://docs.coveo.com/en/3412/).

> **Notes**
>
> * You can inspect the score of [items](https://docs.coveo.com/en/210/) using the Debug panel (see [Use the JavaScript Search Debug Panel](https://docs.coveo.com/en/434#rankinginfo-section)).
>
> * A relevance score is returned only when the results are sorted by `relevancy`.
> If the results are sorted using other criteria, such as `date` or `field`, a score is neither calculated nor included in the results.

### Featured and ART-recommended results

Because of their nature, featured results should always appear at the top of the result list, and Coveo Machine Learning (Coveo ML) ART-recommended results should appear within the first ten results.
The relevance score boost is higher for featured results than for ART-recommended results. However, both types of results can be affected by other ranking factors (for example, other query pipeline component rules such as query ranking expressions).

### Index ranking phases

The mechanism behind the ranking process can be compared to a funnel. Starting with all items, the index receives a query from a user, isolates the items in which the user identity can be found in the permission [groups](https://docs.coveo.com/en/202/) (see [Group and granted security identities](https://docs.coveo.com/en/1603/), [Permission sets](https://docs.coveo.com/en/2007/), and [Permission levels](https://docs.coveo.com/en/1526/)), and then keeps only the items that match the query.

The ranking process is separated into four phases, each of them working on the items sorted by the preceding phase. Coveo natively uses [17 pre-tuned ranking weight factors](#pre-tuned-ranking-weight-factors) during these phases. The criteria with the biggest relevance impact are term proximity, item modified date (most recent), and term frequency. Each of these 17 criteria has been optimized over years of experience with a wide variety of indexed content, yielding highly satisfying out-of-the-box relevance scores in most cases. You can still carefully tune these parameters when needed (see [Manage Query Pipeline Ranking Weights](https://docs.coveo.com/en/3412/)). You can also troubleshoot ranking when a factor score seems too high or too low [by using the JavaScript Search Debug Panel](https://docs.coveo.com/en/434#rankinginfo-section).

> **Important**
>
> While you can use several parameters to tune the index ranking engine, make changes carefully to prevent negative side effects on performance or ranking.
> Contact [Coveo Support](https://connect.coveo.com/s/case/Case/Default) to get recommendations to address your index ranking issues.
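The funnel described above can be pictured with a toy multi-phase ranker. This is a hypothetical sketch, not Coveo's actual implementation: it filters items by permission groups, keeps items matching the query, then applies successive scoring phases, keeping only the top-scored survivors of each phase. All names and weights are made up for illustration.

```python
def funnel_rank(items, user_groups, query, phases):
    """items: list of dicts; phases: list of (scoring_fn, keep_count) pairs."""
    # Keep only items whose permission groups intersect the user's groups.
    candidates = [i for i in items if i["allowed_groups"] & user_groups]
    # Keep only items that match every query term.
    candidates = [i for i in candidates if all(t in i["text"] for t in query)]
    for score_fn, keep in phases:
        for item in candidates:
            # Each phase adds to the score accumulated by earlier phases.
            item["score"] = item.get("score", 0) + score_fn(item, query)
        candidates.sort(key=lambda i: i["score"], reverse=True)
        candidates = candidates[:keep]  # the funnel narrows at each phase
    return candidates
```

For example, a first phase could count term occurrences and a second phase could boost items matching a business rule; only items visible to the user ever enter the funnel.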
![Diagram showing the four index ranking phases](https://docs.coveo.com/en/assets/images/index-content/ranking-phases-diagram.svg)

#### Phase 1: Term weighting

The first phase attributes a score to items based on each term of the user query. Seven factors are used to rank the indexed items that the user has permission to access and that match the query. These factors cover areas such as the location of the query terms in those items (in the title, in the summary, in the concepts, etc.) and the item language (same language as the user query or not). Once the ranking is done, the 50,000 highest-scored items are kept.

On top of these ranking factors, [query ranking expressions (QREs)](https://docs.coveo.com/en/3375/), which are custom expressions used to modify the ranking score by a specified amount when items match certain conditions, are taken into account during this phase.

> **Notes**
>
> * A Coveo organization member with the required [privileges](https://docs.coveo.com/en/228/) can fine-tune the importance of each of the factors, but this should be done with care because it affects all results in all search interfaces (see [Manage ranking weight rules](https://docs.coveo.com/en/3412/)).
>
> * For each item, the score attributed for each factor is shown under **Term Weights** (see [Use the JavaScript Search Debug Panel](https://docs.coveo.com/en/434#rankinginfo-section)).

#### Phase 2: Item weighting

The second phase attributes a score to items based on their freshness (last modification date) and quality. This phase, which is performed on the 50,000 items with the highest ranking scores returned by phase 1, uses six ranking factors that cover areas such as the source rating (reputation from lowest to highest) to further adjust the relevance score of these items. Once the ranking is done, the highest-scored items are kept, and the next ranking phase is performed on them. The number of items kept ranges from 100 to 400.
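The two kinds of factors described above can be pictured with a toy scorer: phase 1 rewards query terms found in prominent locations (title, summary), and phase 2 rewards recently modified items. This is an illustrative sketch only; the weights and field names are hypothetical, not Coveo's actual formulas.

```python
import datetime

# Hypothetical location weights: a term in the title counts more than one
# in the summary, which counts more than one in the body.
LOCATION_WEIGHTS = {"title": 100, "summary": 40, "body": 10}

def term_location_score(item, query_terms):
    """Phase-1 style: score terms by where they occur in the item."""
    score = 0
    for term in query_terms:
        for location, weight in LOCATION_WEIGHTS.items():
            if term in item.get(location, "").lower().split():
                score += weight
    return score

def freshness_score(item, today, max_bonus=100):
    """Phase-2 style: linearly decaying bonus, full today, none after a year."""
    age_days = (today - item["modified"]).days
    return max(0, max_bonus - age_days * max_bonus // 365)
```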
> **Notes**
>
> * A Coveo organization member with the required privileges can fine-tune the importance of each of the factors, but this should be done with care because it affects all results in all search interfaces (see [Manage ranking weight rules](https://docs.coveo.com/en/3412/)).
>
> * This phase involves loading item-specific information, such as whether the items were modified recently.
>
> * For each item, the score attributed for each factor is shown under **Document Weights** (see [Use the JavaScript Search Debug Panel](https://docs.coveo.com/en/434#rankinginfo-section)).

#### Phase 3: Term frequency-inverse item frequency (TF-IDF)

The purpose of the third phase is to weight queried terms while taking their number of occurrences in items into account. The ranking engine evaluates the importance of a query term for an item based on the number of occurrences of this term in the item, but also inversely on the number of occurrences of the term in the index ([TF-IDF](https://en.wikipedia.org/wiki/Tf–idf)). The more frequent a term is in the index, the less informative the term becomes, since its significance and meaning are to a certain extent diluted.

**Example**

A common term such as `product` is worth less than a rare one such as `iPhone`.

Based on this methodology, each of the items returned from phase 2 receives an additional score, and then their ranks are adjusted accordingly.

> **Notes**
>
> * For each item, the score attributed for **Frequency**, **Correlation**, and **TF-IDF** for each queried term is shown under **Term weights** (see [Use the JavaScript Search Debug Panel](https://docs.coveo.com/en/434#rankinginfo-section)).
>
> * The index minimizes possible [stemming](https://docs.coveo.com/en/3436/) errors (from [phase 1](#phase-1-term-weighting)) by calculating a **Correlation** factor between the searched term and every possible expansion.
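The TF-IDF idea described above can be sketched in a few lines. This is the textbook formulation, not Coveo's proprietary implementation: a term's weight grows with its frequency in the item and shrinks as the term appears in more items across the index, so a rare term like `iPhone` outweighs a common one like `product`.

```python
import math

def tf_idf(term, item_text, corpus):
    """Textbook TF-IDF sketch: corpus is the list of all indexed item texts."""
    # Term frequency: occurrences of the term in this item.
    tf = item_text.split().count(term)
    # Inverse document frequency: penalize terms found in many items
    # (+1 avoids division by zero for unseen terms).
    docs_with_term = sum(1 for doc in corpus if term in doc.split())
    idf = math.log(len(corpus) / (1 + docs_with_term))
    return tf * idf
```

With a corpus where `product` appears in every item and `iphone` in only one, `iphone` receives a higher weight for the same item.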
> In search results, items containing highly correlated expansions are ranked higher than ones containing poorly correlated expansions.
>
> For example, when you search for `universe`, because of the way the stemming algorithm works, the index expands your query using terms from the `univer` stem classes, which can include `university`.
> When the terms `universe` and `university` rarely co-occur in your indexed items, items containing `university` are ranked lower.

#### Phase 4: Adjacency ranking

The last phase computes the proximity of query terms, giving more weight to items in which the terms appear close together in the text. This step fine-tunes the order of the items received from phase 3 and, once the reordering is done, items are returned in the search interface to the user as a response to the submitted query.

> **Notes**
>
> * Term proximity doesn't apply to queries with a single term.
> The number of items the index uses ranges from 100 to 400.
>
> * For each item, when ranking information is enabled, the score attributed for **Adjacency** is shown under **Document Weights** (see [Use the JavaScript Search Debug Panel](https://docs.coveo.com/en/434#rankinginfo-section)).
>
> * The value of the `docID` is used to break ties (if any) and ensure that the same result order is respected if the same query is performed in the future.
> Items with the same ranking score are sorted in descending `docID` order.

However, the ranking process isn't limited to these phases. Coveo offers many other features that you can use to personalize or customize the way your items are ranked. Coveo ML models and query pipelines are among the features influencing the relevance of search results (see [Coveo Machine Learning](https://docs.coveo.com/en/1727/) and [What's a query pipeline?](https://docs.coveo.com/en/1611/)).
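The tie-breaking rule described in the note above can be sketched as a two-key sort (illustrative code with hypothetical field names, not Coveo's implementation): results are ordered by descending score, and ties fall back to descending `docID`, so repeating the same query always yields the same order.

```python
def sort_results(items):
    """Sort by score descending, then by docID descending to break ties."""
    return sorted(items, key=lambda i: (i["score"], i["docID"]), reverse=True)
```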
#### Pre-tuned ranking weight factors

The following table lists all the ranking factors that the Coveo ranking engine takes into account out of the box at each phase of the ranking process:

[%header,cols="1,3,4"]
|===
|Phase
|Ranking factor (Label in Debug panel)
|Description

.8+.^|[Phase 1](https://docs.coveo.com/en/1624#phase-1-term-weighting)
|Term in title (Title) footnote:ranking-weight[Configurable in [ranking weight rules](https://docs.coveo.com/en/3412/).]
|The presence of queried keywords in the title of the item.

|Term in concepts (Concept) footnote:ranking-weight[]
|The presence of queried keywords in the automatically populated `@concepts` field of the item.

|Term in summary (Summary) footnote:ranking-weight[]
|The presence of queried keywords in the summary of the item.

|Terms in address (URI) footnote:coveo-support[Default value that's configurable with the help of [Coveo Support](https://connect.coveo.com/s/case/Case/Default).]
|The presence of queried keywords in the URI of the item.

|Term has formatting (Formatted) footnote:coveo-support[]
|Whether queried keywords are formatted in the item (for example, heading level, bold, large, etc.).

|Term casing (Casing) footnote:coveo-support[]
|Whether queried keywords have a special casing in the item.

|Term correlation within stemming classes (Relation) footnote:coveo-support[]
|The presence of words with the same root as the queried keywords in the item. For example, if a user searches for `programmer`, Coveo performs a stemming extension and searches the index for items matching `programmer`, `programmers`, `program`, `programming`, etc. Since `programmers` is closely related to the original query, items matching `programmers` obtain a higher score than those matching `programming` for this ranking factor.

|Item in user language (QRE) footnote:coveo-support[]
|Whether the item is in the language of the search interface from which the query originates.
.4+.^|[Phase 2](https://docs.coveo.com/en/1624#phase-2-item-weighting)
|Item modified recently (Date) footnote:ranking-weight[]
|Item last modification date. Items with the most recent modification date obtain a higher ranking.

|Item quality evaluation (Quality) footnote:quality-value[Default constant value of 180 that can't be modified.]
|The proximity of the item to the root of the indexed system.

|Source rating (Source) footnote:source-value[Default constant value of 500 that can't be modified.]
|The rating of the source the item resides in.

|Custom ranking weight (Custom) footnote:custom-value[Default value of 7, with a default modifier value of 5. Can be customized from 0 to 15 with custom weight metadata.]
|The custom weight assigned through an [indexing pipeline extension (IPE)](https://docs.coveo.com/en/206/) for the item.

|[Phase 3](https://docs.coveo.com/en/1624#phase-3-term-frequency-inverse-item-frequency-tf-idf)
|Term Frequency–Inverse Document Frequency (TF-IDF) footnote:ranking-weight[]
|The number of times a queried keyword appears in a given item, offset by the number of items in the index containing that keyword (see [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)).

|[Phase 4](https://docs.coveo.com/en/1624#phase-4-adjacency-ranking)
|Term proximity (Adjacency) footnote:ranking-weight[]
|The proximity of queried keywords to each other in the item.
|===

> **Notes**
>
> * The scores of pre-tuned ranking weight factors are constant for all the queries a user enters on your search page. As a result, they don't affect other factors in determining the position of a query's search results.
>
> * The relative importance of each ranking criterion is difficult to establish, since each criterion's score depends on many factors, such as the number of terms in the query, the type of sources that are indexed, the individual terms in the query, and the number of items in the index.
## Ranking example

You perform the `Washing Machine` query on your appliance website, and two results are returned. To learn why the results are in that specific order, you inspect their relevance score in the Debug panel.

You first take a look at the index ranking. The first result (`KleanKlothes Washing Machine`) has `Washing` and `Machine` in its title and contains several occurrences of `washing machine` in its content. Therefore, the index sets the result score at 5,000. The second result (`EZLaundry Machine`) has only `Machine` in its title, so the index gives the result a score of 3,000.

You then analyze how the Coveo ML ART feature impacted the ranking. Since `EZLaundry Machine` is clicked more often than `KleanKlothes Washing Machine`, and users usually don't return to the search page to perform another query after consulting the product page, the ART model adds 2,500 to the score of `EZLaundry Machine`. So far, the score is 5,000 for `KleanKlothes Washing Machine` and 5,500 for `EZLaundry Machine`.

Finally, you remember that your marketing team had an incentive to promote `KleanKlothes Washing Machine`. The team created a query ranking expression that adds 1,000 points, pushing the `KleanKlothes Washing Machine` score to 6,000, which is higher than the `EZLaundry Machine` score of 5,500. This is why `KleanKlothes Washing Machine` is the first returned result.
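The arithmetic of the example above can be verified with a trivial helper (the helper itself is hypothetical; the numbers are the ones from the example): the final score is simply the index score plus any ART and QRE boosts.

```python
def final_score(index_score, art_boost=0, qre_boost=0):
    """Sum the base index score with machine learning and QRE modifiers."""
    return index_score + art_boost + qre_boost

# KleanKlothes Washing Machine: strong title/content match, promoted via QRE.
klean = final_score(5_000, qre_boost=1_000)
# EZLaundry Machine: partial title match, boosted by ART from click data.
ez = final_score(3_000, art_boost=2_500)
```

With these boosts applied, `klean` reaches 6,000 versus 5,500 for `ez`, which matches the final order in the example.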