Evaluate generated answers and improve your RGA implementation

Important

The Coveo Knowledge Hub and Answer Manager are currently available as a beta to early-access customers only. Contact your Customer Success Manager for early access to this feature.

As with any generative answering system, evaluating the answers generated by Relevance Generative Answering (RGA) is a crucial part of your RGA implementation.

At a basic level, RGA relies on data and algorithms to generate answers. Evaluating the answers generated for specific user queries helps you assess how effective your RGA implementation is against your dataset. This evaluation is an important step when testing your RGA implementation in a pre-production environment. Post-production evaluations are equally valuable, helping you fine-tune and improve your implementation as your dataset evolves.
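For example, a pre-production evaluation can be as simple as running a set of representative test queries and checking each generated answer against the facts it should contain. The following is a minimal sketch of such a check, not part of Coveo's tooling: the test queries, generated answers, and expected keyword lists are hypothetical placeholders that you would replace with answers collected from your own RGA implementation.

```python
# Minimal sketch of a pre-production answer evaluation pass.
# Assumptions (hypothetical, not part of the Coveo product or APIs):
# - You have already collected RGA-generated answers for a set of test queries,
#   for example by copying them while testing in a pre-production environment.
# - "expected_keywords" is a hand-curated list of facts each answer should
#   mention; coverage below a threshold flags the query for review.

test_cases = [
    {
        "query": "How do I reset my password?",
        "generated_answer": "Open Account Settings, select Security, then choose Reset password.",
        "expected_keywords": ["account settings", "security", "reset password"],
    },
    {
        "query": "What is the refund window?",
        "generated_answer": "Refunds are available within 30 days of purchase.",
        "expected_keywords": ["30 days", "purchase", "original payment method"],
    },
]

def keyword_coverage(answer: str, keywords: list[str]) -> float:
    """Return the fraction of expected keywords found in the generated answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in keywords if kw.lower() in answer_lower)
    return hits / len(keywords) if keywords else 1.0

THRESHOLD = 0.8  # flag answers that cover less than 80% of the expected facts

for case in test_cases:
    score = keyword_coverage(case["generated_answer"], case["expected_keywords"])
    status = "OK" if score >= THRESHOLD else "REVIEW"
    print(f"[{status}] {case['query']} (coverage: {score:.0%})")
```

Flagged queries are candidates for closer inspection, for example by reviewing the chunks used to generate the answer, as described below.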

Use the Coveo Knowledge Hub to evaluate your generated answers and improve your RGA implementation. The Knowledge Hub offers tools that allow you to view answer feedback, inspect the segments of text (chunks) used to generate an answer, and create rules to manage your generated answers.

In addition to detailing how to use these tools effectively, the Knowledge Hub documentation provides a recommended workflow for improving your RGA implementation based on answer evaluation.