# Semantic Index

## Evaluation

### Metrics

nDCG@k:
- "The value of NDCG is determined by comparing the relevance of the items returned by the search engine to the relevance of the items that a hypothetical 'ideal' search engine would return."
- "The relevance of each result is represented by a score (also known as a 'grade') assigned to it for the search query. These scores are then discounted based on their position in the search results -- did they get recommended first or last?"

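The two points above can be sketched directly: compute the discounted sum of grades in ranked order, then normalize by the same sum for an ideal (descending) ordering. This is a minimal illustration, not a reference implementation; the function names and toy grade lists are invented for the example.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: each grade is discounted by log2 of its rank (rank 1 -> log2(2))."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """Normalize DCG by the DCG of an ideal (descending-grade) ordering of the same items."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Grades listed in the order the engine returned them:
# a perfect ordering scores 1.0; putting the best item last scores lower.
print(ndcg_at_k([3, 2, 1], k=3))  # 1.0
print(ndcg_at_k([1, 2, 3], k=3))  # < 1.0
```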
MRR@k:
- "Mean reciprocal rank quantifies the rank of the first relevant item found in the recommendation list."

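As a sketch of that definition: for each query, take 1/rank of the first relevant result within the top k (0 if none), then average over queries. Binary relevance flags and the helper name are assumptions for the example.

```python
def mrr_at_k(flags_per_query, k):
    """Mean reciprocal rank: average of 1/rank of the first relevant item per query."""
    total = 0.0
    for flags in flags_per_query:          # flags: 1 if the item at that rank is relevant
        for i, is_rel in enumerate(flags[:k]):
            if is_rel:
                total += 1.0 / (i + 1)     # ranks are 1-based
                break                      # only the first relevant item counts
    return total / len(flags_per_query)

# Query 1 hits at rank 3 (1/3), query 2 at rank 1 (1/1): mean is 2/3.
print(mrr_at_k([[0, 0, 1], [1, 0, 0]], k=3))
```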
MAP@k:
- "Mean average precision averages the precision@k metric at each relevant item position in the recommendation list."

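Sketching that: for one query, sum precision@i at every rank i where a relevant item appears, then average; MAP averages this over queries. Note the normalization varies across sources; this simplified version divides by the number of relevant items retrieved in the top k, which is an assumption of the example.

```python
def average_precision_at_k(flags, k):
    """Average of precision@i taken at each relevant position i within the top k."""
    hits = 0
    score = 0.0
    for i, is_rel in enumerate(flags[:k]):
        if is_rel:
            hits += 1
            score += hits / (i + 1)   # precision at this rank = relevant-so-far / rank
    return score / hits if hits else 0.0

def map_at_k(flags_per_query, k):
    """Mean of per-query average precision."""
    return sum(average_precision_at_k(f, k) for f in flags_per_query) / len(flags_per_query)

# Hits at ranks 1 and 3: AP = (1/1 + 2/3) / 2.
print(average_precision_at_k([1, 0, 1], k=3))
```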
Resources:
- [Evaluating recommendation metrics](https://www.shaped.ai/blog/evaluating-recommendation-systems-map-mmr-ndcg)
- [Math Walkthrough](https://towardsdatascience.com/demystifying-ndcg-bee3be58cfe0)