public abstract class Similarity extends Object

Similarity defines the components of Lucene scoring.
Expert: Scoring API.
This is a low-level API; you should only extend it if you want to implement an information retrieval model. If you are instead looking for a convenient way to alter Lucene's scoring, consider extending SimilarityBase, which makes it easy to compute a score from index statistics.
Indexing Time

At indexing time, the indexer calls computeNorm(FieldInvertState), allowing the Similarity implementation to set a per-document value for the field that will be later accessible via LeafReader.getNormValues(String). Lucene makes no assumption about what is in this norm, but it is most useful for encoding length normalization information.
Implementations should carefully consider how the normalization is encoded: while Lucene's BM25Similarity encodes length normalization information with SmallFloat into a single byte, this might not be suitable for all purposes.
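To make the trade-off concrete, here is a self-contained sketch of a lossy, monotone one-byte length encoding. This is a hypothetical encoder for illustration only, not Lucene's actual SmallFloat code: it buckets the field length on a logarithmic scale so that any length fits in one byte, at the cost of roughly 10% relative precision.

```java
public class LengthNormSketch {
    // Hypothetical one-byte encoder for a field length (an illustration of
    // the idea behind SmallFloat-style compression, not Lucene's actual code).
    // The mapping is lossy but monotone: a longer field never encodes to a
    // smaller value, which is exactly what length normalization needs.
    public static int encodeLength(int length) {
        if (length < 1) throw new IllegalArgumentException("length must be >= 1");
        // 8 sub-steps per power of two; codes stay in 1..255 for any int length.
        int code = 1 + (int) Math.floor(Math.log(length) / Math.log(2) * 8);
        return Math.min(code, 255);
    }

    // Approximate inverse used at search time: returns the lower bound of
    // the bucket the original length fell into.
    public static double decodeLength(int code) {
        return Math.pow(2, (code - 1) / 8.0);
    }

    public static void main(String[] args) {
        for (int len : new int[] {1, 5, 42, 1000, 1_000_000}) {
            System.out.printf("len=%d -> code=%d -> ~%.1f%n",
                len, encodeLength(len), decodeLength(encodeLength(len)));
        }
    }
}
```

Because the encoder is monotone, comparisons between encoded norms still order documents by length, which is the property the scoring contract below relies on.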
Additional scoring factors can be stored in named NumericDocValuesFields and accessed at query-time with LeafReader.getNumericDocValues(String). However, this should not be done in the Similarity but externally, for instance with FunctionScoreQuery.
Finally, using index-time boosts (either via folding into the normalization byte or via DocValues) is an inefficient way to boost the scores of different fields if the boost will be the same for every document. Instead, the Similarity can simply take a constant boost parameter C, and PerFieldSimilarityWrapper can return different instances with different boosts depending upon field name.
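The per-field dispatch pattern can be sketched without Lucene's classes. `Sim`, `withBoost`, and `forField` below are hypothetical stand-ins for Similarity, the constant boost parameter, and PerFieldSimilarityWrapper.get(String); the point is that the wrapper hands back a differently-boosted instance per field name instead of baking boosts into the index.

```java
import java.util.Map;

public class PerFieldBoostSketch {
    // Hypothetical stand-in for a Similarity reduced to its constant
    // multiplicative boost C applied to a raw score.
    interface Sim {
        double score(double rawScore);
    }

    static Sim withBoost(double c) {
        return raw -> c * raw;
    }

    // Stand-in for PerFieldSimilarityWrapper.get(String field): pick a
    // differently-boosted instance per field, falling back to a default.
    static Sim forField(Map<String, Sim> perField, Sim fallback, String field) {
        return perField.getOrDefault(field, fallback);
    }

    public static void main(String[] args) {
        Map<String, Sim> perField = Map.of(
            "title", withBoost(2.0),  // title matches count double
            "body", withBoost(1.0));
        Sim fallback = withBoost(1.0);
        System.out.println(forField(perField, fallback, "title").score(1.5));
        System.out.println(forField(perField, fallback, "footer").score(1.5));
    }
}
```

Because the boost lives in the returned instance rather than in the index, changing it requires no reindexing, which is the efficiency argument made above.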
Query Time

At query-time, queries interact with the Similarity via these steps:

1. The scorer(float, CollectionStatistics, TermStatistics...) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc.) across the entire collection. The CollectionStatistics and TermStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimScorer.
2. Similarity.SimScorer.score(float, long) is then called for every matching document to compute its score.

When IndexSearcher.explain(org.apache.lucene.search.Query, int) is called, queries consult the Similarity's SimScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.
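The two-step contract above can be sketched in plain JDK Java. `CollectionStats`, `TermStats`, and `Scorer` are hypothetical stand-ins for Lucene's CollectionStatistics, TermStatistics, and Similarity.SimScorer; the sketch shows the collection-level work (here, a simple IDF) happening once in `scorer(...)`, while the returned object's `score(...)` is cheap arithmetic called once per matching document.

```java
public class QueryTimeFlowSketch {
    // Hypothetical stand-ins for Lucene's statistics objects.
    record CollectionStats(long docCount) {}
    record TermStats(long docFreq) {}

    // Stand-in for Similarity.SimScorer: per-document scoring only.
    interface Scorer {
        double score(double freq, long norm);
    }

    // Called once per query: fold the boost and IDF into the returned scorer.
    static Scorer scorer(double boost, CollectionStats cs, TermStats ts) {
        // Collection-level statistic, computed a single time, not per document.
        double idf = Math.log(1.0 + (double) cs.docCount() / (ts.docFreq() + 1));
        double weight = boost * idf;
        // Called once per matching document; the norm is interpreted here
        // directly as an encoded field length, so larger norms lower the score.
        return (freq, norm) -> weight * freq / (freq + norm);
    }

    public static void main(String[] args) {
        Scorer s = scorer(1.0, new CollectionStats(1000), new TermStats(9));
        System.out.println(s.score(3.0, 10)); // freq=3 in a short field
        System.out.println(s.score(3.0, 50)); // same freq in a longer field
    }
}
```

Capturing `weight` in the returned closure mirrors why Lucene splits the API in two: the expensive statistics are paid for once per query, never per document.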
Nested Class Summary

Similarity.SimScorer: Stores the weight for a query across the indexed collection.

Method Summary

abstract long computeNorm(FieldInvertState state): Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
abstract Similarity.SimScorer scorer(float boost, CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length) needed for scoring a query.
public abstract long computeNorm(FieldInvertState state)

Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).

Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.

Note that for a given term-document frequency, greater unsigned norms must produce scores that are lower or equal, i.e. for two encoded norms n1 and n2 such that Long.compareUnsigned(n1, n2) > 0, then SimScorer.score(freq, n1) <= SimScorer.score(freq, n2) for any legal freq.

0 is not a legal norm, so 1 is the norm that produces the highest scores.
Parameters:
state - current processing state for this field
Returns:
computed norm value
- WARNING: This API is experimental and might change in incompatible ways in the next release.
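The unsigned-comparison contract above can be exercised in plain Java. The scorer below is a hypothetical stand-in, not a Lucene class: it interprets the norm as an unsigned 64-bit encoded length, and a small property check confirms that a greater unsigned norm never yields a greater score.

```java
public class NormContractSketch {
    // Hypothetical scorer: treats the norm as an unsigned 64-bit encoded
    // length, so larger unsigned norms mean longer fields and lower scores.
    static double score(double freq, long norm) {
        // Convert the unsigned norm to a non-negative double length:
        // negative longs represent values in (2^63, 2^64).
        double length = (double) norm + (norm < 0 ? 0x1p64 : 0.0);
        return freq / (freq + length);
    }

    // The contract from the javadoc: if Long.compareUnsigned(n1, n2) > 0
    // then score(freq, n1) <= score(freq, n2).
    static boolean obeysContract(double freq, long n1, long n2) {
        if (Long.compareUnsigned(n1, n2) > 0) {
            return score(freq, n1) <= score(freq, n2);
        }
        return true;
    }

    public static void main(String[] args) {
        long[] norms = {1L, 2L, 255L, Long.MAX_VALUE, -1L}; // -1 = largest unsigned value
        for (long n1 : norms)
            for (long n2 : norms)
                if (!obeysContract(3.0, n1, n2))
                    throw new AssertionError("contract violated for " + n1 + ", " + n2);
        System.out.println("contract holds for all sampled norm pairs");
    }
}
```

A randomized or exhaustive check like this is a cheap way to validate a custom norm encoding before wiring it into scoring.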
public abstract Similarity.SimScorer scorer(float boost, CollectionStatistics collectionStats, TermStatistics... termStats)

Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.

Parameters:
boost - a multiplicative factor to apply to the produced scores
collectionStats - collection-level statistics, such as the number of tokens in the collection.
termStats - term-level statistics, such as the document frequency of a term across the collection.
Returns:
SimScorer object with the information this Similarity needs to score a query.
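The varargs TermStatistics parameter exists so that a multi-term query can hand over statistics for all of its terms at once. The sketch below is hypothetical stand-in code, not Lucene's implementation: `TermStats` stands in for TermStatistics, and the collection-level weight is computed by summing a standard BM25-style IDF per term, scaled by the boost.

```java
public class CollectionWeightSketch {
    // Hypothetical stand-in for TermStatistics: just the document frequency.
    record TermStats(long docFreq) {}

    // Standard BM25-style IDF, used here purely as an illustration of a
    // collection-level statistic derived from docCount and docFreq.
    static double idf(long docCount, long docFreq) {
        return Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
    }

    // Mirrors scorer(boost, collectionStats, termStats...): a multi-term
    // query passes stats for every term, and the weight sums per-term IDFs.
    static double collectionWeight(double boost, long docCount, TermStats... termStats) {
        double sum = 0;
        for (TermStats ts : termStats) {
            sum += idf(docCount, ts.docFreq());
        }
        return boost * sum;
    }

    public static void main(String[] args) {
        // Rare terms contribute more weight than common ones.
        System.out.println(collectionWeight(1.0, 10_000, new TermStats(5)));
        System.out.println(collectionWeight(1.0, 10_000, new TermStats(5_000)));
    }
}
```

All of this arithmetic runs once per query; the per-document work is deferred to the returned SimScorer, as described in the Query Time section above.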