public abstract class Similarity extends Object
Expert: Scoring API.
This is a low-level API; you should only extend it if you want to implement
an information retrieval model. If you are instead looking for a convenient way
to alter Lucene's scoring, consider extending a higher-level implementation such as
TFIDFSimilarity, which implements the vector space model with this API, or
just tweaking the default implementation, BM25Similarity.
Indexing time
At indexing time, the indexer calls the Similarity implementation to set a
per-document value for the field that will later be accessible via
LeafReader.getNormValues(String). Lucene makes no assumption
about what is in this norm, but it is most useful for encoding length
normalization information.
Implementations should carefully consider how the normalization is encoded: while
BM25Similarity encodes a combination of index-time boost
and length normalization information with
SmallFloat into a single byte, this
might not be suitable for all purposes.
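To make the quantization tradeoff concrete, here is a minimal, self-contained sketch of folding a 1/sqrt(length) length norm into a single byte. This is an illustration only, not Lucene's actual SmallFloat encoding, which uses a smarter floating-point scheme:

```java
public class NormEncoding {
    // Hypothetical lossy encoding: quantize the length norm 1/sqrt(length)
    // into 256 levels so it fits in a single byte. One byte per document
    // per field is cheap, but distinct lengths can collapse to the same value.
    static byte encodeNorm(int fieldLength) {
        float norm = (float) (1.0 / Math.sqrt(fieldLength));
        return (byte) Math.round(norm * 255f);
    }

    static float decodeNorm(byte b) {
        return (b & 0xFF) / 255f;
    }
}
```

Long fields lose the most precision under such a scheme (many lengths map to the same byte), which is exactly the kind of tradeoff an implementation must decide is acceptable.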
Many formulas require the use of average document length, which can be computed as
CollectionStatistics.sumTotalTermFreq() divided by either
CollectionStatistics.maxDoc() or CollectionStatistics.docCount(),
depending upon whether the average should reflect field sparsity.
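The two denominators give different averages when the field is sparse. A small standalone sketch with made-up statistics (the values below are assumptions for illustration, not real collection data):

```java
public class AvgFieldLength {
    // Hypothetical collection statistics: a field present in only
    // 8,000 of the 10,000 documents in the index.
    static final long SUM_TOTAL_TERM_FREQ = 1_000_000L; // total tokens in the field
    static final long MAX_DOC = 10_000L;                // all documents in the index
    static final long DOC_COUNT = 8_000L;               // documents that have the field

    // Average over every document: missing fields count as length 0.
    static float avgOverAllDocs() {
        return (float) SUM_TOTAL_TERM_FREQ / MAX_DOC;
    }

    // Average over only the documents that contain the field.
    static float avgOverFieldDocs() {
        return (float) SUM_TOTAL_TERM_FREQ / DOC_COUNT;
    }
}
```

Here the sparsity-aware average (125 tokens) is larger than the all-documents average (100 tokens); which one is "right" depends on the model.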
Additional scoring factors can be stored in named
NumericDocValuesFields and accessed
at query-time with LeafReader.getNumericDocValues(String).
Finally, using index-time boosts (either via folding into the normalization byte or
via DocValues) is an inefficient way to boost the scores of different fields if the
boost will be the same for every document. Instead, the Similarity can simply take a constant
boost parameter C, and
PerFieldSimilarityWrapper can return different
instances with different boosts depending upon field name.
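The per-field dispatch can be sketched in plain Java. This is only an analogy to what PerFieldSimilarityWrapper.get(String) does, not Lucene code, and the field names and boost values are made up:

```java
import java.util.Map;

public class PerFieldBoost {
    // Hypothetical constant per-field boosts, resolved once per field name
    // at query time rather than baked into every document's norm at index time.
    static final Map<String, Float> BOOSTS = Map.of("title", 2.0f, "body", 1.0f);

    static float boostFor(String field) {
        return BOOSTS.getOrDefault(field, 1.0f);
    }
}
```

In the real API, boostFor would instead return a Similarity instance constructed with that constant boost.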
Query time
At query time, queries interact with the Similarity via these steps:
1. The computeWeight(float, CollectionStatistics, TermStatistics...) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc.) across the entire collection. The TermStatistics and CollectionStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimWeight object.
2. For each segment in the index, the query creates a simScorer(SimWeight, org.apache.lucene.index.LeafReaderContext). The score() method is called for each matching document.
3. When IndexSearcher.explain(org.apache.lucene.search.Query, int) is called, queries consult the Similarity's DocScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.
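A typical collection-level statistic computed once in step 1 is an inverse document frequency. The following self-contained sketch uses the standard BM25-style idf formula; the formula is well known, but this is not Lucene's exact code:

```java
public class IdfSketch {
    // BM25-style idf from term- and collection-level statistics:
    //   idf = ln(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5))
    // docFreq comes from TermStatistics, docCount from CollectionStatistics,
    // so no extra I/O is needed beyond what computeWeight already receives.
    static float idf(long docFreq, long docCount) {
        return (float) Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
    }
}
```

Because both inputs arrive precomputed, the whole weight can be built once per query and reused across every segment and document.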
Nested Class Summary
Similarity.SimScorer: API for scoring documents against a query.
Similarity.SimWeight: Stores the weight for a query across the indexed collection.
Constructor Summary
Similarity(): Sole constructor.
Method Summary
computeNorm(FieldInvertState state): Computes the normalization value for a field, given the accumulated state of term processing for this field.
computeWeight(float boost, CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length).
simScorer(Similarity.SimWeight weight, LeafReaderContext context): Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.
public abstract long computeNorm(FieldInvertState state)
Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.
Parameters:
state - current processing state for this field
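To illustrate the contract (smaller norms for longer fields), here is a toy standalone implementation; a real computeNorm would quantize the value and fold in boosts as described above, and this sketch is not Lucene code:

```java
public class ComputeNormSketch {
    // Toy norm: a scaled, rounded 1/sqrt(length), so that longer fields
    // receive strictly smaller values, matching the contract of computeNorm.
    static long computeNorm(int lengthInTokens) {
        return Math.round(1000.0 / Math.sqrt(lengthInTokens));
    }
}
```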
public abstract Similarity.SimWeight computeWeight(float boost, CollectionStatistics collectionStats, TermStatistics... termStats)
Compute any collection-level weight (e.g. IDF, average document length) needed for scoring a query.
Parameters:
boost - a multiplicative factor to apply to the produced scores
collectionStats - collection-level statistics, such as the number of tokens in the collection.
termStats - term-level statistics, such as the document frequency of a term across the collection.
public abstract Similarity.SimScorer simScorer(Similarity.SimWeight weight, LeafReaderContext context) throws IOException
Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.
Parameters:
weight - collection information from computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored.
Throws:
IOException - if there is a low-level I/O error
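The SimScorer returned here typically combines the precomputed collection-level weight with per-document values (term frequency, decoded norm). A hedged, self-contained sketch of that shape, not Lucene's actual SimScorer:

```java
public class ScoreSketch {
    // score = collection-level weight (e.g. idf * boost, computed once)
    //         x a saturating function of the per-document term frequency
    //         x the decoded length norm for this document
    static float score(float weight, float freq, float lengthNorm) {
        return weight * (float) Math.sqrt(freq) * lengthNorm;
    }
}
```

The key point is the division of labor: everything that depends only on collection statistics lives in the weight, while score() touches only cheap per-document inputs.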
Copyright © 2000-2019 Apache Software Foundation. All Rights Reserved.