Class Similarity

java.lang.Object
org.apache.lucene.search.similarities.Similarity
Direct Known Subclasses:
BM25Similarity, BooleanSimilarity, MultiSimilarity, PerFieldSimilarityWrapper, RawTFSimilarity, SimilarityBase, TFIDFSimilarity

public abstract class Similarity extends Object
Similarity defines the components of Lucene scoring.

Expert: Scoring API.

This is a low-level API; you should only extend it if you want to implement a new information retrieval model. If you are instead looking for a convenient way to alter Lucene's scoring, consider tweaking the default implementation, BM25Similarity, or extending SimilarityBase, which makes it easy to compute a score from index statistics.

Similarity determines how Lucene weights terms, and Lucene interacts with this class at both index-time and query-time.

Indexing Time

At indexing time, the indexer calls computeNorm(FieldInvertState), allowing the Similarity implementation to set a per-document value for the field that will be later accessible via LeafReader.getNormValues(String). Lucene makes no assumption about what is in this norm, but it is most useful for encoding length normalization information.

Implementations should carefully consider how the normalization is encoded: while Lucene's default implementation encodes length normalization information with SmallFloat into a single byte, this might not be suitable for all purposes.

Many formulas require the use of average document length, which can be computed via a combination of CollectionStatistics.sumTotalTermFreq() and CollectionStatistics.docCount().
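The average field length mentioned above is simply the ratio of those two statistics. A minimal sketch (the statistic values below are hypothetical, standing in for what CollectionStatistics.sumTotalTermFreq() and CollectionStatistics.docCount() would return):

```java
public class AvgFieldLength {
    // Average field length = total tokens in the field across the
    // collection, divided by the number of documents that have the field.
    static double averageFieldLength(long sumTotalTermFreq, long docCount) {
        return (double) sumTotalTermFreq / docCount;
    }

    public static void main(String[] args) {
        long sumTotalTermFreq = 1_000_000L; // hypothetical collection-wide token count
        long docCount = 50_000L;            // hypothetical documents containing the field
        System.out.println(averageFieldLength(sumTotalTermFreq, docCount)); // 20.0
    }
}
```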

Additional scoring factors can be stored in named NumericDocValuesFields and accessed at query-time with LeafReader.getNumericDocValues(String). However, this should not be done in the Similarity itself but externally, for instance by using FunctionScoreQuery.

Finally, using index-time boosts (either via folding into the normalization byte or via DocValues) is an inefficient way to boost the scores of different fields if the boost will be the same for every document. Instead, the Similarity can simply take a constant boost parameter C, and PerFieldSimilarityWrapper can return different instances with different boosts depending upon field name.
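A structural sketch of that dispatch pattern in plain Java (not Lucene's actual classes; the field names and boost values are hypothetical): one pre-boosted instance per field, selected by name, mirroring what PerFieldSimilarityWrapper.get(String) enables.

```java
import java.util.Map;

public class PerFieldBoostSketch {
    // Toy stand-in for a Similarity that takes a constant boost parameter.
    static final class BoostedSim {
        final float boost;
        BoostedSim(float boost) { this.boost = boost; }
        float score(float rawScore) { return boost * rawScore; }
    }

    // Different pre-boosted instances per field name, built once.
    static final Map<String, BoostedSim> PER_FIELD =
        Map.of("title", new BoostedSim(2.0f),
               "body",  new BoostedSim(1.0f));

    // Mirrors the per-field lookup; unknown fields fall back to "body".
    static BoostedSim get(String field) {
        return PER_FIELD.getOrDefault(field, PER_FIELD.get("body"));
    }

    public static void main(String[] args) {
        System.out.println(get("title").score(1.5f)); // 3.0
        System.out.println(get("body").score(1.5f));  // 1.5
    }
}
```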

Query Time

At query-time, queries interact with the Similarity via these steps:

  1. The scorer(float, CollectionStatistics, TermStatistics...) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc) across the entire collection. The TermStatistics and CollectionStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimScorer object.
  2. Then Similarity.SimScorer.score(float, long) is called for every matching document to compute its score.
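The two steps above can be sketched structurally in plain Java (a toy analogue, not Lucene's actual classes; the statistics and scoring formula are hypothetical): collection-wide work happens once in scorer(), and the returned object is then invoked per matching document.

```java
public class QueryTimeFlow {
    // Toy analogue of Similarity.SimScorer: holds statistics computed once.
    static final class SimScorerSketch {
        final float boost;
        final double idf; // precomputed once from collection-level statistics
        SimScorerSketch(float boost, double idf) { this.boost = boost; this.idf = idf; }
        // Step 2: called for every matching document.
        double score(float freq, long norm) {
            return boost * idf * freq / (freq + norm);
        }
    }

    // Step 1: toy analogue of scorer(boost, collectionStats, termStats).
    // The raw statistics are already passed in, so no extra I/O happens here.
    static SimScorerSketch scorer(float boost, long docCount, long docFreq) {
        double idf = Math.log(1.0 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
        return new SimScorerSketch(boost, idf);
    }

    public static void main(String[] args) {
        SimScorerSketch s = scorer(1.0f, 1000, 10); // hypothetical statistics
        System.out.println(s.score(3f, 20L) > s.score(1f, 20L)); // true: higher freq, higher score
    }
}
```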

Explanations

When IndexSearcher.explain(org.apache.lucene.search.Query, int) is called, queries consult the Similarity's SimScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.

WARNING: This API is experimental and might change in incompatible ways in the next release.
  • Constructor Details

    • Similarity

      protected Similarity()
      Default constructor. (For invocation by subclass constructors, typically implicit.)
    • Similarity

      protected Similarity(boolean discountOverlaps)
      Expert constructor that allows adjustment of getDiscountOverlaps() at index-time.

Overlap tokens are tokens, such as synonyms, that have a PositionIncrementAttribute of zero from the analysis chain.

      NOTE: If you modify this parameter, you'll need to re-index for it to take effect.

      Parameters:
      discountOverlaps - true if overlap tokens should not impact document length for scoring.
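A toy illustration of what discounting overlaps means for the length used in normalization (plain Java, not Lucene's indexing code; the position-increment values are hypothetical analyzer output): tokens with a position increment of zero do not add to the field length.

```java
public class OverlapDiscount {
    // Compute the field length used for length normalization from a stream
    // of position increments. With discountOverlaps=true, tokens whose
    // increment is 0 (overlaps such as injected synonyms) are not counted.
    static int fieldLength(int[] positionIncrements, boolean discountOverlaps) {
        int length = 0;
        for (int inc : positionIncrements) {
            if (!discountOverlaps || inc > 0) {
                length++;
            }
        }
        return length;
    }

    public static void main(String[] args) {
        // "fast", synonym "quick" (increment 0), "car"
        int[] incs = {1, 0, 1};
        System.out.println(fieldLength(incs, true));  // 2
        System.out.println(fieldLength(incs, false)); // 3
    }
}
```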
  • Method Details

    • getDiscountOverlaps

      public final boolean getDiscountOverlaps()
      Returns true if overlap tokens are discounted from the document's length.
    • computeNorm

      public long computeNorm(FieldInvertState state)
      Computes the normalization value for a field at index-time.

      The default implementation uses SmallFloat.intToByte4(int) to encode the number of terms as a single byte.

      WARNING: The default implementation is used by Lucene's supplied Similarity classes, which means you can change the Similarity at runtime without reindexing. If you override this method, you'll need to re-index documents for it to take effect.

      Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.

Note that for a given term-document frequency, greater unsigned norms must produce scores that are lower or equal, i.e. for two encoded norms n1 and n2 such that Long.compareUnsigned(n1, n2) > 0, SimScorer.score(freq, n1) <= SimScorer.score(freq, n2) for any legal freq.

      0 is not a legal norm, so 1 is the norm that produces the highest scores.
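That monotonicity contract can be checked mechanically. Below is a toy scorer that satisfies it (plain Java, not Lucene's actual formula), verified over the single-byte norm range, where norm 1 indeed produces the highest score:

```java
public class NormMonotonicity {
    // Toy scorer: length-normalized term frequency. For a fixed freq it is
    // non-increasing in the norm, satisfying the contract that a greater
    // unsigned norm must never produce a greater score.
    static double score(float freq, long norm) {
        return freq / (freq + norm);
    }

    public static void main(String[] args) {
        float freq = 2f;
        // Check the contract over the one-byte norm range [1, 255]:
        boolean ok = true;
        for (long n = 2; n <= 255; n++) {
            if (score(freq, n) > score(freq, n - 1)) ok = false;
        }
        System.out.println(ok);                                  // true
        System.out.println(score(freq, 1) >= score(freq, 255));  // true: norm 1 scores highest
    }
}
```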

      Parameters:
      state - accumulated state of term processing for this field
      Returns:
      computed norm value
      WARNING: This API is experimental and might change in incompatible ways in the next release.
    • scorer

      public abstract Similarity.SimScorer scorer(float boost, CollectionStatistics collectionStats, TermStatistics... termStats)
Computes any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.
Parameters:
boost - a multiplicative factor to apply to the produced scores
collectionStats - collection-level statistics, such as the number of tokens in the collection.
termStats - term-level statistics, such as the document frequency of a term across the collection.
Returns:
SimScorer object with the information this Similarity needs to score a query.