Class Similarity
- java.lang.Object
  - org.apache.lucene.search.similarities.Similarity
- Direct Known Subclasses:
  BM25Similarity, BooleanSimilarity, MultiSimilarity, PerFieldSimilarityWrapper, SimilarityBase, TFIDFSimilarity
public abstract class Similarity extends Object
Similarity defines the components of Lucene scoring.

Expert: Scoring API.

This is a low-level API; you should only extend it if you want to implement an information retrieval model. If you are instead looking for a convenient way to alter Lucene's scoring, consider extending a higher-level implementation such as TFIDFSimilarity, which implements the vector space model with this API, or just tweaking the default implementation, BM25Similarity.

Similarity determines how Lucene weights terms, and Lucene interacts with this class at both index-time and query-time.
Indexing Time

At indexing time, the indexer calls computeNorm(FieldInvertState), allowing the Similarity implementation to set a per-document value for the field that will later be accessible via LeafReader.getNormValues(String). Lucene makes no assumption about what is in this norm, but it is most useful for encoding length normalization information.

Implementations should carefully consider how the normalization is encoded: while Lucene's BM25Similarity encodes a combination of index-time boost and length normalization information with SmallFloat into a single byte, this might not be suitable for all purposes.

Many formulas require the average document length, which can be computed via a combination of CollectionStatistics.sumTotalTermFreq() and CollectionStatistics.maxDoc() or CollectionStatistics.docCount(), depending upon whether the average should reflect field sparsity.

Additional scoring factors can be stored in named NumericDocValuesFields and accessed at query-time with LeafReader.getNumericDocValues(String).

Finally, using index-time boosts (either by folding them into the normalization byte or via DocValues) is an inefficient way to boost the scores of different fields if the boost will be the same for every document. Instead, the Similarity can simply take a constant boost parameter C, and PerFieldSimilarityWrapper can return different instances with different boosts depending upon the field name.

Query Time

At query-time, Queries interact with the Similarity via these steps:
- The computeWeight(float, CollectionStatistics, TermStatistics...) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc.) across the entire collection. The TermStatistics and CollectionStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimWeight object.
- For each segment in the index, the Query creates a Similarity.SimScorer via simScorer(SimWeight, org.apache.lucene.index.LeafReaderContext). The score() method is then called for each matching document in that segment.
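The query-time steps above can be sketched as a schematic driver. The method names come from this API, but the surrounding code is illustrative (real queries, such as TermQuery's weight, perform these calls internally), and it assumes a Lucene 7.x classpath:

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.CollectionStatistics;
import org.apache.lucene.search.TermStatistics;
import org.apache.lucene.search.similarities.Similarity;

class QueryTimeFlow {
    // Schematic only: shows the order and frequency of the Similarity calls.
    static void scoreAll(Similarity sim, float boost, IndexReader reader,
                         CollectionStatistics collectionStats,
                         TermStatistics termStats) throws IOException {
        // Step 1: called a single time per query, using collection-wide stats.
        Similarity.SimWeight weight =
            sim.computeWeight(boost, collectionStats, termStats);

        // Step 2: one SimScorer per segment; score() is then invoked per match.
        for (LeafReaderContext leaf : reader.leaves()) {
            Similarity.SimScorer scorer = sim.simScorer(weight, leaf);
            // for each matching doc in this segment:
            //     float s = scorer.score(docID, freq);
        }
    }
}
```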
Explanations

When IndexSearcher.explain(org.apache.lucene.search.Query, int) is called, queries consult the Similarity's SimScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.

- See Also:
  IndexWriterConfig.setSimilarity(Similarity), IndexSearcher.setSimilarity(Similarity)
- WARNING: This API is experimental and might change in incompatible ways in the next release.
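As the See Also entries indicate, the same Similarity should normally be installed in two places: on the IndexWriterConfig, so norms are computed consistently at index time, and on the IndexSearcher, for query-time scoring. A minimal sketch, assuming a Lucene 7.x classpath; the BM25Similarity(k1, b) constructor and RAMDirectory are used only as examples:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

class SimilaritySetup {
    public static void main(String[] args) throws Exception {
        Similarity sim = new BM25Similarity(1.2f, 0.75f); // k1, b

        Directory dir = new RAMDirectory();
        IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
        iwc.setSimilarity(sim);      // used for computeNorm() at index time
        IndexWriter writer = new IndexWriter(dir, iwc);
        // ... add documents ...
        writer.close();

        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        searcher.setSimilarity(sim); // used for scoring at query time
    }
}
```

Mixing different Similarity implementations between indexing and searching generally produces meaningless scores, since the norm encoding written at index time must match what the query-time scorer expects to decode.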
Nested Class Summary

- static class Similarity.SimScorer
- static class Similarity.SimWeight: Stores the weight for a query across the indexed collection.
-
Constructor Summary

- Similarity(): Sole constructor.
-
Method Summary

- abstract long computeNorm(FieldInvertState state): Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
- abstract Similarity.SimWeight computeWeight(float boost, CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.
- abstract Similarity.SimScorer simScorer(Similarity.SimWeight weight, LeafReaderContext context): Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.
-
-
-
Method Detail
-
computeNorm

public abstract long computeNorm(FieldInvertState state)

Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).

Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.

- Parameters:
  state - current processing state for this field
- Returns:
  computed norm value
- WARNING: This API is experimental and might change in incompatible ways in the next release.
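The precision tradeoff described above can be made concrete with a standalone sketch. This is not Lucene's actual SmallFloat scheme; the floor-of-log2 bucketing below is a hypothetical stand-in that shows how squeezing lengths into a single byte loses more precision for longer fields:

```java
// Hypothetical norm encoding: bucket field lengths by floor(log2(length)) + 1.
// Illustrates the lossy-compression tradeoff; NOT Lucene's SmallFloat scheme.
class NormEncodingSketch {
    // Longer fields map to larger codes, with exponentially coarser buckets.
    static byte encodeLength(int length) {
        if (length <= 0) return 0;
        return (byte) (32 - Integer.numberOfLeadingZeros(length));
    }

    // Decode to the lower bound of the bucket: the exact length is lost.
    static int decodeLength(byte code) {
        return code == 0 ? 0 : 1 << (code - 1);
    }

    public static void main(String[] args) {
        System.out.println(encodeLength(1));        // 1
        System.out.println(encodeLength(100));      // 7
        System.out.println(decodeLength((byte) 7)); // 64: every length in 64..127 decodes here
    }
}
```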
-
computeWeight

public abstract Similarity.SimWeight computeWeight(float boost, CollectionStatistics collectionStats, TermStatistics... termStats)

Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.

- Parameters:
  boost - a multiplicative factor to apply to the produced scores
  collectionStats - collection-level statistics, such as the number of tokens in the collection
  termStats - term-level statistics, such as the document frequency of a term across the collection
- Returns:
  SimWeight object with the information this Similarity needs to score a query.
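An implementation that needs average document length would typically derive it inside computeWeight as a ratio of two collection statistics, as described in the class overview. The sketch below uses plain numbers in place of a real CollectionStatistics object; the figures are invented:

```java
// Sketch: average field length as sumTotalTermFreq / docCount. In Lucene the
// two values would come from CollectionStatistics.sumTotalTermFreq() and
// CollectionStatistics.docCount() or maxDoc(), chosen depending on whether
// the average should reflect field sparsity.
class AvgLengthSketch {
    static float averageFieldLength(long sumTotalTermFreq, long docCount) {
        return (float) ((double) sumTotalTermFreq / docCount);
    }

    public static void main(String[] args) {
        // Invented statistics: 1,000,000 tokens over 50,000 docs with the field.
        System.out.println(averageFieldLength(1_000_000L, 50_000L)); // 20.0
    }
}
```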
-
simScorer

public abstract Similarity.SimScorer simScorer(Similarity.SimWeight weight, LeafReaderContext context) throws IOException

Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.

- Parameters:
  weight - collection information from computeWeight(float, CollectionStatistics, TermStatistics...)
  context - segment of the inverted index to be scored
- Returns:
  SimScorer for scoring documents across context
- Throws:
  IOException - if there is a low-level I/O error
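Putting the three abstract methods together: the sketch below is a deliberately naive, hypothetical Similarity that ignores norms and scores each match by raw term frequency. It assumes the Lucene 7.x SimScorer abstract methods shown in this page and is meant to illustrate the wiring, not to be used in production:

```java
import java.io.IOException;
import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.CollectionStatistics;
import org.apache.lucene.search.TermStatistics;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.lucene.util.BytesRef;

// Hypothetical Similarity: score = raw term frequency, no length normalization.
class RawTFSimilarity extends Similarity {
    @Override
    public long computeNorm(FieldInvertState state) {
        return 1L; // constant norm: field length is deliberately ignored
    }

    @Override
    public SimWeight computeWeight(float boost, CollectionStatistics collectionStats,
                                   TermStatistics... termStats) {
        return new SimWeight() {}; // no collection-level state needed
    }

    @Override
    public SimScorer simScorer(SimWeight weight, LeafReaderContext context)
            throws IOException {
        return new SimScorer() {
            @Override
            public float score(int doc, float freq) {
                return freq; // raw term frequency as the score
            }
            @Override
            public float computeSlopFactor(int distance) {
                return 1f; // sloppy phrase matches are not penalized
            }
            @Override
            public float computePayloadFactor(int doc, int start, int end,
                                              BytesRef payload) {
                return 1f; // payloads do not affect the score
            }
        };
    }
}
```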