Class Similarity
- java.lang.Object
  - org.apache.lucene.search.similarities.Similarity

Direct Known Subclasses:
BM25Similarity, BooleanSimilarity, MultiSimilarity, PerFieldSimilarityWrapper, SimilarityBase, TFIDFSimilarity

public abstract class Similarity extends Object

Similarity defines the components of Lucene scoring.

Expert: Scoring API.

This is a low-level API; you should only extend it if you want to implement an information retrieval model. If you are instead looking for a convenient way to alter Lucene's scoring, consider just tweaking the default implementation, BM25Similarity, or extending SimilarityBase, which makes it easy to compute a score from index statistics.

Similarity determines how Lucene weights terms, and Lucene interacts with this class at both index-time and query-time.

Indexing Time

At indexing time, the indexer calls computeNorm(FieldInvertState), allowing the Similarity implementation to set a per-document value for the field that will later be accessible via LeafReader.getNormValues(String). Lucene makes no assumption about what is in this norm, but it is most useful for encoding length normalization information.

Implementations should carefully consider how the normalization is encoded: while Lucene's BM25Similarity encodes length normalization information with SmallFloat into a single byte, this might not be suitable for all purposes.

Many formulas require the average document length, which can be computed via a combination of CollectionStatistics.sumTotalTermFreq() and CollectionStatistics.docCount().
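
To make the index-time half concrete, here is a minimal sketch, assuming a SmallFloat-compressed field length in the spirit of BM25Similarity; the class name LengthNormSimilarity is invented, and only computeNorm is shown (the query-time half is sketched later on this page):

```java
import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.lucene.util.SmallFloat;

// Hypothetical sketch: only the index-time half of a custom Similarity.
public abstract class LengthNormSimilarity extends Similarity {

  @Override
  public long computeNorm(FieldInvertState state) {
    // Compress the field length into a single byte, as BM25Similarity does.
    // Longer fields end up with greater norms in the unsigned order, which the
    // query-time side must turn into lower (or equal) scores.
    return SmallFloat.intToByte4(state.getLength());
  }
}
```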

Additional scoring factors can be stored in named NumericDocValuesFields and accessed at query-time with LeafReader.getNumericDocValues(String). However, this should not be done in the Similarity itself but externally, for instance by using FunctionScoreQuery.
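
As a sketch of that external route, a query can be boosted by a stored per-document factor along the following lines; the field name "popularity" and the helper class are invented for illustration:

```java
import org.apache.lucene.queries.function.FunctionScoreQuery;
import org.apache.lucene.search.DoubleValuesSource;
import org.apache.lucene.search.Query;

// Hypothetical helper: multiply the scores of an arbitrary query by the value
// of a NumericDocValuesField named "popularity", outside of the Similarity.
final class PopularityBoost {
  static Query boostByPopularity(Query original) {
    return FunctionScoreQuery.boostByValue(
        original,
        DoubleValuesSource.fromLongField("popularity"));
  }
}
```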

Finally, using index-time boosts (either by folding them into the normalization byte or via DocValues) is an inefficient way to boost the scores of different fields if the boost will be the same for every document. Instead, the Similarity can simply take a constant boost parameter, and PerFieldSimilarityWrapper can return different instances with different boosts depending upon the field name.
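
A sketch of that pattern, assuming the two wrapped instances were constructed with different constant boost parameters (the field name "title" is invented):

```java
import org.apache.lucene.search.similarities.PerFieldSimilarityWrapper;
import org.apache.lucene.search.similarities.Similarity;

// Hypothetical wiring: hand out differently-boosted Similarity instances per field.
public class PerFieldBoostSimilarity extends PerFieldSimilarityWrapper {
  private final Similarity titleSimilarity;
  private final Similarity defaultSimilarity;

  // Both instances are assumed to share the same formula but carry different
  // constant boosts, as described above.
  public PerFieldBoostSimilarity(Similarity titleSimilarity, Similarity defaultSimilarity) {
    this.titleSimilarity = titleSimilarity;
    this.defaultSimilarity = defaultSimilarity;
  }

  @Override
  public Similarity get(String field) {
    return "title".equals(field) ? titleSimilarity : defaultSimilarity;
  }
}
```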

Query Time

At query-time, queries interact with the Similarity via these steps (a minimal sketch follows the list):

- The scorer(float, CollectionStatistics, TermStatistics...) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc.) across the entire collection. The TermStatistics and CollectionStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimScorer object.
- Then Similarity.SimScorer.score(float, long) is called for every matching document to compute its score.
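
Putting the two steps together with the hypothetical LengthNormSimilarity from the indexing-time section, a minimal query-time sketch could look as follows; the scoring formula is invented and is only meant to show where each statistic is computed:

```java
import org.apache.lucene.search.CollectionStatistics;
import org.apache.lucene.search.TermStatistics;
import org.apache.lucene.util.SmallFloat;

// Hypothetical query-time half of the LengthNormSimilarity sketched earlier.
public class SimpleLengthNormSimilarity extends LengthNormSimilarity {

  @Override
  public SimScorer scorer(float boost, CollectionStatistics collectionStats, TermStatistics... termStats) {
    // Step 1: called a single time; fold collection-level statistics into a
    // query weight so that per-document scoring needs no additional I/O.
    double idf = 0;
    for (TermStatistics term : termStats) {
      idf += Math.log(1 + (collectionStats.docCount() - term.docFreq() + 0.5)
          / (term.docFreq() + 0.5));
    }
    final float weight = (float) (boost * idf);

    // Average document length from the statistics mentioned in the
    // indexing-time section.
    final double avgdl = (double) collectionStats.sumTotalTermFreq()
        / Math.max(1, collectionStats.docCount());

    return new SimScorer() {
      @Override
      public float score(float freq, long norm) {
        // Step 2: called for every matching document; decode the length that
        // computeNorm(...) stored. A greater unsigned norm decodes to a greater
        // length and therefore a lower score, as the norm contract requires.
        int length = SmallFloat.byte4ToInt((byte) norm);
        return (float) (weight * freq / (freq + length / avgdl));
      }
    };
  }
}
```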

Explanations

When IndexSearcher.explain(org.apache.lucene.search.Query, int) is called, queries consult the Similarity's SimScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.
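
As an illustration (a sketch, not Lucene's built-in behavior), the explanation can be enriched by overriding SimScorer.explain; the decorator below assumes the SmallFloat length encoding used in the earlier sketches:

```java
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.similarities.Similarity.SimScorer;
import org.apache.lucene.util.SmallFloat;

// Hypothetical decorator: adds a more detailed explain() on top of an existing
// SimScorer, assuming the norm is a SmallFloat-encoded field length.
public class ExplainingSimScorer extends SimScorer {
  private final SimScorer delegate;

  public ExplainingSimScorer(SimScorer delegate) {
    this.delegate = delegate;
  }

  @Override
  public float score(float freq, long norm) {
    return delegate.score(freq, norm);
  }

  @Override
  public Explanation explain(Explanation freq, long norm) {
    // Recompute the score and expose the inputs that produced it.
    float score = score(freq.getValue().floatValue(), norm);
    return Explanation.match(score,
        "hypothetical score(freq=" + freq.getValue() + ", norm=" + norm + "), computed from:",
        freq,
        Explanation.match(SmallFloat.byte4ToInt((byte) norm), "field length decoded from the norm"));
  }
}
```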

See Also:
IndexWriterConfig.setSimilarity(Similarity), IndexSearcher.setSimilarity(Similarity)

WARNING: This API is experimental and might change in incompatible ways in the next release.
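
For reference, a brief sketch of wiring a Similarity into both indexing and searching via the two setSimilarity methods listed above; the in-memory directory and the placeholder comments are illustrative only:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

// Sketch: the same Similarity should normally be set both at index time
// (IndexWriterConfig) and at search time (IndexSearcher).
public class SimilarityWiring {
  public static void main(String[] args) throws Exception {
    Similarity similarity = new BM25Similarity(); // or a custom implementation

    Directory dir = new ByteBuffersDirectory();
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
    config.setSimilarity(similarity);            // consulted via computeNorm(...) while indexing
    try (IndexWriter writer = new IndexWriter(dir, config)) {
      // ... add documents here ...
      writer.commit();
    }

    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      searcher.setSimilarity(similarity);        // consulted via scorer(...) while searching
      // ... run queries with searcher ...
    }
  }
}
```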
Nested Class Summary

Nested Classes:
- static class Similarity.SimScorer: Stores the weight for a query across the indexed collection.
Constructor Summary

Constructors:
- protected Similarity(): Sole constructor.
Method Summary

Methods:
- abstract long computeNorm(FieldInvertState state): Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
- abstract Similarity.SimScorer scorer(float boost, CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.
Method Detail
computeNorm
public abstract long computeNorm(FieldInvertState state)
Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).

Matches in longer fields are less precise, so implementations of this method usually set larger norm values when state.getLength() is large (yielding lower scores), and smaller values when state.getLength() is small.

Note that for a given term-document frequency, greater unsigned norms must produce scores that are lower or equal, i.e. for two encoded norms n1 and n2 such that Long.compareUnsigned(n1, n2) > 0, SimScorer.score(freq, n1) <= SimScorer.score(freq, n2) must hold for any legal freq.

0 is not a legal norm, so 1 is the norm that produces the highest scores.

Parameters:
state - current processing state for this field

Returns:
computed norm value

WARNING: This API is experimental and might change in incompatible ways in the next release.
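
A small, self-contained check of this contract, reusing the invented length-based encoding from the earlier sketches and a made-up score that decreases as the decoded length grows:

```java
import org.apache.lucene.util.SmallFloat;

// Hedged illustration of the norm contract: a longer field produces a greater
// unsigned norm, which must not produce a higher score for the same freq.
public class NormContractDemo {
  // Made-up score shape: larger decoded length means a lower score.
  static float score(float freq, long norm) {
    int length = SmallFloat.byte4ToInt((byte) norm);
    return freq / (freq + length / 100f);
  }

  public static void main(String[] args) {
    long shortNorm = SmallFloat.intToByte4(10);   // norm of a 10-term field
    long longNorm = SmallFloat.intToByte4(1000);  // norm of a 1000-term field
    System.out.println(Long.compareUnsigned(longNorm, shortNorm) > 0); // true: greater unsigned norm
    System.out.println(score(3f, longNorm) <= score(3f, shortNorm));   // true: lower or equal score
  }
}
```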
scorer
public abstract Similarity.SimScorer scorer(float boost, CollectionStatistics collectionStats, TermStatistics... termStats)
Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.

Parameters:
boost - a multiplicative factor to apply to the produced scores
collectionStats - collection-level statistics, such as the number of tokens in the collection
termStats - term-level statistics, such as the document frequency of a term across the collection

Returns:
a Similarity.SimScorer object with the information this Similarity needs to score a query