public abstract class Similarity extends Object

Expert: Scoring API.

This is a low-level API; you should only extend it if you want to implement an information retrieval model. If you are instead looking for a convenient way to alter Lucene's scoring, consider extending a higher-level implementation such as TFIDFSimilarity, which implements the vector space model with this API, or just tweaking the default implementation: DefaultSimilarity.
Similarity determines how Lucene weights terms, and Lucene interacts with this class at both index-time and query-time.

At indexing time, the indexer calls computeNorm(FieldInvertState, Norm), allowing the Similarity implementation to set a per-document value for the field that will later be accessible via AtomicReader.normValues(String). Lucene makes no assumption about what is in this norm, but it is most useful for encoding length normalization information.

Implementations should carefully consider how the normalization is encoded: while Lucene's classical TFIDFSimilarity encodes a combination of index-time boost and length normalization information with SmallFloat into a single byte, this might not be suitable for all purposes.
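As a concrete illustration of the single-byte trade-off, here is a minimal, self-contained sketch (not Lucene's actual SmallFloat code) that quantizes a 1/sqrt(length) normalization factor into one byte and shows the precision that such an encoding gives up:

```java
// Illustrative sketch only: quantize a length-normalization factor
// 1/sqrt(numTerms) into a single byte, as a one-byte norm encoding must.
// Lucene's SmallFloat uses a tiny custom float format instead of this
// simple linear quantization.
public class NormEncoding {

    // Encode a norm in (0, 1] as an unsigned byte 1..255; 0 means "no norm".
    static byte encodeNorm(float norm) {
        if (norm <= 0f) return 0;
        int q = Math.round(norm * 255f);
        return (byte) Math.max(1, Math.min(255, q));
    }

    // Decode the byte back to an approximate float norm.
    static float decodeNorm(byte b) {
        return (b & 0xFF) / 255f;
    }

    public static void main(String[] args) {
        int fieldLength = 100;  // number of terms indexed in the field
        float norm = (float) (1.0 / Math.sqrt(fieldLength));
        byte stored = encodeNorm(norm);
        float restored = decodeNorm(stored);
        // The round trip is lossy: the restored value only approximates norm.
        System.out.println(norm + " -> " + stored + " -> " + restored);
    }
}
```

The lossiness is the point: a byte per document per field is cheap, but models needing finer-grained normalization may want a different encoding.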
Many formulas require the use of average document length, which can be computed via a combination of CollectionStatistics.sumTotalTermFreq() and CollectionStatistics.maxDoc() or CollectionStatistics.docCount(), depending upon whether the average should reflect field sparsity.
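A short sketch of the two choices, using hypothetical statistics (in Lucene these values would come from CollectionStatistics.sumTotalTermFreq(), maxDoc(), and docCount()):

```java
// Sketch: average field length computed two ways. When a field is sparse
// (not every document has it), dividing by maxDoc dilutes the average,
// while dividing by docCount reflects only documents containing the field.
public class AvgFieldLength {

    static double avg(long sumTotalTermFreq, long docs) {
        return (double) sumTotalTermFreq / docs;
    }

    public static void main(String[] args) {
        long sumTotalTermFreq = 50_000;  // total tokens indexed for the field
        long maxDoc = 1_000;             // all documents in the index
        long docCount = 500;             // documents that actually have the field
        System.out.println(avg(sumTotalTermFreq, maxDoc));   // 50.0
        System.out.println(avg(sumTotalTermFreq, docCount)); // 100.0
    }
}
```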
Additional scoring factors can be stored in named *DocValuesFields (such as ByteDocValuesField or FloatDocValuesField), and accessed at query-time with AtomicReader.docValues(String).
Finally, using index-time boosts (either via folding into the normalization byte or via DocValues) is an inefficient way to boost the scores of different fields if the boost will be the same for every document. Instead, the Similarity can simply take a constant boost parameter C, and PerFieldSimilarityWrapper can return different instances with different boosts depending upon field name.
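A dependency-free sketch of this idea, with a hypothetical BoostedSim standing in for a Similarity constructed with a constant boost C (the get(String) lookup mirrors the shape of PerFieldSimilarityWrapper, but this is not Lucene code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: instead of encoding a constant field boost into every document's
// norm, keep one boosted scorer per field, PerFieldSimilarityWrapper-style.
public class PerFieldBoost {

    // Hypothetical stand-in for a Similarity built with a constant boost C.
    static final class BoostedSim {
        final float boost;
        BoostedSim(float boost) { this.boost = boost; }
        float score(float rawScore) { return boost * rawScore; }
    }

    private final Map<String, BoostedSim> perField = new HashMap<>();
    private final BoostedSim defaultSim = new BoostedSim(1.0f);

    void put(String field, float boost) { perField.put(field, new BoostedSim(boost)); }

    // Analogous in shape to PerFieldSimilarityWrapper.get(String field).
    BoostedSim get(String field) { return perField.getOrDefault(field, defaultSim); }

    public static void main(String[] args) {
        PerFieldBoost wrapper = new PerFieldBoost();
        wrapper.put("title", 2.0f);
        System.out.println(wrapper.get("title").score(1.5f)); // 3.0
        System.out.println(wrapper.get("body").score(1.5f));  // 1.5
    }
}
```

The boost is stored once per field rather than once per document, which is exactly why this beats folding a constant boost into every norm.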
At query-time, Queries interact with the Similarity via these steps:

1. The computeWeight(float, CollectionStatistics, TermStatistics...) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc.) across the entire collection. The TermStatistics and CollectionStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimWeight object.
2. The query normalization process occurs a single time: Similarity.SimWeight.getValueForNormalization() is called for each query leaf node, queryNorm(float) is called for the top-level query, and finally Similarity.SimWeight.normalize(float, float) passes down the normalization value and any top-level boosts (e.g. from enclosing BooleanQuerys).
3. For each segment in the index, the Query creates an exactSimScorer(SimWeight, AtomicReaderContext) (for queries with exact frequencies such as TermQuerys and exact PhraseQueries) or a sloppySimScorer(SimWeight, AtomicReaderContext) (for queries with sloppy frequencies such as SpanQuerys and sloppy PhraseQueries). The score() method is called for each matching document.
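The normalization in step 2 can be made concrete with a small sketch. The default Similarity disables queryNorm (returning 1); the formula below is the TFIDFSimilarity-style choice, 1/sqrt(sumOfSquaredWeights), shown here as standalone arithmetic:

```java
// Sketch of TFIDFSimilarity-style query normalization. The base Similarity
// returns 1 here; overriding implementations get the sum of each leaf's
// getValueForNormalization() and return a factor that is passed back down
// via SimWeight.normalize(float, float).
public class QueryNormDemo {

    static float queryNorm(float valueForNormalization) {
        return (float) (1.0 / Math.sqrt(valueForNormalization));
    }

    public static void main(String[] args) {
        // Suppose two query terms with raw weights 3.0 and 4.0:
        float sumOfSquares = 3f * 3f + 4f * 4f;      // 25.0
        System.out.println(queryNorm(sumOfSquares)); // 0.2
    }
}
```

Note that this only attempts to make scores from different queries comparable; it does not change the ranking within a single query, which is why many models simply leave it disabled.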
When IndexSearcher.explain(org.apache.lucene.search.Query, int) is called, queries consult the Similarity's DocScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.
See Also:
IndexWriterConfig.setSimilarity(Similarity), IndexSearcher.setSimilarity(Similarity)
| Modifier and Type | Class and Description |
|---|---|
| static class | Similarity.ExactSimScorer: API for scoring exact queries such as TermQuery and exact PhraseQuery. |
| static class | Similarity.SimWeight: Stores the weight for a query across the indexed collection. |
| static class | Similarity.SloppySimScorer: API for scoring "sloppy" queries such as SpanQuery and sloppy PhraseQuery. |
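To make the exact-scorer role concrete, here is a hedged, dependency-free sketch: a scorer that holds a collection-level weight computed once (as a SimWeight would) and scores each document from its exact term frequency. The sqrt(tf) * idf formula is illustrative only, not Lucene's implementation:

```java
// Illustrative only: a TFIDF-flavored exact scorer. In Lucene this logic
// lives in Similarity.ExactSimScorer.score(int doc, int freq), with the
// collection-level weight (e.g. idf) computed once in computeWeight(...)
// and carried in a SimWeight.
public class ExactScorerSketch {

    static final class Scorer {
        final float idf;            // computed once per query, not per document
        Scorer(float idf) { this.idf = idf; }

        // freq is the exact within-document term frequency.
        float score(int doc, int freq) {
            return (float) Math.sqrt(freq) * idf;
        }
    }

    public static void main(String[] args) {
        Scorer s = new Scorer(2.0f);        // hypothetical idf
        System.out.println(s.score(0, 4));  // sqrt(4) * 2.0 = 4.0
        System.out.println(s.score(1, 1));  // sqrt(1) * 2.0 = 2.0
    }
}
```

A sloppy scorer has the same shape but receives an approximate (sloppy) frequency, e.g. from a SpanQuery or sloppy PhraseQuery match.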
| Constructor and Description |
|---|
| Similarity(): Sole constructor. |
| Modifier and Type | Method and Description |
|---|---|
| abstract void | computeNorm(FieldInvertState state, Norm norm): Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState). |
| abstract Similarity.SimWeight | computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query. |
| float | coord(int overlap, int maxOverlap): Hook to integrate coordinate-level matching. |
| abstract Similarity.ExactSimScorer | exactSimScorer(Similarity.SimWeight weight, AtomicReaderContext context): Creates a new Similarity.ExactSimScorer to score matching documents from a segment of the inverted index. |
| float | queryNorm(float valueForNormalization): Computes the normalization value for a query given the sum of the normalized weights Similarity.SimWeight.getValueForNormalization() of each of the query terms. |
| abstract Similarity.SloppySimScorer | sloppySimScorer(Similarity.SimWeight weight, AtomicReaderContext context): Creates a new Similarity.SloppySimScorer to score matching documents from a segment of the inverted index. |
public Similarity()

Sole constructor. (For invocation by subclass constructors, typically implicit.)
public float coord(int overlap, int maxOverlap)

Hook to integrate coordinate-level matching. By default this is disabled (returns 1), as with most modern models this will only skew performance, but some implementations such as TFIDFSimilarity override this.

Parameters:
overlap - the number of query terms matched in the document
maxOverlap - the total number of terms in the query

public float queryNorm(float valueForNormalization)
Computes the normalization value for a query given the sum of the normalized weights Similarity.SimWeight.getValueForNormalization() of each of the query terms. This value is passed back to the weight (Similarity.SimWeight.normalize(float, float)) of each query term, to provide a hook to attempt to make scores from different queries comparable.

By default this is disabled (returns 1), but some implementations such as TFIDFSimilarity override this.

Parameters:
valueForNormalization - the sum of the term normalization values

public abstract void computeNorm(FieldInvertState state, Norm norm)
Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).

Implementations should calculate a norm value based on the field state and set that value to the given Norm.

Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.

Parameters:
state - current processing state for this field
norm - holds the computed norm value when this method returns

public abstract Similarity.SimWeight computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats)
Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.

Parameters:
queryBoost - the query-time boost.
collectionStats - collection-level statistics, such as the number of tokens in the collection.
termStats - term-level statistics, such as the document frequency of a term across the collection.

public abstract Similarity.ExactSimScorer exactSimScorer(Similarity.SimWeight weight, AtomicReaderContext context) throws IOException
Creates a new Similarity.ExactSimScorer to score matching documents from a segment of the inverted index.

Parameters:
weight - collection information from computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored.

Returns:
an ExactSimScorer for scoring documents across context

Throws:
IOException - if there is a low-level I/O error

public abstract Similarity.SloppySimScorer sloppySimScorer(Similarity.SimWeight weight, AtomicReaderContext context) throws IOException
Creates a new Similarity.SloppySimScorer to score matching documents from a segment of the inverted index.

Parameters:
weight - collection information from computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored.

Returns:
a SloppySimScorer for scoring documents across context

Throws:
IOException - if there is a low-level I/O error

Copyright © 2000-2012 Apache Software Foundation. All Rights Reserved.