public class BM25Similarity extends Similarity

Nested classes/interfaces inherited from class Similarity:
Similarity.ExactSimScorer, Similarity.SimWeight, Similarity.SloppySimScorer
Modifier and Type | Field and Description
---|---
protected boolean | discountOverlaps: True if overlap tokens (tokens with a position increment of zero) are discounted from the document's length.
Constructor and Description
---
BM25Similarity(): BM25 with these default values: k1 = 1.2, b = 0.75.
BM25Similarity(float k1, float b): BM25 with the supplied parameter values.
Modifier and Type | Method and Description
---|---
protected float | avgFieldLength(CollectionStatistics collectionStats): The default implementation computes the average as sumTotalTermFreq / maxDoc, or returns 1 if the index does not store sumTotalTermFreq (Lucene 3.x indexes or any field that omits frequency information).
void | computeNorm(FieldInvertState state, Norm norm): Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
Similarity.SimWeight | computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.
protected float | decodeNormValue(byte b)
protected byte | encodeNormValue(float boost, int fieldLength): The default implementation encodes boost / sqrt(length) with SmallFloat.floatToByte315(float).
Similarity.ExactSimScorer | exactSimScorer(Similarity.SimWeight stats, AtomicReaderContext context): Creates a new Similarity.ExactSimScorer to score matching documents from a segment of the inverted index.
float | getB(): Returns the b parameter.
boolean | getDiscountOverlaps(): Returns true if overlap tokens are discounted from the document's length.
float | getK1(): Returns the k1 parameter.
protected float | idf(long docFreq, long numDocs): Implemented as log(1 + (numDocs - docFreq + 0.5)/(docFreq + 0.5)).
Explanation | idfExplain(CollectionStatistics collectionStats, TermStatistics termStats): Computes a score factor for a simple term and returns an explanation for that score factor.
Explanation | idfExplain(CollectionStatistics collectionStats, TermStatistics[] termStats): Computes a score factor for a phrase.
protected float | scorePayload(int doc, int start, int end, BytesRef payload): The default implementation returns 1.
void | setDiscountOverlaps(boolean v): Sets whether overlap tokens (tokens with a position increment of zero) are ignored when computing norms.
protected float | sloppyFreq(int distance): Implemented as 1 / (distance + 1).
Similarity.SloppySimScorer | sloppySimScorer(Similarity.SimWeight stats, AtomicReaderContext context): Creates a new Similarity.SloppySimScorer to score matching documents from a segment of the inverted index.
String | toString()
Methods inherited from class Similarity:
coord, queryNorm
protected boolean discountOverlaps
True if overlap tokens (tokens with a position increment of zero) are discounted from the document's length.
public BM25Similarity(float k1, float b)
BM25 with the supplied parameter values.
Parameters:
k1 - Controls non-linear term frequency normalization (saturation).
b - Controls to what degree document length normalizes tf values.

public BM25Similarity()
BM25 with these default values: k1 = 1.2, b = 0.75.
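As a usage sketch (the helper name and the parameter values below are illustrative, not part of this API): the same BM25Similarity instance is normally installed on both the indexing side and the searching side.

```java
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.BM25Similarity;

public class Bm25Setup {
  // Hypothetical helper: apply the same BM25 settings at index time and at search time.
  static void applyBm25(IndexWriterConfig iwc, IndexSearcher searcher) {
    // new BM25Similarity() would use the defaults k1 = 1.2, b = 0.75.
    BM25Similarity bm25 = new BM25Similarity(1.4f, 0.4f);
    iwc.setSimilarity(bm25);      // affects the norms written while indexing
    searcher.setSimilarity(bm25); // affects how queries are scored
  }
}
```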
protected float idf(long docFreq, long numDocs)
Implemented as log(1 + (numDocs - docFreq + 0.5)/(docFreq + 0.5)).
protected float sloppyFreq(int distance)
Implemented as 1 / (distance + 1).
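A standalone sketch that re-derives the two documented formulas above for a few sample inputs (the counts are made up; this is not Lucene code):

```java
public class Bm25FormulaDemo {
  // idf as documented: log(1 + (numDocs - docFreq + 0.5)/(docFreq + 0.5))
  static float idf(long docFreq, long numDocs) {
    return (float) Math.log(1 + (numDocs - docFreq + 0.5d) / (docFreq + 0.5d));
  }

  // sloppyFreq as documented: 1 / (distance + 1)
  static float sloppyFreq(int distance) {
    return 1.0f / (distance + 1);
  }

  public static void main(String[] args) {
    System.out.println(idf(5, 1000));    // rare term in 1000 docs      -> ~5.20
    System.out.println(idf(900, 1000));  // very common term            -> ~0.11
    System.out.println(sloppyFreq(0));   // exact phrase position       -> 1.0
    System.out.println(sloppyFreq(3));   // terms three positions apart -> 0.25
  }
}
```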
protected float scorePayload(int doc, int start, int end, BytesRef payload)
The default implementation returns 1.
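Because the default payload score is the constant 1, a subclass may override this hook to fold payload data into scoring. A minimal sketch, assuming the payloads were written with PayloadHelper.encodeFloat(float) during analysis (the subclass name is hypothetical):

```java
import org.apache.lucene.analysis.payloads.PayloadHelper;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.util.BytesRef;

public class PayloadAwareBM25 extends BM25Similarity {
  @Override
  protected float scorePayload(int doc, int start, int end, BytesRef payload) {
    if (payload == null) {
      return 1f; // same behavior as the default implementation
    }
    // Decode a float boost stored in the payload by the analysis chain.
    return PayloadHelper.decodeFloat(payload.bytes, payload.offset);
  }
}
```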
protected float avgFieldLength(CollectionStatistics collectionStats)
The default implementation computes the average as sumTotalTermFreq / maxDoc, or returns 1 if the index does not store sumTotalTermFreq (Lucene 3.x indexes or any field that omits frequency information).
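A sketch of what that documented behavior amounts to, written as a hypothetical override; the actual method body in Lucene may differ:

```java
import org.apache.lucene.search.CollectionStatistics;
import org.apache.lucene.search.similarities.BM25Similarity;

public class AvgLengthBM25 extends BM25Similarity {
  @Override
  protected float avgFieldLength(CollectionStatistics collectionStats) {
    long sumTotalTermFreq = collectionStats.sumTotalTermFreq();
    if (sumTotalTermFreq <= 0) {
      // Not stored (e.g. Lucene 3.x segments, or a field that omits frequencies).
      return 1f;
    }
    return (float) (sumTotalTermFreq / (double) collectionStats.maxDoc());
  }
}
```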
protected byte encodeNormValue(float boost, int fieldLength)
The default implementation encodes boost / sqrt(length) with SmallFloat.floatToByte315(float). This is compatible with Lucene's default implementation. If you change this, then you should change decodeNormValue(byte) to match.

protected float decodeNormValue(byte b)
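To see what the documented single-byte encoding looks like in practice, a standalone sketch that applies boost / sqrt(length) and SmallFloat for a few field lengths (the demo class is ours, not Lucene's):

```java
import org.apache.lucene.util.SmallFloat;

public class NormEncodingDemo {
  public static void main(String[] args) {
    // Documented default encoding: boost / sqrt(length), packed into one byte.
    float boost = 1.0f;
    for (int length : new int[] {1, 10, 100, 1000}) {
      byte norm = SmallFloat.floatToByte315(boost / (float) Math.sqrt(length));
      float decoded = SmallFloat.byte315ToFloat(norm);
      // The packing is lossy: only 256 distinct norm values exist.
      System.out.println("length=" + length + " norm byte=" + norm + " decoded=" + decoded);
    }
  }
}
```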
public void setDiscountOverlaps(boolean v)
Sets whether overlap tokens (tokens with a position increment of zero) are ignored when computing norms.

public boolean getDiscountOverlaps()
Returns true if overlap tokens are discounted from the document's length.
See Also: setDiscountOverlaps(boolean)
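For example, an analysis chain that injects synonyms at the same position (position increment 0) produces overlap tokens; the snippet below shows reading the current setting and turning the discounting off:

```java
import org.apache.lucene.search.similarities.BM25Similarity;

public class OverlapConfigDemo {
  public static void main(String[] args) {
    BM25Similarity bm25 = new BM25Similarity();
    // By default overlap tokens (e.g. injected synonyms) are discounted from the length.
    System.out.println(bm25.getDiscountOverlaps());
    // Count every token, overlaps included, toward the document's length instead.
    bm25.setDiscountOverlaps(false);
  }
}
```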
public final void computeNorm(FieldInvertState state, Norm norm)
Description copied from class: Similarity
Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
Implementations should calculate a norm value based on the field state and set that value to the given Norm.
Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.
Specified by: computeNorm in class Similarity
Parameters:
state - current processing state for this field
norm - holds the computed norm value when this method returns
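computeNorm is final in BM25Similarity, so there is nothing to override here; the helper below only illustrates, as an assumption based on the discountOverlaps and encodeNormValue documentation above (not Lucene's actual source), how the field length that feeds the norm would be derived:

```java
import org.apache.lucene.index.FieldInvertState;

public class NormLengthSketch {
  // Assumed behavior: overlap tokens are optionally subtracted before the length
  // is folded into the single-byte norm via encodeNormValue(boost, length).
  static int effectiveLength(FieldInvertState state, boolean discountOverlaps) {
    return discountOverlaps
        ? state.getLength() - state.getNumOverlap()
        : state.getLength();
  }
}
```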
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics termStats)
Computes a score factor for a simple term and returns an explanation for that score factor.
The default implementation uses:
idf(docFreq, searcher.maxDoc());
Note that CollectionStatistics.maxDoc() is used instead of IndexReader#numDocs() because TermStatistics.docFreq() is also used; when the latter is inaccurate, so is CollectionStatistics.maxDoc(), and in the same direction. In addition, CollectionStatistics.maxDoc() is more efficient to compute.
Parameters:
collectionStats - collection-level statistics
termStats - term-level statistics for the term
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics[] termStats)
Computes a score factor for a phrase. The default implementation sums the idf factor for each term in the phrase.
Parameters:
collectionStats - collection-level statistics
termStats - term-level statistics for the terms in the phrase
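A standalone sketch of what "sums the idf factor for each term in the phrase" means, reusing the idf formula documented earlier (this reimplements the arithmetic outside Lucene rather than showing the real method body):

```java
public class PhraseIdfSketch {
  static float idf(long docFreq, long numDocs) {
    return (float) Math.log(1 + (numDocs - docFreq + 0.5d) / (docFreq + 0.5d));
  }

  // Phrase idf = sum of the per-term idf factors.
  static float phraseIdf(long[] docFreqs, long maxDoc) {
    float sum = 0f;
    for (long df : docFreqs) {
      sum += idf(df, maxDoc);
    }
    return sum;
  }

  public static void main(String[] args) {
    // e.g. the two terms of a phrase, in an index of 1000 documents
    System.out.println(phraseIdf(new long[] {5, 50}, 1000));
  }
}
```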
public final Similarity.SimWeight computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats)
Description copied from class: Similarity
Compute any collection-level weight (e.g. IDF, average document length, etc.) needed for scoring a query.
Specified by: computeWeight in class Similarity
Parameters:
queryBoost - the query-time boost.
collectionStats - collection-level statistics, such as the number of tokens in the collection.
termStats - term-level statistics, such as the document frequency of a term across the collection.
public final Similarity.ExactSimScorer exactSimScorer(Similarity.SimWeight stats, AtomicReaderContext context) throws IOException
Description copied from class: Similarity
Creates a new Similarity.ExactSimScorer to score matching documents from a segment of the inverted index.
Specified by: exactSimScorer in class Similarity
Parameters:
stats - collection information from Similarity.computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored.
Returns: ExactSimScorer for scoring documents across context
Throws:
IOException - if there is a low-level I/O error
public final Similarity.SloppySimScorer sloppySimScorer(Similarity.SimWeight stats, AtomicReaderContext context) throws IOException
Description copied from class: Similarity
Creates a new Similarity.SloppySimScorer to score matching documents from a segment of the inverted index.
Specified by: sloppySimScorer in class Similarity
Parameters:
stats - collection information from Similarity.computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored.
Returns: SloppySimScorer for scoring documents across context
Throws:
IOException - if there is a low-level I/O error
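Application code does not call these factory methods directly; IndexSearcher invokes them once per index segment while running a query. A self-contained sketch of that flow, in which the term query is handled by the exact scorer and the slop-enabled phrase query by the sloppy scorer (field name and document text are illustrative):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class Bm25SearchDemo {
  public static void main(String[] args) throws Exception {
    BM25Similarity bm25 = new BM25Similarity();

    Directory dir = new RAMDirectory();
    IndexWriterConfig iwc =
        new IndexWriterConfig(Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));
    iwc.setSimilarity(bm25);
    IndexWriter writer = new IndexWriter(dir, iwc);
    Document doc = new Document();
    doc.add(new TextField("body", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
    writer.addDocument(doc);
    writer.close();

    IndexReader reader = DirectoryReader.open(dir);
    IndexSearcher searcher = new IndexSearcher(reader);
    searcher.setSimilarity(bm25);

    // Term query: scored via the ExactSimScorer created for each segment.
    TopDocs exact = searcher.search(new TermQuery(new Term("body", "fox")), 10);

    // Phrase query with slop: scored via the SloppySimScorer.
    PhraseQuery phrase = new PhraseQuery();
    phrase.add(new Term("body", "quick"));
    phrase.add(new Term("body", "fox"));
    phrase.setSlop(2);
    TopDocs sloppy = searcher.search(phrase, 10);

    System.out.println("term hits: " + exact.totalHits + ", phrase hits: " + sloppy.totalHits);
    reader.close();
  }
}
```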
public float getK1()
Returns the k1 parameter.
See Also: BM25Similarity(float, float)
public float getB()
Returns the b parameter.
See Also: BM25Similarity(float, float)
Copyright © 2000-2012 Apache Software Foundation. All Rights Reserved.