public class BM25Similarity extends Similarity
Nested classes/interfaces inherited from class Similarity: Similarity.SimScorer, Similarity.SimWeight
Modifier and Type | Field and Description
---|---
protected boolean | discountOverlaps: True if overlap tokens (tokens with a position increment of zero) are discounted from the document's length.
Constructor and Description
---
BM25Similarity(): BM25 with these default values: k1 = 1.2, b = 0.75
BM25Similarity(float k1, float b): BM25 with the supplied parameter values.
Modifier and Type | Method and Description
---|---
protected float | avgFieldLength(CollectionStatistics collectionStats): The default implementation computes the average as sumTotalTermFreq / docCount, or returns 1 if the index does not store sumTotalTermFreq (any field that omits frequency information).
long | computeNorm(FieldInvertState state): Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
Similarity.SimWeight | computeWeight(CollectionStatistics collectionStats, TermStatistics... termStats): Compute any collection-level weight (e.g. IDF, average document length, etc) needed for scoring a query.
protected float | decodeNormValue(byte b)
protected byte | encodeNormValue(float boost, int fieldLength): The default implementation encodes boost / sqrt(length) with SmallFloat.floatToByte315(float).
float | getB(): Returns the b parameter.
boolean | getDiscountOverlaps(): Returns true if overlap tokens are discounted from the document's length.
float | getK1(): Returns the k1 parameter.
protected float | idf(long docFreq, long docCount): Implemented as log(1 + (docCount - docFreq + 0.5)/(docFreq + 0.5)).
Explanation | idfExplain(CollectionStatistics collectionStats, TermStatistics termStats): Computes a score factor for a simple term and returns an explanation for that score factor.
Explanation | idfExplain(CollectionStatistics collectionStats, TermStatistics[] termStats): Computes a score factor for a phrase.
protected float | scorePayload(int doc, int start, int end, BytesRef payload): The default implementation returns 1.
void | setDiscountOverlaps(boolean v): Sets whether overlap tokens (tokens with 0 position increment) are ignored when computing norm.
Similarity.SimScorer | simScorer(Similarity.SimWeight stats, LeafReaderContext context): Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.
protected float | sloppyFreq(int distance): Implemented as 1 / (distance + 1).
String | toString()
Methods inherited from class Similarity: coord, queryNorm
protected boolean discountOverlaps
public BM25Similarity(float k1, float b)
BM25 with the supplied parameter values.
Parameters:
k1 - Controls non-linear term frequency normalization (saturation).
b - Controls to what degree document length normalizes tf values.
Throws:
IllegalArgumentException - if k1 is infinite or negative, or if b is not within the range [0..1]
public BM25Similarity()
k1 = 1.2
b = 0.75
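For illustration, a minimal sketch of wiring a BM25Similarity with custom k1 and b into indexing and search. The class name, index path, and analyzer are placeholders; the same Similarity instance is typically set on both IndexWriterConfig and IndexSearcher so that index-time norms and query-time scoring agree.

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.store.FSDirectory;

public class BM25Setup {
    public static void main(String[] args) throws Exception {
        // Heavier tf saturation (k1) and stronger length normalization (b) than the defaults.
        BM25Similarity bm25 = new BM25Similarity(1.5f, 0.9f);

        FSDirectory dir = FSDirectory.open(Paths.get("/tmp/bm25-demo")); // placeholder path
        IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
        iwc.setSimilarity(bm25);                       // used while encoding norms at index time
        try (IndexWriter writer = new IndexWriter(dir, iwc)) {
            // ... add documents here ...
        }

        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            searcher.setSimilarity(bm25);              // used while scoring at query time
            // ... run queries here ...
        }
        dir.close();
    }
}
```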
protected float idf(long docFreq, long docCount)
Implemented as log(1 + (docCount - docFreq + 0.5)/(docFreq + 0.5)).

protected float sloppyFreq(int distance)
Implemented as 1 / (distance + 1).
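As a worked example of the two formulas above, in plain Java and independent of any Lucene classes: with docCount = 1000 and docFreq = 10, the idf is log(1 + 990.5/10.5) ≈ 4.56, and a sloppy phrase match at distance 2 contributes 1/(2 + 1) ≈ 0.33.

```java
public class BM25Formulas {
    // Mirrors the documented idf: log(1 + (docCount - docFreq + 0.5)/(docFreq + 0.5))
    static float idf(long docFreq, long docCount) {
        return (float) Math.log(1 + (docCount - docFreq + 0.5d) / (docFreq + 0.5d));
    }

    // Mirrors the documented sloppy frequency: 1 / (distance + 1)
    static float sloppyFreq(int distance) {
        return 1.0f / (distance + 1);
    }

    public static void main(String[] args) {
        System.out.println(idf(10, 1000));  // ~4.56: rarer terms get a larger idf
        System.out.println(sloppyFreq(2));  // ~0.33: sloppier phrase matches count less
    }
}
```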
protected float scorePayload(int doc, int start, int end, BytesRef payload)
The default implementation returns 1.
protected float avgFieldLength(CollectionStatistics collectionStats)
The default implementation computes the average as sumTotalTermFreq / docCount, or returns 1 if the index does not store sumTotalTermFreq (any field that omits frequency information).

protected byte encodeNormValue(float boost, int fieldLength)
The default implementation encodes boost / sqrt(length) with SmallFloat.floatToByte315(float). This is compatible with Lucene's default implementation. If you change this, then you should change decodeNormValue(byte) to match.

protected float decodeNormValue(byte b)
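A small sketch of the lossy one-byte encoding described above. It assumes the Lucene 6.x SmallFloat utility; the decode shown is simply SmallFloat's inverse and is meant only to illustrate the precision loss, not to reproduce BM25Similarity's internal norm table.

```java
import org.apache.lucene.util.SmallFloat;

public class NormEncodingSketch {
    // Mirrors the documented default: encode boost / sqrt(fieldLength) into a single byte.
    static byte encode(float boost, int fieldLength) {
        return SmallFloat.floatToByte315(boost / (float) Math.sqrt(fieldLength));
    }

    public static void main(String[] args) {
        byte norm = encode(1.0f, 100);                    // boost 1.0, 100-term field
        float decoded = SmallFloat.byte315ToFloat(norm);  // the 1-byte encoding is lossy
        System.out.println("decoded boost/sqrt(length) ~ " + decoded); // close to 0.1, not exact
    }
}
```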
public void setDiscountOverlaps(boolean v)
Sets whether overlap tokens (tokens with 0 position increment) are ignored when computing norm.

public boolean getDiscountOverlaps()
Returns true if overlap tokens are discounted from the document's length.
See Also: setDiscountOverlaps(boolean)
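For example, a hypothetical configuration (not a recommendation) that counts zero-position-increment tokens, such as injected synonyms, toward the field length:

```java
BM25Similarity sim = new BM25Similarity();
sim.setDiscountOverlaps(false);      // overlap tokens now add to the field length used for norms
assert !sim.getDiscountOverlaps();
```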
public final long computeNorm(FieldInvertState state)
Description copied from class: Similarity
Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.
Specified by: computeNorm in class Similarity
Parameters:
state - current processing state for this field
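As a rough illustration of the length dependence described above, here is a hypothetical standalone method (not the class's actual computeNorm) that reuses the documented boost / sqrt(length) quantity and optionally discounts overlap tokens:

```java
public class NormLengthSketch {
    // Hypothetical sketch: the value that ultimately gets byte-encoded as the norm
    // shrinks as the field grows longer.
    static float preEncodedNorm(float boost, int length, int numOverlap, boolean discountOverlaps) {
        int numTerms = discountOverlaps ? length - numOverlap : length;
        return boost / (float) Math.sqrt(numTerms);
    }

    public static void main(String[] args) {
        System.out.println(preEncodedNorm(1f, 16, 0, true));     // 0.25 (short field, larger value)
        System.out.println(preEncodedNorm(1f, 10_000, 0, true)); // 0.01 (long field, smaller value)
    }
}
```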
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics termStats)
Computes a score factor for a simple term and returns an explanation for that score factor.
The default implementation uses:
idf(docFreq, docCount);
Note that CollectionStatistics.docCount() is used instead of IndexReader#numDocs() because TermStatistics.docFreq() is also used, and when the latter is inaccurate, so is CollectionStatistics.docCount(), and in the same direction. In addition, CollectionStatistics.docCount() does not skew when fields are sparse.
Parameters:
collectionStats - collection-level statistics
termStats - term-level statistics for the term
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics[] termStats)
Computes a score factor for a phrase.
The default implementation sums the idf factor for each term in the phrase.
Parameters:
collectionStats - collection-level statistics
termStats - term-level statistics for the terms in the phrase
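To make the summing concrete, a tiny standalone illustration in plain Java with hypothetical document-frequency numbers: the phrase idf is simply the sum of the per-term idf values.

```java
public class PhraseIdfSketch {
    static double idf(long docFreq, long docCount) {
        return Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
    }

    public static void main(String[] args) {
        long docCount = 1000;
        long[] docFreqs = {10, 200};          // hypothetical df for the two phrase terms
        double phraseIdf = 0;
        for (long df : docFreqs) {
            phraseIdf += idf(df, docCount);   // phrase idf = sum of per-term idf values
        }
        System.out.println(phraseIdf);        // ~4.56 + ~1.61 = ~6.2
    }
}
```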
public final Similarity.SimWeight computeWeight(CollectionStatistics collectionStats, TermStatistics... termStats)
Description copied from class: Similarity
Compute any collection-level weight (e.g. IDF, average document length, etc) needed for scoring a query.
Specified by: computeWeight in class Similarity
Parameters:
collectionStats - collection-level statistics, such as the number of tokens in the collection.
termStats - term-level statistics, such as the document frequency of a term across the collection.
public final Similarity.SimScorer simScorer(Similarity.SimWeight stats, LeafReaderContext context) throws IOException
Description copied from class: Similarity
Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.
Specified by: simScorer in class Similarity
Parameters:
stats - collection information from Similarity.computeWeight(CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored.
Throws:
IOException - if there is a low-level I/O error
public final float getK1()
Returns the k1 parameter.
See Also: BM25Similarity(float, float)

public final float getB()
Returns the b parameter.
See Also: BM25Similarity(float, float)
Copyright © 2000-2016 Apache Software Foundation. All Rights Reserved.