

java.lang.Object
  org.apache.lucene.search.similarities.Similarity
    org.apache.lucene.search.similarities.TFIDFSimilarity
public abstract class TFIDFSimilarity
extends Similarity

Implementation of Similarity with the Vector Space Model.
Expert: Scoring API.
TFIDFSimilarity defines the components of Lucene scoring. Overriding computation of these components is a convenient way to alter Lucene scoring.
Suggested reading: Introduction To Information Retrieval, Chapter 6.
The following describes how Lucene scoring evolves from underlying information retrieval models to an (efficient) implementation. We first give a brief account of the VSM score, then derive from it Lucene's Conceptual Scoring Formula, from which, finally, Lucene's Practical Scoring Function evolves (the latter is connected directly with Lucene classes and methods).
Lucene combines the Boolean model (BM) of Information Retrieval with the Vector Space Model (VSM) of Information Retrieval: documents "approved" by BM are scored by VSM.
In VSM, documents and queries are represented as weighted vectors in a multi-dimensional space, where each distinct index term is a dimension, and weights are tf-idf values.
VSM does not require weights to be tf-idf values, but tf-idf values are believed to produce search results of high quality, and so Lucene uses tf-idf. tf and idf are described in more detail below, but for now, for completeness, let's just say that for a given term t and document (or query) x, tf(t,x) varies with the number of occurrences of term t in x (when one increases so does the other), and idf(t) similarly varies with the inverse of the number of index documents containing term t.
The VSM score of document d for query q is the Cosine Similarity of the weighted query vectors V(q) and V(d):

    cosine-similarity(q,d) = V(q) · V(d) / ( |V(q)| · |V(d)| )

Note: the above equation can be viewed as the dot product of the normalized weighted vectors, in the sense that dividing V(q) by its Euclidean norm is normalizing it to a unit vector.
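
To make the computation concrete, here is a minimal, self-contained Java sketch of the cosine similarity above. It assumes, purely for illustration, that V(q) and V(d) are dense float[] arrays of tf-idf weights over the same index terms; Lucene itself never materializes such vectors.

    // Cosine similarity of two dense tf-idf weight vectors (illustrative only).
    static float cosineSimilarity(float[] vq, float[] vd) {
      float dot = 0f, normQ = 0f, normD = 0f;
      for (int i = 0; i < vq.length; i++) {
        dot   += vq[i] * vd[i];    // V(q) · V(d)
        normQ += vq[i] * vq[i];
        normD += vd[i] * vd[i];
      }
      // |V(q)| and |V(d)| are the Euclidean norms of the two vectors.
      return (float) (dot / (Math.sqrt(normQ) * Math.sqrt(normD)));
    }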
Lucene refines the VSM score for both search quality and usability: instead of normalizing V(d) to a unit vector it applies a document length normalization factor (doc-len-norm(d)), and it supports index-time document boosts (doc-boost(d)) and search-time query boosts (query-boost(q)). Under the simplifying assumption of a single field in the index, we get Lucene's Conceptual scoring formula:

    score(q,d) = coord-factor(q,d) · query-boost(q) · ( V(q) · V(d) / |V(q)| ) · doc-len-norm(d) · doc-boost(d)

The conceptual formula is a simplification in the sense that (1) terms and documents are fielded and (2) boosts are usually per query term rather than per query.
We now describe how Lucene implements this conceptual scoring formula, and derive from it Lucene's Practical Scoring Function.
For efficient score computation some scoring components are computed and aggregated in advance, at indexing time or when the search starts.

Lucene's Practical Scoring Function is derived from the above; each of its factors corresponds to a part of the conceptual formula:

    score(q,d) = coord(q,d) · queryNorm(q) · ∑ ( tf(t in d) · idf(t)^2 · t.getBoost() · norm(t,d) )
                                            t in q
where:

tf(t in d) correlates to the term's frequency, the number of times term t appears in the currently scored document d. The default computation for tf(t in d) in DefaultSimilarity is:

    tf(t in d) = frequency^(1/2)

idf(t) stands for Inverse Document Frequency; it varies with the inverse of docFreq, the number of index documents containing term t. The default computation for idf(t) in DefaultSimilarity is:

    idf(t) = 1 + log( numDocs / (docFreq + 1) )
coord(q,d) is a score factor based on how many of the query terms are found in the specified document; it is computed by the Similarity in effect at search time.
queryNorm(q) is a normalizing factor used to make scores between queries comparable. The default computation in DefaultSimilarity produces a Euclidean norm:

    queryNorm(q) = queryNorm(sumOfSquaredWeights) = 1 / sumOfSquaredWeights^(1/2)
The sum of squared weights (of the query terms) is computed by the query Weight object. For example, a BooleanQuery computes this value as:

    sumOfSquaredWeights = q.getBoost()^2 · ∑ ( idf(t) · t.getBoost() )^2
                                          t in q

t.getBoost() is a search-time boost of term t in the query q, as specified in the query text or as set by application calls to setBoost().
Notice that there is really no direct API for accessing the boost of one term in a multi-term query; rather, multiple terms are represented in a query as multiple TermQuery objects, and so the boost of a term in the query is accessible by calling the sub-query's getBoost().
norm(t,d) encapsulates a few (indexing-time) boost and length factors: the field boost, set by calling field.setBoost() before adding the field to a document, and lengthNorm, computed when the document is added to the index from statistics of the field such as its length (see lengthNorm(FieldInvertState)). The computeNorm(org.apache.lucene.index.FieldInvertState) method is responsible for combining all of these factors into a single float.
When a document is added to the index, all the above factors are multiplied. If the document has multiple fields with the same name, all their boosts are multiplied together:

    norm(t,d) = lengthNorm · ∏ f.boost()
                            field f in d named as t
However, the resulting norm value is encoded as a single byte before being stored. At search time, the norm byte value is read from the index directory and decoded back to a float norm value. This encoding/decoding, while reducing index size, comes with the price of precision loss: it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75.
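
The precision loss can be observed directly by round-tripping a value through the encoder. A minimal sketch, assuming the Lucene 4.x DefaultSimilarity (used here only because TFIDFSimilarity is abstract):

    import org.apache.lucene.search.similarities.DefaultSimilarity;

    public class NormRoundTrip {
      public static void main(String[] args) {
        DefaultSimilarity sim = new DefaultSimilarity();
        byte b = sim.encodeNormValue(0.89f);    // lossy single-byte encoding
        float back = sim.decodeNormValue(b);    // 0.75, per the example above
        System.out.println("0.89 -> " + back);
      }
    }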
The Similarity in effect at indexing time is set with IndexWriterConfig.setSimilarity(Similarity); the Similarity for search is set with IndexSearcher.setSimilarity(Similarity).
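
A minimal wiring sketch using those two methods; the analyzer and reader variables are assumed to already exist, and Version.LUCENE_40 is an illustrative choice. The same Similarity instance is deliberately used on both sides so that index-time norms and search-time scoring agree.

    Similarity sim = new DefaultSimilarity();

    // Index time: norms are computed with this Similarity.
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_40, analyzer);
    config.setSimilarity(sim);

    // Search time: scoring uses this Similarity.
    IndexSearcher searcher = new IndexSearcher(reader);
    searcher.setSimilarity(sim);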
Nested Class Summary 

Nested classes/interfaces inherited from class org.apache.lucene.search.similarities.Similarity 

Similarity.ExactSimScorer, Similarity.SimWeight, Similarity.SloppySimScorer 
Constructor Summary  

TFIDFSimilarity()
Sole constructor. 
Method Summary  

long 
computeNorm(FieldInvertState state)
Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState ). 
Similarity.SimWeight 
computeWeight(float queryBoost,
CollectionStatistics collectionStats,
TermStatistics... termStats)
Compute any collection-level weight (e.g. IDF, average document length, etc) needed for scoring a query. 
abstract float 
coord(int overlap,
int maxOverlap)
Computes a score factor based on the fraction of all query terms that a document contains. 
float 
decodeNormValue(byte b)
Decodes a normalization factor stored in an index. 
byte 
encodeNormValue(float f)
Encodes a normalization factor for storage in an index. 
Similarity.ExactSimScorer 
exactSimScorer(Similarity.SimWeight stats,
AtomicReaderContext context)
Creates a new Similarity.ExactSimScorer to score matching documents from a segment of the inverted index. 
abstract float 
idf(long docFreq,
long numDocs)
Computes a score factor based on a term's document frequency (the number of documents which contain the term). 
Explanation 
idfExplain(CollectionStatistics collectionStats,
TermStatistics termStats)
Computes a score factor for a simple term and returns an explanation for that score factor. 
Explanation 
idfExplain(CollectionStatistics collectionStats,
TermStatistics[] termStats)
Computes a score factor for a phrase. 
abstract float 
lengthNorm(FieldInvertState state)
Compute an index-time normalization value for this field instance. 
abstract float 
queryNorm(float sumOfSquaredWeights)
Computes the normalization value for a query given the sum of the squared weights of each of the query terms. 
abstract float 
scorePayload(int doc,
int start,
int end,
BytesRef payload)
Calculate a scoring factor based on the data in the payload. 
abstract float 
sloppyFreq(int distance)
Computes the amount of a sloppy phrase match, based on an edit distance. 
Similarity.SloppySimScorer 
sloppySimScorer(Similarity.SimWeight stats,
AtomicReaderContext context)
Creates a new Similarity.SloppySimScorer to score matching documents from a segment of the inverted index. 
abstract float 
tf(float freq)
Computes a score factor based on a term or phrase's frequency in a document. 
float 
tf(int freq)
Computes a score factor based on a term or phrase's frequency in a document. 
Methods inherited from class java.lang.Object 

clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait 
Constructor Detail 

public TFIDFSimilarity()
Method Detail 

public abstract float coord(int overlap, int maxOverlap)

Computes a score factor based on the fraction of all query terms that a document contains. The presence of a large portion of the query terms indicates a better match with the query, so implementations of this method usually return larger values when the ratio between these parameters is large and smaller values when the ratio between them is small.

Specified by: coord in class Similarity
Parameters:
overlap - the number of query terms matched in the document
maxOverlap - the total number of terms in the query
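
For reference, DefaultSimilarity implements this as the fraction of query terms matched (a sketch of the stock behavior, not a requirement on subclasses):

    @Override
    public float coord(int overlap, int maxOverlap) {
      // Fraction of the query's terms that the document contains.
      return overlap / (float) maxOverlap;
    }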
public abstract float queryNorm(float sumOfSquaredWeights)

Computes the normalization value for a query given the sum of the squared weights of each of the query terms. This does not affect ranking, but the default implementation does make scores from different queries more comparable than they would be by eliminating the magnitude of the Query vector as a factor in the score.

Specified by: queryNorm in class Similarity
Parameters:
sumOfSquaredWeights - the sum of the squares of query term weights
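
For reference, DefaultSimilarity implements the Euclidean-norm default described in the class overview (a sketch of the stock behavior):

    @Override
    public float queryNorm(float sumOfSquaredWeights) {
      // 1 / sumOfSquaredWeights^(1/2)
      return (float) (1.0 / Math.sqrt(sumOfSquaredWeights));
    }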
public float tf(int freq)

Computes a score factor based on a term or phrase's frequency in a document. This value is multiplied by the idf(long, long) factor for each term in the query and these products are then summed to form the initial score for a document.

Terms and phrases repeated in a document indicate the topic of the document, so implementations of this method usually return larger values when freq is large, and smaller values when freq is small.

The default implementation calls tf(float).

Parameters:
freq - the frequency of a term within a document
public abstract float tf(float freq)

Computes a score factor based on a term or phrase's frequency in a document. This value is multiplied by the idf(long, long) factor for each term in the query and these products are then summed to form the initial score for a document.

Terms and phrases repeated in a document indicate the topic of the document, so implementations of this method usually return larger values when freq is large, and smaller values when freq is small.

Parameters:
freq - the frequency of a term within a document
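
For reference, DefaultSimilarity implements the square-root default described in the class overview (a sketch of the stock behavior):

    @Override
    public float tf(float freq) {
      // frequency^(1/2): grows with freq, but sub-linearly.
      return (float) Math.sqrt(freq);
    }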
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics termStats)

Computes a score factor for a simple term and returns an explanation for that score factor.

The default implementation uses:

    idf(docFreq, searcher.maxDoc());

Note that CollectionStatistics.maxDoc() is used instead of IndexReader#numDocs() because TermStatistics.docFreq() is also used; when the latter is inaccurate, so is CollectionStatistics.maxDoc(), and in the same direction. In addition, CollectionStatistics.maxDoc() is more efficient to compute.

Parameters:
collectionStats - collection-level statistics
termStats - term-level statistics for the term
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics[] termStats)

Computes a score factor for a phrase. The default implementation sums the idf factor for each term in the phrase.

Parameters:
collectionStats - collection-level statistics
termStats - term-level statistics for the terms in the phrase
public abstract float idf(long docFreq, long numDocs)

Computes a score factor based on a term's document frequency (the number of documents which contain the term). This value is multiplied by the tf(int) factor for each term in the query and these products are then summed to form the initial score for a document.

Terms that occur in fewer documents are better indicators of topic, so implementations of this method usually return larger values for rare terms, and smaller values for common terms.

Parameters:
docFreq - the number of documents which contain the term
numDocs - the total number of documents in the collection
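
For reference, DefaultSimilarity implements the logarithmic default described in the class overview (a sketch of the stock behavior):

    @Override
    public float idf(long docFreq, long numDocs) {
      // 1 + log(numDocs / (docFreq + 1)); the +1 avoids division by zero.
      return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }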
public abstract float lengthNorm(FieldInvertState state)

Compute an index-time normalization value for this field instance. This value will be stored in a single byte lossy representation by encodeNormValue(float).

Parameters:
state - statistics of the current field (such as length, boost, etc)
public final long computeNorm(FieldInvertState state)

Description copied from class: Similarity
Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).

Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.getLength() is large, and larger values when state.getLength() is small.

Specified by: computeNorm in class Similarity
Parameters:
state - current processing state for this field
public float decodeNormValue(byte b)

Decodes a normalization factor stored in an index.

See Also:
encodeNormValue(float)
public byte encodeNormValue(float f)

Encodes a normalization factor for storage in an index.

The encoding uses a three-bit mantissa, a five-bit exponent, and the zero-exponent point at 15, thus representing values from around 7x10^-9 to 2x10^9 with about one significant decimal digit of accuracy. Zero is also represented. Negative numbers are rounded up to zero. Values too large to represent are rounded down to the largest representable value. Positive values too small to represent are rounded up to the smallest positive representable value.

See Also:
Field.setBoost(float), SmallFloat
public abstract float sloppyFreq(int distance)

Computes the amount of a sloppy phrase match, based on an edit distance. A phrase match with a small edit distance to a document passage more closely matches the document, so implementations of this method usually return larger values when the edit distance is small and smaller values when it is large.

Parameters:
distance - the edit distance of this sloppy phrase match
See Also:
PhraseQuery.setSlop(int)
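
For reference, DefaultSimilarity implements a simple reciprocal (a sketch of the stock behavior):

    @Override
    public float sloppyFreq(int distance) {
      // distance 0 (an exact phrase match) contributes 1.0; larger distances less.
      return 1.0f / (distance + 1);
    }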
public abstract float scorePayload(int doc, int start, int end, BytesRef payload)

Calculate a scoring factor based on the data in the payload.

Parameters:
doc - the docId currently being scored
start - the start position of the payload
end - the end position of the payload
payload - the payload byte array to be scored
public final Similarity.SimWeight computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats)

Description copied from class: Similarity
Compute any collection-level weight (e.g. IDF, average document length, etc) needed for scoring a query.

Specified by: computeWeight in class Similarity
Parameters:
queryBoost - the query-time boost
collectionStats - collection-level statistics, such as the number of tokens in the collection
termStats - term-level statistics, such as the document frequency of a term across the collection
public final Similarity.ExactSimScorer exactSimScorer(Similarity.SimWeight stats, AtomicReaderContext context) throws IOException

Description copied from class: Similarity
Creates a new Similarity.ExactSimScorer to score matching documents from a segment of the inverted index.

Specified by: exactSimScorer in class Similarity
Parameters:
stats - collection information from Similarity.computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored
Returns:
ExactSimScorer for scoring documents across context
Throws:
IOException - if there is a low-level I/O error

public final Similarity.SloppySimScorer sloppySimScorer(Similarity.SimWeight stats, AtomicReaderContext context) throws IOException

Description copied from class: Similarity
Creates a new Similarity.SloppySimScorer to score matching documents from a segment of the inverted index.

Specified by: sloppySimScorer in class Similarity
Parameters:
stats - collection information from Similarity.computeWeight(float, CollectionStatistics, TermStatistics...)
context - segment of the inverted index to be scored
Returns:
SloppySimScorer for scoring documents across context
Throws:
IOException - if there is a low-level I/O error

