A strategy for computing the collection language model.
This class acts as the base class for the implementations of the first normalization of the informative content in the DFR framework.
Model of the information gain based on the ratio of two Bernoulli processes.
Model of the information gain based on Laplace's law of succession.
Axiomatic approaches for IR.
F1EXP is defined as Sum(tf(term_doc_freq) * ln(docLen) * IDF(term)), where IDF(t) = pow((N+1)/df(t), k), N = total number of docs, df = doc freq.
F1LOG is defined as Sum(tf(term_doc_freq) * ln(docLen) * IDF(term)), where IDF(t) = ln((N+1)/df(t)), N = total number of docs, df = doc freq.
F2EXP is defined as Sum(tfln(term_doc_freq, docLen) * IDF(term)), where IDF(t) = pow((N+1)/df(t), k), N = total number of docs, df = doc freq.
F2LOG is defined as Sum(tfln(term_doc_freq, docLen) * IDF(term)), where IDF(t) = ln((N+1)/df(t)), N = total number of docs, df = doc freq.
F3EXP is defined as Sum(tf(term_doc_freq) * IDF(term) - gamma(docLen, queryLen)), where IDF(t) = pow((N+1)/df(t), k), N = total number of docs, df = doc freq, and gamma(docLen, queryLen) = (docLen - queryLen) * queryLen * s / avdl. NOTE: the gamma function of this similarity creates negative scores.
F3LOG is defined as Sum(tf(term_doc_freq) * IDF(term) - gamma(docLen, queryLen)), where IDF(t) = ln((N+1)/df(t)), N = total number of docs, df = doc freq, and gamma(docLen, queryLen) = (docLen - queryLen) * queryLen * s / avdl. NOTE: the gamma function of this similarity creates negative scores.
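The axiomatic formulas above can be sketched as plain functions. The following is an illustrative computation of the two IDF variants and a single-term F2-style score, not the actual Lucene implementation; the length-normalized term frequency tfln shown here follows the common axiomatic form tf / (tf + s + s * docLen / avgDocLen), which is an assumption.

```java
// Illustrative sketch of the axiomatic (F1/F2/F3 family) building blocks.
// Not Lucene code; the tfln form used here is an assumption.
public class AxiomaticSketch {
    // Exponential IDF variant (EXP models): IDF(t) = pow((N + 1) / df(t), k)
    static double idfExp(long N, long df, double k) {
        return Math.pow((N + 1.0) / df, k);
    }

    // Logarithmic IDF variant (LOG models): IDF(t) = ln((N + 1) / df(t))
    static double idfLog(long N, long df) {
        return Math.log((N + 1.0) / df);
    }

    // Length-normalized term frequency used by the F2 family (assumed form):
    // tfln = tf / (tf + s + s * docLen / avgDocLen)
    static double tfln(double tf, double docLen, double avgDocLen, double s) {
        return tf / (tf + s + s * docLen / avgDocLen);
    }

    // Single-term F2EXP-style contribution: tfln * IDF
    static double f2expTerm(double tf, double docLen, double avgDocLen,
                            double s, long N, long df, double k) {
        return tfln(tf, docLen, avgDocLen, s) * idfExp(N, df, k);
    }
}
```

Note how tfln saturates: increasing tf always increases the score, but with diminishing returns, and longer documents are penalized through the s * docLen / avgDocLen term.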
This class acts as the base class for the specific basic model implementations in the DFR framework.
Geometric as limiting form of the Bose-Einstein model.
An approximation of the I(ne) model.
The basic tf-idf model of randomness.
Tf-idf model of randomness, based on a mixture of Poisson and inverse document frequency.
Stores all statistics commonly used by ranking methods.
Simple similarity that gives terms a score that is equal to their query boost.
Expert: Historical scoring implementation.
Implements the Divergence from Independence (DFI) model based on Chi-square statistics (i.e., standardized Chi-squared distance from independence in term frequency tf).
Implements the divergence from randomness (DFR) framework introduced by Gianni Amati and Cornelis Joost Van Rijsbergen.
The probabilistic distribution used to model term occurrence in information-based models.
The smoothed power-law (SPL) distribution for the information-based framework that is described in the original paper.
Provides a framework for the family of information-based models, as described by Stéphane Clinchant and Eric Gaussier.
Computes the measure of divergence from independence for DFI scoring functions.
Normalized chi-squared measure of distance from independence.
Saturated measure of distance from independence.
Standardized measure of distance from independence.
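The three independence measures can be sketched as follows. The expected frequency here is the standard independence expectation e = termCollectionFreq * docLen / totalTokens, and the formulas are my reading of the DFI literature, not verbatim Lucene code.

```java
// Sketch of the three divergence-from-independence measures.
// 'expected' is the term frequency predicted if term occurrence and
// document were independent. Illustrative, not the Lucene implementation.
public class IndependenceSketch {
    // e = (collection frequency of term) * (doc length) / (total tokens in collection)
    static double expected(double termCollectionFreq, double docLen, double totalTokens) {
        return termCollectionFreq * docLen / totalTokens;
    }

    // Normalized chi-squared distance: (tf - e)^2 / e
    static double chiSquared(double tf, double e) {
        return (tf - e) * (tf - e) / e;
    }

    // Saturated distance: (tf - e) / e
    static double saturated(double tf, double e) {
        return (tf - e) / e;
    }

    // Standardized distance: (tf - e) / sqrt(e)
    static double standardized(double tf, double e) {
        return (tf - e) / Math.sqrt(e);
    }
}
```

All three reward terms that occur more often in a document than independence predicts; they differ only in how the deviation tf - e is scaled.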
The lambda (λw) parameter in information-based models.
Computes lambda as docFreq+1 / numberOfDocuments+1.
Computes lambda as totalTermFreq+1 / numberOfDocuments+1.
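As a sketch, the two lambda variants divide a smoothed frequency by a smoothed document count: one uses the term's document frequency, the other its total collection frequency. The add-one smoothing shown below is my reading of the implementation, not verbatim Lucene code.

```java
// Sketch of the two lambda estimates used by information-based models.
// The add-one smoothing is an assumption based on the class descriptions.
public class LambdaSketch {
    // Document-frequency variant: (docFreq + 1) / (numberOfDocuments + 1)
    static double lambdaDF(long docFreq, long numberOfDocuments) {
        return (docFreq + 1.0) / (numberOfDocuments + 1.0);
    }

    // Total-term-frequency variant: (totalTermFreq + 1) / (numberOfDocuments + 1)
    static double lambdaTTF(long totalTermFreq, long numberOfDocuments) {
        return (totalTermFreq + 1.0) / (numberOfDocuments + 1.0);
    }
}
```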
Bayesian smoothing using Dirichlet priors.
Language model based on the Jelinek-Mercer smoothing method.
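Both language-model similarities score a term by smoothing its in-document probability with the collection model p(w|C). A minimal sketch of the two classic smoothing formulas follows; it illustrates the probability estimates only, since Lucene's actual scores are derived quantities computed in log space.

```java
// Sketch of the two classic language-model smoothing methods.
// p(w|C) is the collection model, e.g. termCollectionFreq / totalTokens.
// Illustrative only; not the actual Lucene scoring code.
public class LmSmoothingSketch {
    // Dirichlet priors: p(w|d) = (tf + mu * pC) / (docLen + mu)
    static double dirichlet(double tf, double docLen, double pC, double mu) {
        return (tf + mu * pC) / (docLen + mu);
    }

    // Jelinek-Mercer: p(w|d) = (1 - lambda) * tf / docLen + lambda * pC
    static double jelinekMercer(double tf, double docLen, double pC, double lambda) {
        return (1 - lambda) * tf / docLen + lambda * pC;
    }
}
```

The design difference: Dirichlet smoothing is document-length dependent (long documents rely less on the prior), while Jelinek-Mercer interpolates with a fixed weight lambda regardless of length.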
Abstract superclass for language modeling Similarities.
Stores the collection distribution of the current term.
Implements the CombSUM method for combining evidence from multiple similarity values, described by Joseph A. Shaw and Edward A. Fox.
This class acts as the base class for the implementations of the term frequency normalization methods in the DFR framework.
Implementation used when there is no normalization.
Normalization model that assumes a uniform distribution of the term frequency.
Normalization model in which the term frequency is inversely related to the length.
Dirichlet priors normalization.
Provides the ability to use a different Similarity for different fields.
Similarity defines the components of Lucene scoring.
Stores the weight for a query across the indexed collection.
A subclass of Similarity that provides a simplified API for its descendants.
Similarity serves as the base for ranking functions. For searching, users can employ the models already implemented or create their own by extending one of the classes in this package.
BM25Similarity is an optimized
implementation of the successful Okapi BM25 model.
SimilarityBase provides a basic
implementation of the Similarity contract and exposes a highly simplified
interface, which makes it an ideal starting point for new ranking functions.
Lucene ships several ranking methods built on SimilarityBase: the divergence from randomness (DFR) framework, the information-based models, the divergence from independence (DFI) models, and the axiomatic approaches described above. Since SimilarityBase is not optimized to the same extent as BM25Similarity, a difference in performance is to be expected when using these methods. However, optimizations can always be implemented in subclasses; see below.
Chances are the available Similarities are sufficient for all
your searching needs.
However, in some applications it may be necessary to customize your Similarity implementation. For instance, some applications do not need to distinguish between shorter and longer documents and could set BM25's b parameter to 0 to disable length normalization.
To change Similarity, one must do so for both indexing and searching, and the changes must happen before either of these actions takes place. Although in theory there is nothing stopping you from changing mid-stream, it just isn't well defined what is going to happen.
To make this change, implement your own Similarity (most likely you'll want to simply subclass SimilarityBase; see below), then register the new class by calling IndexWriterConfig.setSimilarity(Similarity) before indexing and IndexSearcher.setSimilarity(Similarity) before searching.
BM25Similarity has two parameters that may be tuned:
- k1, which controls term frequency saturation. A value of 0 makes term frequency completely ignored, so documents are scored based only on the IDF of the matched terms; higher values of k1 increase the impact of term frequency on the final score. Default value is 1.2.
- b, which controls how strongly document length normalizes term frequency values. Value range is [0, 1]. A value of 0 disables length normalization completely. Default value is 0.75.
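The effect of the two parameters can be seen directly in the BM25 term formula. The sketch below uses the textbook form; Lucene's actual BM25Similarity differs in implementation details and optimizations.

```java
// Textbook BM25 term score:
//   idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * docLen / avgDocLen))
// Illustrative only; not the Lucene implementation.
public class Bm25Sketch {
    // Smoothed idf: ln(1 + (N - df + 0.5) / (df + 0.5))
    static double idf(long N, long df) {
        return Math.log(1 + (N - df + 0.5) / (df + 0.5));
    }

    static double termScore(double tf, double docLen, double avgDocLen,
                            long N, long df, double k1, double b) {
        // The length-dependent part of the denominator; b scales how much
        // docLen / avgDocLen matters.
        double norm = k1 * (1 - b + b * docLen / avgDocLen);
        return idf(N, df) * tf * (k1 + 1) / (tf + norm);
    }
}
```

With k1 = 0 the tf-dependent factor reduces to 1 for any tf > 0, so only the IDF of matched terms matters; with b = 0 the document length drops out of the formula entirely, which is exactly the "disables length normalization" behavior described above.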
The easiest way to quickly implement a new ranking method is to extend SimilarityBase, which provides basic implementations for the low-level aspects of the scoring process. Subclasses are only required to implement SimilarityBase.score(BasicStats, double, double) and SimilarityBase.toString().
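A subclass essentially supplies one function mapping the collected statistics, the term frequency, and the document length to a score. The standalone sketch below mirrors that shape without depending on Lucene; the BasicStatsSketch holder and the particular tf-idf formula are illustrative assumptions, not Lucene API.

```java
// Standalone sketch of the shape of a SimilarityBase subclass: one method
// mapping (collection statistics, term frequency, document length) to a score.
// The stats holder and the simple tf-idf formula are illustrative only.
public class CustomScoreSketch {
    // Minimal stand-in for Lucene's BasicStats.
    static class BasicStatsSketch {
        final long numberOfDocuments;
        final long docFreq;
        BasicStatsSketch(long numberOfDocuments, long docFreq) {
            this.numberOfDocuments = numberOfDocuments;
            this.docFreq = docFreq;
        }
    }

    // Analogue of SimilarityBase.score(BasicStats, double, double):
    // here, a simple length-normalized tf-idf.
    static double score(BasicStatsSketch stats, double freq, double docLen) {
        double idf = Math.log((stats.numberOfDocuments + 1.0) / (stats.docFreq + 1.0));
        return idf * freq / Math.sqrt(docLen);
    }
}
```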
Another option is to extend one of the frameworks based on SimilarityBase. These Similarities are implemented modularly; e.g., DFRSimilarity delegates computation of the three parts of its formula to the classes BasicModel, AfterEffect and Normalization. Instead of subclassing the Similarity, one can simply introduce a new basic model and tell DFRSimilarity to use it.
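This modular decomposition can be sketched with small functional interfaces: a DFR-style score is assembled from a term-frequency normalization, a basic model, and an after-effect, and any part can be swapped independently. The interfaces and the wiring below are simplified stand-ins, not the actual Lucene class hierarchy; the H2 normalization formula tfn = tf * log2(1 + c * avgDocLen / docLen) follows the DFR literature.

```java
// Illustrative sketch of the DFR framework's modular design: a score is
// assembled from three pluggable parts. Simplified stand-ins for the
// Lucene classes, not the real API.
public class DfrSketch {
    interface Normalization { double tfn(double tf, double docLen, double avgDocLen); }
    interface BasicModel { double informativeContent(double tfn); }
    interface AfterEffect { double gain(double tfn); }

    // Normalization H2 (as described in the DFR literature):
    // tfn = tf * log2(1 + c * avgDocLen / docLen)
    static Normalization h2(double c) {
        return (tf, docLen, avgDocLen) ->
            tf * (Math.log(1 + c * avgDocLen / docLen) / Math.log(2));
    }

    // Compose the three parts, mirroring how DFRSimilarity delegates.
    static double score(BasicModel model, AfterEffect effect, Normalization norm,
                        double tf, double docLen, double avgDocLen) {
        double tfn = norm.tfn(tf, docLen, avgDocLen);
        return effect.gain(tfn) * model.informativeContent(tfn);
    }

    public static void main(String[] args) {
        // Swapping in a different basic model touches nothing else,
        // which is the point of the modular design.
        BasicModel toyModel = tfn -> tfn;               // placeholder model
        AfterEffect laplace = tfn -> 1.0 / (tfn + 1.0); // Laplace-style gain
        System.out.println(score(toyModel, laplace, h2(1.0), 3, 100, 100));
    }
}
```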
Copyright © 2000-2020 Apache Software Foundation. All Rights Reserved.