org.apache.lucene.search.similar
Class MoreLikeThis

java.lang.Object
  extended by org.apache.lucene.search.similar.MoreLikeThis

public final class MoreLikeThis
extends Object

Generate "more like this" similarity queries. Based on this mail:

 Lucene does let you access the document frequency of terms, with IndexReader.docFreq().
 Term frequencies can be computed by re-tokenizing the text, which, for a single document,
 is usually fast enough.  But looking up the docFreq() of every term in the document is
 probably too slow.
 
 You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much,
 or at all.  Since you're trying to maximize a tf*idf score, you're probably most interested
 in terms with a high tf. Choosing a tf threshold even as low as two or three will radically
 reduce the number of terms under consideration.  Another heuristic is that terms with a
 high idf (i.e., a low df) tend to be longer.  So you could threshold the terms by the
 number of characters, not selecting anything less than, e.g., six or seven characters.
 With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms
 that do a pretty good job of characterizing a document.
 
 It all depends on what you're trying to do.  If you're trying to eke out that last percent
 of precision and recall regardless of computational difficulty so that you can win a TREC
 competition, then the techniques I mention above are useless.  But if you're trying to
 provide a "more like this" button on a search results page that does a decent job and has
 good performance, such techniques might be useful.
 
 An efficient, effective "more-like-this" query generator would be a great contribution, if
 anyone's interested.  I'd imagine that it would take a Reader or a String (the document's
 text), an Analyzer, and return a set of representative terms using heuristics like those
 above.  The frequency and length thresholds could be parameters, etc.
 
 Doug
 

Initial Usage

This class has lots of options to try to make it efficient and flexible. See the body of main() in the source for real code; if you want pseudocode, the simplest possible usage is as follows. The lines involving MoreLikeThis are the part that is specific to this class.

 IndexReader ir = ...
 IndexSearcher is = ...
 
 MoreLikeThis mlt = new MoreLikeThis(ir);
 Reader target = ... // orig source of doc you want to find similarities to
 Query query = mlt.like( target);
 
 Hits hits = is.search(query);
 // now the usual iteration through 'hits' - the only thing to watch for is to make sure
 // you ignore the doc if it matches your 'target' document, as it should be similar to itself

 
Thus you:
  1. do your normal Lucene setup for searching,
  2. create a MoreLikeThis,
  3. get the text of the doc you want to find similarities to,
  4. call one of the like() methods to generate a similarity query,
  5. call the searcher to find the similar docs.
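
For a slightly more concrete sketch, here is one way that flow might look in plain Java. This is illustrative only: the index Directory, the "title" field, and the search(Query, int) / TopDocs API are assumptions about your setup and Lucene version, not part of this class's contract.

 import java.io.StringReader;
 import org.apache.lucene.index.IndexReader;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.Query;
 import org.apache.lucene.search.ScoreDoc;
 import org.apache.lucene.search.TopDocs;
 import org.apache.lucene.search.similar.MoreLikeThis;

 IndexReader ir = IndexReader.open(indexDirectory);  // 'indexDirectory' is whatever Directory holds your index
 IndexSearcher is = new IndexSearcher(ir);

 MoreLikeThis mlt = new MoreLikeThis(ir);
 // 'targetText' is the text of the document you want to find similar documents for
 Query query = mlt.like(new StringReader(targetText));

 TopDocs hits = is.search(query, 10);  // assumes a search(Query, int) overload returning TopDocs
 for (ScoreDoc sd : hits.scoreDocs) {
     // remember to skip the target document itself if it is in the index,
     // since it will usually be the best match for its own content
     System.out.println(is.doc(sd.doc).get("title"));  // "title" is an example stored field
 }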

More Advanced Usage

You may want to use setFieldNames(...) so you can examine multiple fields (e.g. body and title) for similarity.

Depending on the size of your index and the size and makeup of your documents, you may want to call the other set methods (setMinTermFreq(int), setMinDocFreq(int), setMaxQueryTerms(int), setStopWords(Set), and so on) to control how the similarity queries are generated, as in the sketch below.
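
For example, a hedged configuration might look like the following; the field names and numeric values are illustrative only, not recommendations.

 MoreLikeThis mlt = new MoreLikeThis(ir);
 mlt.setFieldNames(new String[] { "title", "body" });  // assumed field names in your index
 mlt.setMinTermFreq(2);     // ignore terms that occur fewer than 2 times in the source doc
 mlt.setMinDocFreq(5);      // ignore terms that occur in fewer than 5 docs
 mlt.setMaxQueryTerms(25);  // cap the number of terms in the generated query
 mlt.setBoost(true);        // boost query terms by their score
 Query query = mlt.like(new StringReader(targetText));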


 Changes: Mark Harwood 29/02/04
 Some bugfixing, some refactoring, some optimisation.
  - bugfix: retrieveTerms(int docNum) was not working for indexes without a term vector - added missing code
  - bugfix: no significant terms were being created for fields with a term vector, because
            only one occurrence per term/field pair was being counted (i.e. frequency info from the TermVector was not included)
  - refactor: moved common code into isNoiseWord()
  - optimise: when no term vector support is available, use maxNumTermsParsed to limit the amount of tokenization
 


Field Summary
static org.apache.lucene.analysis.Analyzer DEFAULT_ANALYZER
          Default analyzer used to parse the source doc.
static boolean DEFAULT_BOOST
          Boost terms in query based on score.
static String[] DEFAULT_FIELD_NAMES
          Default field names.
static int DEFAULT_MAX_DOC_FREQ
          Ignore words which occur in more than this many docs.
static int DEFAULT_MAX_NUM_TOKENS_PARSED
          Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
static int DEFAULT_MAX_QUERY_TERMS
          Return a Query with no more than this many terms.
static int DEFAULT_MAX_WORD_LENGTH
          Ignore words longer than this length; if 0, this has no effect.
static int DEFAULT_MIN_DOC_FREQ
          Ignore words which do not occur in at least this many docs.
static int DEFAULT_MIN_TERM_FREQ
          Ignore terms with less than this frequency in the source doc.
static int DEFAULT_MIN_WORD_LENGTH
          Ignore words shorter than this length; if 0, this has no effect.
static Set<?> DEFAULT_STOP_WORDS
          Default set of stopwords.
 
Constructor Summary
MoreLikeThis(org.apache.lucene.index.IndexReader ir)
          Constructor requiring an IndexReader.
MoreLikeThis(org.apache.lucene.index.IndexReader ir, org.apache.lucene.search.Similarity sim)
           
 
Method Summary
 String describeParams()
          Describe the parameters that control how the "more like this" query is formed.
 org.apache.lucene.analysis.Analyzer getAnalyzer()
          Returns the analyzer that will be used to parse the source doc.
 float getBoostFactor()
          Returns the boost factor used when boosting terms
 String[] getFieldNames()
          Returns the field names that will be used when generating the 'More Like This' query.
 int getMaxDocFreq()
          Returns the maximum document frequency at which words may still appear.
 int getMaxNumTokensParsed()
           
 int getMaxQueryTerms()
          Returns the maximum number of query terms that will be included in any generated query.
 int getMaxWordLen()
          Returns the maximum word length above which words will be ignored.
 int getMinDocFreq()
          Returns the minimum document frequency; words that do not occur in at least this many docs are ignored.
 int getMinTermFreq()
          Returns the frequency below which terms will be ignored in the source doc.
 int getMinWordLen()
          Returns the minimum word length below which words will be ignored.
 org.apache.lucene.search.Similarity getSimilarity()
           
 Set<?> getStopWords()
          Get the current stop words being used.
 boolean isBoost()
          Returns whether to boost terms in query based on "score" or not.
 org.apache.lucene.search.Query like(File f)
          Return a query that will return docs like the passed file.
 org.apache.lucene.search.Query like(InputStream is)
          Return a query that will return docs like the passed stream.
 org.apache.lucene.search.Query like(int docNum)
          Return a query that will return docs like the passed lucene document ID.
 org.apache.lucene.search.Query like(Reader r)
          Return a query that will return docs like the passed Reader.
 org.apache.lucene.search.Query like(URL u)
          Return a query that will return docs like the passed URL.
static void main(String[] a)
          Test driver.
 String[] retrieveInterestingTerms(int docNum)
           
 String[] retrieveInterestingTerms(Reader r)
          Convenience routine to make it easy to return the most interesting words in a document.
 org.apache.lucene.util.PriorityQueue<Object[]> retrieveTerms(int docNum)
          Find words for a more-like-this query former.
 org.apache.lucene.util.PriorityQueue<Object[]> retrieveTerms(Reader r)
          Find words for a more-like-this query former.
 void setAnalyzer(org.apache.lucene.analysis.Analyzer analyzer)
          Sets the analyzer to use.
 void setBoost(boolean boost)
          Sets whether to boost terms in query based on "score" or not.
 void setBoostFactor(float boostFactor)
          Sets the boost factor to use when boosting terms
 void setFieldNames(String[] fieldNames)
          Sets the field names that will be used when generating the 'More Like This' query.
 void setMaxDocFreq(int maxFreq)
          Set the maximum document frequency at which words may still appear.
 void setMaxDocFreqPct(int maxPercentage)
          Set the maximum percentage of documents in which words may still appear.
 void setMaxNumTokensParsed(int i)
           
 void setMaxQueryTerms(int maxQueryTerms)
          Sets the maximum number of query terms that will be included in any generated query.
 void setMaxWordLen(int maxWordLen)
          Sets the maximum word length above which words will be ignored.
 void setMinDocFreq(int minDocFreq)
          Sets the minimum document frequency; words that do not occur in at least this many docs are ignored.
 void setMinTermFreq(int minTermFreq)
          Sets the frequency below which terms will be ignored in the source doc.
 void setMinWordLen(int minWordLen)
          Sets the minimum word length below which words will be ignored.
 void setSimilarity(org.apache.lucene.search.Similarity similarity)
           
 void setStopWords(Set<?> stopWords)
          Set the set of stopwords.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

DEFAULT_MAX_NUM_TOKENS_PARSED

public static final int DEFAULT_MAX_NUM_TOKENS_PARSED
Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.

See Also:
getMaxNumTokensParsed(), Constant Field Values

DEFAULT_ANALYZER

public static final org.apache.lucene.analysis.Analyzer DEFAULT_ANALYZER
Default analyzer used to parse the source doc.

See Also:
getAnalyzer()

DEFAULT_MIN_TERM_FREQ

public static final int DEFAULT_MIN_TERM_FREQ
Ignore terms with less than this frequency in the source doc.

See Also:
getMinTermFreq(), setMinTermFreq(int), Constant Field Values

DEFAULT_MIN_DOC_FREQ

public static final int DEFAULT_MIN_DOC_FREQ
Ignore words which do not occur in at least this many docs.

See Also:
getMinDocFreq(), setMinDocFreq(int), Constant Field Values

DEFAULT_MAX_DOC_FREQ

public static final int DEFAULT_MAX_DOC_FREQ
Ignore words which occur in more than this many docs.

See Also:
getMaxDocFreq(), setMaxDocFreq(int), setMaxDocFreqPct(int), Constant Field Values

DEFAULT_BOOST

public static final boolean DEFAULT_BOOST
Boost terms in query based on score.

See Also:
isBoost(), setBoost(boolean), Constant Field Values

DEFAULT_FIELD_NAMES

public static final String[] DEFAULT_FIELD_NAMES
Default field names. Null is used to specify that the field names should be looked up at runtime from the provided reader.


DEFAULT_MIN_WORD_LENGTH

public static final int DEFAULT_MIN_WORD_LENGTH
Ignore words shorter than this length; if 0, this has no effect.

See Also:
getMinWordLen(), setMinWordLen(int), Constant Field Values

DEFAULT_MAX_WORD_LENGTH

public static final int DEFAULT_MAX_WORD_LENGTH
Ignore words longer than this length; if 0, this has no effect.

See Also:
getMaxWordLen(), setMaxWordLen(int), Constant Field Values

DEFAULT_STOP_WORDS

public static final Set<?> DEFAULT_STOP_WORDS
Default set of stopwords. If null, stop words are allowed.

See Also:
setStopWords(java.util.Set), getStopWords()

DEFAULT_MAX_QUERY_TERMS

public static final int DEFAULT_MAX_QUERY_TERMS
Return a Query with no more than this many terms.

See Also:
BooleanQuery.getMaxClauseCount(), getMaxQueryTerms(), setMaxQueryTerms(int), Constant Field Values
Constructor Detail

MoreLikeThis

public MoreLikeThis(org.apache.lucene.index.IndexReader ir)
Constructor requiring an IndexReader.


MoreLikeThis

public MoreLikeThis(org.apache.lucene.index.IndexReader ir,
                    org.apache.lucene.search.Similarity sim)
Method Detail

getBoostFactor

public float getBoostFactor()
Returns the boost factor used when boosting terms

Returns:
the boost factor used when boosting terms

setBoostFactor

public void setBoostFactor(float boostFactor)
Sets the boost factor to use when boosting terms

Parameters:
boostFactor -

getSimilarity

public org.apache.lucene.search.Similarity getSimilarity()

setSimilarity

public void setSimilarity(org.apache.lucene.search.Similarity similarity)

getAnalyzer

public org.apache.lucene.analysis.Analyzer getAnalyzer()
Returns the analyzer that will be used to parse the source doc. The default analyzer is DEFAULT_ANALYZER.

Returns:
the analyzer that will be used to parse the source doc.
See Also:
DEFAULT_ANALYZER

setAnalyzer

public void setAnalyzer(org.apache.lucene.analysis.Analyzer analyzer)
Sets the analyzer to use. An analyzer is not required for generating a query with the like(int) method; all other 'like' methods require an analyzer.

Parameters:
analyzer - the analyzer to use to tokenize text.
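
A hedged sketch of the distinction; WhitespaceAnalyzer is just an illustrative choice and the document ID is arbitrary.

 MoreLikeThis mlt = new MoreLikeThis(ir);

 // like(int) reads the example doc from the index (term vectors, or stored fields otherwise),
 // so no analyzer is needed:
 Query q1 = mlt.like(42);   // 42 is an arbitrary example document ID

 // like(Reader), like(File), like(URL) and like(InputStream) must re-tokenize raw text,
 // so they use the configured analyzer (DEFAULT_ANALYZER unless you change it):
 mlt.setAnalyzer(new org.apache.lucene.analysis.WhitespaceAnalyzer());
 Query q2 = mlt.like(new java.io.StringReader("free-form text to match against"));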

getMinTermFreq

public int getMinTermFreq()
Returns the frequency below which terms will be ignored in the source doc. The default frequency is the DEFAULT_MIN_TERM_FREQ.

Returns:
the frequency below which terms will be ignored in the source doc.

setMinTermFreq

public void setMinTermFreq(int minTermFreq)
Sets the frequency below which terms will be ignored in the source doc.

Parameters:
minTermFreq - the frequency below which terms will be ignored in the source doc.

getMinDocFreq

public int getMinDocFreq()
Returns the minimum document frequency; words that do not occur in at least this many docs are ignored. The default is DEFAULT_MIN_DOC_FREQ.

Returns:
the minimum number of docs a word must occur in; words occurring in fewer docs are ignored.

setMinDocFreq

public void setMinDocFreq(int minDocFreq)
Sets the minimum document frequency; words that do not occur in at least this many docs are ignored.

Parameters:
minDocFreq - the minimum number of docs a word must occur in; words occurring in fewer docs are ignored.

getMaxDocFreq

public int getMaxDocFreq()
Returns the maximum document frequency at which words may still appear. Words that appear in more than this many docs will be ignored. The default is DEFAULT_MAX_DOC_FREQ.

Returns:
the maximum number of docs a word may appear in; words occurring in more docs than this are ignored.

setMaxDocFreq

public void setMaxDocFreq(int maxFreq)
Set the maximum document frequency at which words may still appear. Words that appear in more than this many docs will be ignored.

Parameters:
maxFreq - the maximum count of documents that a term may appear in to be still considered relevant

setMaxDocFreqPct

public void setMaxDocFreqPct(int maxPercentage)
Set the maximum percentage of documents in which words may still appear. Words that appear in more than this percentage of all docs will be ignored.

Parameters:
maxPercentage - the maximum percentage of documents (0-100) that a term may appear in to be still considered relevant
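
A hedged illustration of what the percentage means in practice; the figures are made up.

 // with an index of roughly 1,000,000 documents, this ignores any term that occurs
 // in more than about 10% of them, i.e. in more than ~100,000 documents
 mlt.setMaxDocFreqPct(10);

 // roughly equivalent to an absolute threshold of:
 // mlt.setMaxDocFreq(ir.numDocs() * 10 / 100);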

isBoost

public boolean isBoost()
Returns whether to boost terms in query based on "score" or not. The default is DEFAULT_BOOST.

Returns:
whether to boost terms in query based on "score" or not.
See Also:
setBoost(boolean)

setBoost

public void setBoost(boolean boost)
Sets whether to boost terms in query based on "score" or not.

Parameters:
boost - true to boost terms in query based on "score", false otherwise.
See Also:
isBoost()

getFieldNames

public String[] getFieldNames()
Returns the field names that will be used when generating the 'More Like This' query. The default field names are DEFAULT_FIELD_NAMES.

Returns:
the field names that will be used when generating the 'More Like This' query.

setFieldNames

public void setFieldNames(String[] fieldNames)
Sets the field names that will be used when generating the 'More Like This' query. Set this to null for the field names to be determined at runtime from the IndexReader provided in the constructor.

Parameters:
fieldNames - the field names that will be used when generating the 'More Like This' query.

getMinWordLen

public int getMinWordLen()
Returns the minimum word length below which words will be ignored. Set this to 0 for no minimum word length. The default is DEFAULT_MIN_WORD_LENGTH.

Returns:
the minimum word length below which words will be ignored.

setMinWordLen

public void setMinWordLen(int minWordLen)
Sets the minimum word length below which words will be ignored.

Parameters:
minWordLen - the minimum word length below which words will be ignored.

getMaxWordLen

public int getMaxWordLen()
Returns the maximum word length above which words will be ignored. Set this to 0 for no maximum word length. The default is DEFAULT_MAX_WORD_LENGTH.

Returns:
the maximum word length above which words will be ignored.

setMaxWordLen

public void setMaxWordLen(int maxWordLen)
Sets the maximum word length above which words will be ignored.

Parameters:
maxWordLen - the maximum word length above which words will be ignored.

setStopWords

public void setStopWords(Set<?> stopWords)
Set the set of stopwords. Any word in this set is considered "uninteresting" and ignored. Even if your Analyzer allows stopwords, you might want to tell the MoreLikeThis code to ignore them, as for the purposes of document similarity it seems reasonable to assume that "a stop word is never interesting".

Parameters:
stopWords - set of stopwords, if null it means to allow stop words
See Also:
StopFilter.makeStopSet(), getStopWords()
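
A hedged sketch; the word list is an example only, and StopFilter.makeStopSet (referenced above) is one convenient way to build the set.

 // build a stopword set from an explicit word list
 Set<?> stopWords = StopFilter.makeStopSet(
         new String[] { "a", "an", "and", "of", "the", "to" });
 mlt.setStopWords(stopWords);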

getStopWords

public Set<?> getStopWords()
Get the current stop words being used.

See Also:
setStopWords(java.util.Set)

getMaxQueryTerms

public int getMaxQueryTerms()
Returns the maximum number of query terms that will be included in any generated query. The default is DEFAULT_MAX_QUERY_TERMS.

Returns:
the maximum number of query terms that will be included in any generated query.

setMaxQueryTerms

public void setMaxQueryTerms(int maxQueryTerms)
Sets the maximum number of query terms that will be included in any generated query.

Parameters:
maxQueryTerms - the maximum number of query terms that will be included in any generated query.

getMaxNumTokensParsed

public int getMaxNumTokensParsed()
Returns:
The maximum number of tokens to parse in each example doc field that is not stored with TermVector support
See Also:
DEFAULT_MAX_NUM_TOKENS_PARSED

setMaxNumTokensParsed

public void setMaxNumTokensParsed(int i)
Parameters:
i - The maximum number of tokens to parse in each example doc field that is not stored with TermVector support

like

public org.apache.lucene.search.Query like(int docNum)
                                    throws IOException
Return a query that will return docs like the passed lucene document ID.

Parameters:
docNum - the document ID of the Lucene doc to generate the 'More Like This' query for.
Returns:
a query that will return docs like the passed lucene document ID.
Throws:
IOException
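
A hedged sketch of the common "more like this button" flow, where the target is a document already in the index; the TopDocs-based search call and the skip-the-target check are assumptions about your setup, not part of this method's contract.

 int targetDoc = ...;                  // e.g. the doc ID of the result the user clicked
 Query query = mlt.like(targetDoc);    // no analyzer needed for this variant

 TopDocs hits = is.search(query, 11);  // fetch one extra so the target itself can be dropped
 for (ScoreDoc sd : hits.scoreDocs) {
     if (sd.doc == targetDoc) {
         continue;                     // the target document will usually match itself best
     }
     // ... display is.doc(sd.doc) ...
 }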

like

public org.apache.lucene.search.Query like(File f)
                                    throws IOException
Return a query that will return docs like the passed file.

Returns:
a query that will return docs like the passed file.
Throws:
IOException

like

public org.apache.lucene.search.Query like(URL u)
                                    throws IOException
Return a query that will return docs like the passed URL.

Returns:
a query that will return docs like the passed URL.
Throws:
IOException

like

public org.apache.lucene.search.Query like(InputStream is)
                                    throws IOException
Return a query that will return docs like the passed stream.

Returns:
a query that will return docs like the passed stream.
Throws:
IOException

like

public org.apache.lucene.search.Query like(Reader r)
                                    throws IOException
Return a query that will return docs like the passed Reader.

Returns:
a query that will return docs like the passed Reader.
Throws:
IOException

describeParams

public String describeParams()
Describe the parameters that control how the "more like this" query is formed.


main

public static void main(String[] a)
                 throws Throwable
Test driver. Pass in "-i INDEX" and then either "-fn FILE" or "-url URL".

Throws:
Throwable

retrieveTerms

public org.apache.lucene.util.PriorityQueue<Object[]> retrieveTerms(int docNum)
                                                             throws IOException
Find words for a more-like-this query former.

Parameters:
docNum - the id of the lucene document from which to find terms
Throws:
IOException

retrieveTerms

public org.apache.lucene.util.PriorityQueue<Object[]> retrieveTerms(Reader r)
                                                             throws IOException
Find words for a more-like-this query former. The result is a priority queue of arrays with one entry for every word in the document. Each array has 6 elements. The elements are:
  1. The word (String)
  2. The top field that this word comes from (String)
  3. The score for this word (Float)
  4. The IDF value (Float)
  5. The frequency of this word in the index (Integer)
  6. The frequency of this word in the source document (Integer)
This is a somewhat "advanced" routine, and in general only the 1st entry in the array is of interest. This method is exposed so that you can identify the "interesting words" in a document. For an easier method to call see retrieveInterestingTerms().

Parameters:
r - the reader that has the content of the document
Returns:
the most interesting words in the document ordered by score, with the highest scoring, or best entry, first
Throws:
IOException
See Also:
retrieveInterestingTerms(int)
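
As a hedged sketch of consuming the result, the loop below drains the queue and unpacks the first three elements of each array; the pop-until-null pattern and the best-first ordering follow the description above.

 PriorityQueue<Object[]> q = mlt.retrieveTerms(new StringReader(targetText));
 Object[] entry;
 while ((entry = q.pop()) != null) {
     String word  = (String) entry[0];  // the term itself - usually all you need
     String field = (String) entry[1];  // the field the term came from
     Float  score = (Float)  entry[2];  // the score for this term
     System.out.println(word + " (" + field + ", score=" + score + ")");
 }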

retrieveInterestingTerms

public String[] retrieveInterestingTerms(int docNum)
                                  throws IOException
Throws:
IOException
See Also:
retrieveInterestingTerms(java.io.Reader)

retrieveInterestingTerms

public String[] retrieveInterestingTerms(Reader r)
                                  throws IOException
Convenience routine to make it easy to return the most interesting words in a document. More advanced users will call retrieveTerms() directly.

Parameters:
r - the source document
Returns:
the most interesting words in the document
Throws:
IOException
See Also:
retrieveTerms(java.io.Reader), setMaxQueryTerms(int)
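
A hedged sketch; per the See Also reference, setMaxQueryTerms(int) also appears to cap how many terms come back.

 mlt.setMaxQueryTerms(10);  // presumably also limits the number of interesting terms returned
 String[] terms = mlt.retrieveInterestingTerms(new StringReader(targetText));
 for (String term : terms) {
     System.out.println(term);
 }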


Copyright © 2000-2010 Apache Software Foundation. All Rights Reserved.