org.apache.lucene.search.highlight
Class TokenSources

java.lang.Object
  extended by org.apache.lucene.search.highlight.TokenSources

public class TokenSources
extends Object

Hides implementation issues associated with obtaining a TokenStream for use with the highlighter - the stream can be obtained from term vectors with offsets and (optionally) positions, or from an Analyzer re-parsing the stored content.


Constructor Summary
TokenSources()
           
 
Method Summary
static TokenStream getAnyTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer)
          A convenience method that tries a number of approaches to getting a token stream.
static TokenStream getAnyTokenStream(IndexReader reader, int docId, String field, Document doc, Analyzer analyzer)
          A convenience method that tries to first get a TermPositionVector for the specified docId, then, falls back to using the passed in Document to retrieve the TokenStream.
static TokenStream getTokenStream(Document doc, String field, Analyzer analyzer)
           
static TokenStream getTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer)
           
static TokenStream getTokenStream(String field, String contents, Analyzer analyzer)
           
static TokenStream getTokenStream(Terms vector)
           
static TokenStream getTokenStream(Terms tpv, boolean tokenPositionsGuaranteedContiguous)
          Low level api.
static TokenStream getTokenStreamWithOffsets(IndexReader reader, int docId, String field)
          Returns a TokenStream with positions and offsets constructed from field termvectors.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

TokenSources

public TokenSources()
Method Detail

getAnyTokenStream

public static TokenStream getAnyTokenStream(IndexReader reader,
                                            int docId,
                                            String field,
                                            Document doc,
                                            Analyzer analyzer)
                                     throws IOException
A convenience method that first tries to get a TermPositionVector for the specified docId, then falls back to using the passed-in Document to retrieve the TokenStream. This is useful when you already have the document but would prefer to use the vector first.

Parameters:
reader - The IndexReader to use to try and get the vector from
docId - The docId to retrieve.
field - The field to retrieve on the document
doc - The document to fall back on
analyzer - The analyzer to use for creating the TokenStream if the vector doesn't exist
Returns:
The TokenStream for the IndexableField on the Document
Throws:
IOException - if there was an error loading
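A minimal sketch (not part of this Javadoc) of how this method is typically combined with the highlighter: the term vector is used when available, otherwise the stored field is re-analyzed. The field name "contents" and the method wrapper are hypothetical; Lucene 4.x highlighter classes are assumed.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.TokenSources;

public class HighlightExample {
  // Hypothetical helper: returns the best highlighted fragment for one doc.
  static String highlight(IndexReader reader, int docId, Query query,
                          Analyzer analyzer) throws Exception {
    Document doc = reader.document(docId);
    // Uses the stored term vector if present; otherwise re-analyzes the
    // field text from the passed-in Document.
    TokenStream ts =
        TokenSources.getAnyTokenStream(reader, docId, "contents", doc, analyzer);
    Highlighter highlighter = new Highlighter(new QueryScorer(query));
    return highlighter.getBestFragment(ts, doc.get("contents"));
  }
}
```

Passing the Document yourself avoids a second stored-fields read when the vector is missing.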

getAnyTokenStream

public static TokenStream getAnyTokenStream(IndexReader reader,
                                            int docId,
                                            String field,
                                            Analyzer analyzer)
                                     throws IOException
A convenience method that tries a number of approaches to getting a token stream. The cost of discovering that there are no term vectors in the index is minimal (1000 invocations still register 0 ms), so this "lazy" (flexible?) approach is acceptable.

Returns:
null if field not stored correctly
Throws:
IOException - If there is a low-level I/O error

getTokenStream

public static TokenStream getTokenStream(Terms vector)
                                  throws IOException
Throws:
IOException

getTokenStream

public static TokenStream getTokenStream(Terms tpv,
                                         boolean tokenPositionsGuaranteedContiguous)
                                  throws IOException
Low level api. Returns a token stream generated from a Terms. This can be used to feed the highlighter with a pre-parsed token stream. The Terms must have offsets available.

In my tests, the times to recreate 1000 token streams using this method were:
- with TermVector offset-only data stored: 420 milliseconds
- with TermVector offset AND position data stored: 271 milliseconds
(NB: timings for TermVector with position data assume a tokenizer producing contiguous positions - no overlaps or gaps.)

The cost of not using TermPositionVector to store pre-parsed content, and instead using an analyzer to re-parse the original content:
- re-analyzing the original content: 980 milliseconds

The re-analyze timings will typically vary depending on:
1) The complexity of the analyzer code (timings above used a stemmer/lowercaser/stopword combo)
2) The number of other fields (Lucene reads ALL fields off the disk when accessing just one document field - this can cost dear!)
3) Use of compression on field storage - could be faster due to compression (less disk IO) or slower (more CPU burn) depending on the content.

Parameters:
tokenPositionsGuaranteedContiguous - true if the token position numbers have no overlaps or gaps. Set to true if looking to eke out the last drops of performance; if in doubt, set to false.
Throws:
IllegalArgumentException - if no offsets are available
IOException
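A sketch of how this low-level entry point might be used, assuming the hypothetical field "contents" was indexed with a term vector storing offsets (and ideally positions). The conservative `false` flag is the safe default described above.

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.search.highlight.TokenSources;

public class VectorStreamExample {
  // Hypothetical helper: build a token stream from a stored term vector,
  // or return null if the field has no vector for this document.
  static TokenStream fromVector(IndexReader reader, int docId) throws Exception {
    Terms vector = reader.getTermVector(docId, "contents");
    if (vector == null) {
      return null; // no term vector stored for this field/doc
    }
    // false = do not assume contiguous token positions; safe default.
    // Throws IllegalArgumentException if the vector lacks offsets.
    return TokenSources.getTokenStream(vector, false);
  }
}
```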

getTokenStreamWithOffsets

public static TokenStream getTokenStreamWithOffsets(IndexReader reader,
                                                    int docId,
                                                    String field)
                                             throws IOException
Returns a TokenStream with positions and offsets constructed from field term vectors. If the field has no term vectors, or positions or offsets are not included in the term vector, returns null.

Parameters:
reader - the IndexReader to retrieve term vectors from
docId - the document to retrieve termvectors for
field - the field to retrieve termvectors for
Returns:
a TokenStream, or null if positions and offsets are not available
Throws:
IOException - If there is a low-level I/O error
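Because this method returns null rather than throwing when vector data is incomplete, callers usually pair it with a re-analysis fallback. A sketch under the same assumptions as above (hypothetical field name "contents"):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.highlight.TokenSources;

public class OffsetsStreamExample {
  // Hypothetical helper: prefer the vector-built stream; fall back to
  // re-analyzing the stored field when the method returns null.
  static TokenStream tokenStream(IndexReader reader, int docId,
                                 Analyzer analyzer) throws Exception {
    TokenStream ts =
        TokenSources.getTokenStreamWithOffsets(reader, docId, "contents");
    if (ts == null) {
      // No term vector, or offsets/positions missing: re-analyze the
      // stored content instead.
      Document doc = reader.document(docId);
      ts = TokenSources.getTokenStream(doc, "contents", analyzer);
    }
    return ts;
  }
}
```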

getTokenStream

public static TokenStream getTokenStream(IndexReader reader,
                                         int docId,
                                         String field,
                                         Analyzer analyzer)
                                  throws IOException
Throws:
IOException

getTokenStream

public static TokenStream getTokenStream(Document doc,
                                         String field,
                                         Analyzer analyzer)

getTokenStream

public static TokenStream getTokenStream(String field,
                                         String contents,
                                         Analyzer analyzer)


Copyright © 2000-2013 Apache Software Foundation. All Rights Reserved.