public class TokenSources extends Object
Constructor and Description |
---|
TokenSources() |
Modifier and Type | Method and Description |
---|---|
static TokenStream | getAnyTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer): A convenience method that tries a number of approaches to getting a token stream. |
static TokenStream | getAnyTokenStream(IndexReader reader, int docId, String field, Document doc, Analyzer analyzer): A convenience method that tries to first get a TermPositionVector for the specified docId, then falls back to using the passed-in Document to retrieve the TokenStream. |
static TokenStream | getTokenStream(Document doc, String field, Analyzer analyzer) |
static TokenStream | getTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer) |
static TokenStream | getTokenStream(String field, String contents, Analyzer analyzer) |
static TokenStream | getTokenStream(Terms vector) |
static TokenStream | getTokenStream(Terms tpv, boolean tokenPositionsGuaranteedContiguous): Low level api. |
static TokenStream | getTokenStreamWithOffsets(IndexReader reader, int docId, String field): Returns a TokenStream with positions and offsets constructed from field termvectors. |
public static TokenStream getAnyTokenStream(IndexReader reader, int docId, String field, Document doc, Analyzer analyzer) throws IOException

A convenience method that tries to first get a TermPositionVector for the specified docId, then falls back to using the passed-in Document to retrieve the TokenStream. This is useful when you already have the document, but would prefer to use the vector first.

Parameters:
reader - the IndexReader to use to try and get the vector from
docId - the docId to retrieve
field - the field to retrieve on the document
doc - the document to fall back on
analyzer - the analyzer to use for creating the TokenStream if the vector doesn't exist
Returns:
the TokenStream for the IndexableField on the Document
Throws:
IOException - if there was an error loading

public static TokenStream getAnyTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer) throws IOException

A convenience method that tries a number of approaches to getting a token stream.

Throws:
IOException - if there is a low-level I/O error

public static TokenStream getTokenStream(Terms vector) throws IOException

Throws:
IOException
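As a sketch of how the convenience overloads above might be used (assuming Lucene 4.x, with lucene-core, lucene-analyzers-common and lucene-highlighter on the classpath; the index contents and the "body" field name are illustrative): here the document is indexed without term vectors, so getAnyTokenStream falls back to re-analyzing the stored field value with the supplied analyzer.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.highlight.TokenSources;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class AnyTokenStreamExample {

    // Indexes one document WITHOUT term vectors, so getAnyTokenStream
    // has to fall back to re-analyzing the stored field value.
    public static List<String> tokens() throws IOException {
        RAMDirectory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
        IndexWriter writer =
            new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_47, analyzer));
        Document doc = new Document();
        // The field must be stored for the fallback path to work.
        doc.add(new TextField("body", "quick brown fox", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        TokenStream stream = TokenSources.getAnyTokenStream(reader, 0, "body", analyzer);
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        List<String> result = new ArrayList<String>();
        stream.reset();
        while (stream.incrementToken()) {
            result.add(term.toString());
        }
        stream.close();
        reader.close();
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tokens());
    }
}
```

The resulting stream can be handed to the highlighter (for example `Highlighter.getBestFragments`) exactly as if it had come from a term vector.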
public static TokenStream getTokenStream(Terms tpv, boolean tokenPositionsGuaranteedContiguous) throws IOException

Low level api. Returns a token stream generated from a Terms. This can be used to feed the highlighter with a pre-parsed token stream. The Terms must have offsets available.

In my tests the speeds to recreate 1000 token streams using this method are:
- with TermVector offset-only data stored: 420 milliseconds
- with TermVector offset AND position data stored: 271 milliseconds (nb: timings for TermVector with position data are based on a tokenizer with contiguous positions, no overlaps or gaps)

The cost of not using TermPositionVector to store pre-parsed content, and instead using an analyzer to re-parse the original content:
- reanalyzing the original content: 980 milliseconds

The re-analyze timings will typically vary depending on:
1) the complexity of the analyzer code (the timings above were using a stemmer/lowercaser/stopword combo);
2) the number of other fields (Lucene reads ALL fields off the disk when accessing just one document field, which can cost dear);
3) use of compression on field storage, which could be faster due to compression (less disk IO) or slower (more CPU burn) depending on the content.

Parameters:
tokenPositionsGuaranteedContiguous - true if the token position numbers have no overlaps or gaps. If looking to eke out the last drops of performance, set to true. If in doubt, set to false.
Throws:
IllegalArgumentException - if no offsets are available
IOException
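A sketch of this low-level path (again assuming Lucene 4.x on the classpath; index contents and field name are illustrative): term vectors with positions and offsets are stored at index time, the Terms for one document and field are fetched from the reader, and the token stream is rebuilt without re-analysis. Passing false for tokenPositionsGuaranteedContiguous is the safe default.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Terms;
import org.apache.lucene.search.highlight.TokenSources;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TermsTokenStreamExample {

    public static List<String> tokens() throws IOException {
        RAMDirectory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
        IndexWriter writer =
            new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_47, analyzer));

        // Term vectors must carry offsets for getTokenStream(Terms, boolean);
        // positions are stored too so tokens come back in position order.
        FieldType type = new FieldType(TextField.TYPE_STORED);
        type.setStoreTermVectors(true);
        type.setStoreTermVectorPositions(true);
        type.setStoreTermVectorOffsets(true);

        Document doc = new Document();
        doc.add(new Field("body", "quick brown fox", type));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        Terms tpv = reader.getTermVector(0, "body");
        // false: do not assume contiguous token positions (the safe default).
        TokenStream stream = TokenSources.getTokenStream(tpv, false);
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        List<String> result = new ArrayList<String>();
        stream.reset();
        while (stream.incrementToken()) {
            result.add(term.toString());
        }
        stream.close();
        reader.close();
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tokens());
    }
}
```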
public static TokenStream getTokenStreamWithOffsets(IndexReader reader, int docId, String field) throws IOException

Returns a TokenStream with positions and offsets constructed from field termvectors. If the field has no termvectors, or positions or offsets are not included in the termvector, returns null.

Parameters:
reader - the IndexReader to retrieve term vectors from
docId - the document to retrieve termvectors for
field - the field to retrieve termvectors for
Returns:
a TokenStream, or null if positions and offsets are not available
Throws:
IOException - if there is a low-level I/O error

public static TokenStream getTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer) throws IOException

Throws:
IOException

public static TokenStream getTokenStream(Document doc, String field, Analyzer analyzer)

public static TokenStream getTokenStream(String field, String contents, Analyzer analyzer)
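A self-contained sketch of getTokenStreamWithOffsets (a hypothetical setup, assuming Lucene 4.x): the field is indexed with term vectors carrying both positions and offsets, and each recovered token is reported together with its character offsets. If vectors, positions, or offsets were missing, the call would return null instead.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.highlight.TokenSources;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class OffsetsExample {

    public static List<String> tokensWithOffsets() throws IOException {
        RAMDirectory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
        IndexWriter writer =
            new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_47, analyzer));

        // Both positions and offsets must be in the term vector, otherwise
        // getTokenStreamWithOffsets returns null.
        FieldType type = new FieldType(TextField.TYPE_STORED);
        type.setStoreTermVectors(true);
        type.setStoreTermVectorPositions(true);
        type.setStoreTermVectorOffsets(true);

        Document doc = new Document();
        doc.add(new Field("body", "quick brown fox", type));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        TokenStream stream = TokenSources.getTokenStreamWithOffsets(reader, 0, "body");
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
        List<String> result = new ArrayList<String>();
        stream.reset();
        while (stream.incrementToken()) {
            result.add(term.toString() + "[" + offset.startOffset()
                + "," + offset.endOffset() + "]");
        }
        stream.close();
        reader.close();
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tokensWithOffsets());
    }
}
```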
Copyright © 2000-2014 Apache Software Foundation. All Rights Reserved.