public class NGramTokenizer extends Tokenizer
Unlike NGramTokenFilter, this class sets offsets so that the characters between startOffset and endOffset in the original stream are the same as the term chars.
For example, "abcde" would be tokenized as follows (minGram=2, maxGram=3):

| Term | ab | abc | bc | bcd | cd | cde | de |
|---|---|---|---|---|---|---|---|
| Position increment | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Position length | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Offsets | [0,2[ | [0,3[ | [1,3[ | [1,4[ | [2,4[ | [2,5[ | [3,5[ |
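For illustration, a minimal consumption sketch that reproduces the terms and offsets in the table above; the class and main-method scaffolding are assumptions for the example, while the attribute access and the reset/incrementToken/end/close sequence follow the standard TokenStream workflow:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class NGramTokenizerExample {
  public static void main(String[] args) throws Exception {
    // minGram=2, maxGram=3, matching the table above
    NGramTokenizer tokenizer = new NGramTokenizer(new StringReader("abcde"), 2, 3);
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    OffsetAttribute offsets = tokenizer.addAttribute(OffsetAttribute.class);

    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      // Expected order: ab [0,2[, abc [0,3[, bc [1,3[, ... (increasing start offsets)
      System.out.println(term + " [" + offsets.startOffset() + "," + offsets.endOffset() + "[");
    }
    tokenizer.end();
    tokenizer.close();
  }
}
```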
This tokenizer changed a lot in Lucene 4.4 in order to pre-tokenize the stream before computing n-grams. Additionally, this class does not trim trailing whitespace and emits tokens in a different order: tokens are now emitted by increasing start offset, whereas they used to be emitted by increasing length (which prevented large input streams from being supported). Although highly discouraged, it is still possible to use the old behavior through Lucene43NGramTokenizer.
Nested classes/interfaces inherited from class AttributeSource: AttributeSource.State
| Modifier and Type | Field and Description |
|---|---|
| static int | DEFAULT_MAX_NGRAM_SIZE |
| static int | DEFAULT_MIN_NGRAM_SIZE |
Fields inherited from superclasses: DEFAULT_TOKEN_ATTRIBUTE_FACTORY, DEFAULT_ATTRIBUTE_FACTORY
| Constructor | Description |
|---|---|
| NGramTokenizer(AttributeFactory factory, Reader input, int minGram, int maxGram) | Creates NGramTokenizer with given min and max n-grams. |
| NGramTokenizer(Reader input, int minGram, int maxGram) | Creates NGramTokenizer with given min and max n-grams. |
| NGramTokenizer(Version version, AttributeFactory factory, Reader input, int minGram, int maxGram) | Deprecated. For Version.LUCENE_4_3_0 and before, use Lucene43NGramTokenizer; otherwise use NGramTokenizer(AttributeFactory, Reader, int, int). |
| NGramTokenizer(Version version, Reader input) | Creates NGramTokenizer with default min and max n-grams. |
| NGramTokenizer(Version version, Reader input, int minGram, int maxGram) | Deprecated. For Version.LUCENE_4_3_0 and before, use Lucene43NGramTokenizer; otherwise use NGramTokenizer(Reader, int, int). |
| Modifier and Type | Method and Description |
|---|---|
| void | end() |
| boolean | incrementToken() |
| protected boolean | isTokenChar(int chr) Only collect characters which satisfy this condition. |
| void | reset() |
Methods inherited from class Tokenizer: close, correctOffset, setReader
Methods inherited from class AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
public static final int DEFAULT_MIN_NGRAM_SIZE
public static final int DEFAULT_MAX_NGRAM_SIZE
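When the exact bounds do not matter, these constants can be handed to the three-argument constructor. A small sketch; the wrapper class and method name are hypothetical scaffolding for the example:

```java
import java.io.Reader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;

class DefaultSizedNGrams {
  // Builds a tokenizer using the library's default minimum and maximum n-gram sizes.
  static NGramTokenizer withDefaults(Reader input) {
    return new NGramTokenizer(input,
        NGramTokenizer.DEFAULT_MIN_NGRAM_SIZE,
        NGramTokenizer.DEFAULT_MAX_NGRAM_SIZE);
  }
}
```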
public NGramTokenizer(Reader input, int minGram, int maxGram)
Parameters:
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

@Deprecated
public NGramTokenizer(Version version, Reader input, int minGram, int maxGram)
Deprecated. For Version.LUCENE_4_3_0 and before, use Lucene43NGramTokenizer; otherwise use NGramTokenizer(Reader, int, int).
public NGramTokenizer(AttributeFactory factory, Reader input, int minGram, int maxGram)
Parameters:
factory - AttributeFactory to use
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

@Deprecated
public NGramTokenizer(Version version, AttributeFactory factory, Reader input, int minGram, int maxGram)
Deprecated. For Version.LUCENE_4_3_0 and before, use Lucene43NGramTokenizer; otherwise use NGramTokenizer(AttributeFactory, Reader, int, int).
public final boolean incrementToken() throws IOException
Specified by: incrementToken in class TokenStream
Throws: IOException
protected boolean isTokenChar(int chr)
Only collect characters which satisfy this condition.
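Because isTokenChar(int) is protected, a subclass can pre-tokenize the stream by restricting which code points may be grouped into n-grams, which is the hook behind the 4.4 behavior change described above. A hedged sketch; the subclass name and the letter-or-digit rule are illustrative assumptions:

```java
import java.io.Reader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;

// Illustrative subclass: n-grams are computed only over runs of letters and digits,
// so no gram ever spans whitespace or punctuation.
class AlphanumericNGramTokenizer extends NGramTokenizer {
  AlphanumericNGramTokenizer(Reader input, int minGram, int maxGram) {
    super(input, minGram, maxGram);
  }

  @Override
  protected boolean isTokenChar(int chr) {
    return Character.isLetterOrDigit(chr);
  }
}
```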
public final void end() throws IOException
Overrides: end in class TokenStream
Throws: IOException
public final void reset() throws IOException
Overrides: reset in class Tokenizer
Throws: IOException
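Tying the methods above together, a hedged sketch of the reset/incrementToken/end/close cycle, reusing one instance across several inputs via the inherited setReader; the counting helper is an assumption made for illustration:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;

class NGramCounter {
  // Counts the n-grams produced for each input string, reusing a single tokenizer.
  static int[] countGrams(String... inputs) throws IOException {
    NGramTokenizer tokenizer = new NGramTokenizer(new StringReader(""), 2, 3);
    int[] counts = new int[inputs.length];
    for (int i = 0; i < inputs.length; i++) {
      tokenizer.setReader(new StringReader(inputs[i])); // swap in the next input
      tokenizer.reset();                                // must be called before incrementToken()
      while (tokenizer.incrementToken()) {
        counts[i]++;
      }
      tokenizer.end();   // records the final offset state for this input
      tokenizer.close(); // releases the reader so the tokenizer can be reused via setReader()
    }
    return counts;
  }
}
```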
Copyright © 2000-2014 Apache Software Foundation. All Rights Reserved.