org.apache.lucene.analysis.ngram
Class EdgeNGramTokenizer
java.lang.Object
org.apache.lucene.util.AttributeSource
org.apache.lucene.analysis.TokenStream
org.apache.lucene.analysis.Tokenizer
org.apache.lucene.analysis.ngram.NGramTokenizer
org.apache.lucene.analysis.ngram.EdgeNGramTokenizer
- All Implemented Interfaces:
- Closeable
public class EdgeNGramTokenizer
- extends NGramTokenizer
Tokenizes the input from an edge into n-grams of given size(s). This Tokenizer creates n-grams from the beginning edge of an input token (older versions could also create them from the ending edge).
As of Lucene 4.4, this tokenizer:
- can handle maxGram larger than 1024 chars, but beware that this will result in increased memory usage,
- doesn't trim the input,
- sets position increments equal to 1, instead of 1 for the first token and 0 for all other ones,
- doesn't support backward n-grams anymore,
- supports pre-tokenization,
- correctly handles supplementary characters.
Although highly discouraged, it is still possible to use the old behavior through Lucene43EdgeNGramTokenizer.
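To make the edge-gram behavior concrete, the following is a minimal plain-Java sketch (not the Lucene implementation; the class and method names are illustrative) of how front-edge n-grams between minGram and maxGram are derived from a single token. It counts code points rather than chars, mirroring the tokenizer's correct handling of supplementary characters:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: emulates the front-edge n-grams that an
// edge n-gram tokenizer emits for one token. Gram lengths are measured
// in Unicode code points so supplementary characters count as one unit.
public class EdgeNGramSketch {
    public static List<String> edgeNGrams(String token, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        int codePoints = token.codePointCount(0, token.length());
        for (int n = minGram; n <= maxGram && n <= codePoints; n++) {
            // offsetByCodePoints converts a code-point count into a char index
            int end = token.offsetByCodePoints(0, n);
            grams.add(token.substring(0, end));
        }
        return grams;
    }

    public static void main(String[] args) {
        // "apache" with minGram=1, maxGram=3 -> [a, ap, apa]
        System.out.println(edgeNGrams("apache", 1, 3));
    }
}
```

Note that every gram starts at the beginning of the token; only its length varies, which is what distinguishes edge n-grams from the sliding-window grams produced by the parent NGramTokenizer.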
Fields inherited from class org.apache.lucene.analysis.Tokenizer:
- input
Methods inherited from class org.apache.lucene.util.AttributeSource:
- addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState
DEFAULT_MAX_GRAM_SIZE
public static final int DEFAULT_MAX_GRAM_SIZE
- See Also:
- Constant Field Values
DEFAULT_MIN_GRAM_SIZE
public static final int DEFAULT_MIN_GRAM_SIZE
- See Also:
- Constant Field Values
EdgeNGramTokenizer
public EdgeNGramTokenizer(Version version,
Reader input,
int minGram,
int maxGram)
- Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range.
- Parameters:
  - version - the Lucene match version
  - input - Reader holding the input to be tokenized
  - minGram - the smallest n-gram to generate
  - maxGram - the largest n-gram to generate
EdgeNGramTokenizer
public EdgeNGramTokenizer(Version version,
AttributeSource.AttributeFactory factory,
Reader input,
int minGram,
int maxGram)
- Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range.
- Parameters:
  - version - the Lucene match version
  - factory - AttributeSource.AttributeFactory to use
  - input - Reader holding the input to be tokenized
  - minGram - the smallest n-gram to generate
  - maxGram - the largest n-gram to generate
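Assuming Lucene 4.4 is on the classpath, a typical way to construct the tokenizer and consume its tokens looks like the sketch below (the demo class and helper method names are illustrative, not part of the API). It follows the standard TokenStream workflow: reset(), then incrementToken() in a loop, then end() and close():

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class EdgeNGramDemo {
    // Collects the edge n-grams that the tokenizer emits for the given text.
    public static List<String> tokenize(String text, int minGram, int maxGram) throws Exception {
        EdgeNGramTokenizer tokenizer =
            new EdgeNGramTokenizer(Version.LUCENE_44, new StringReader(text), minGram, maxGram);
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        List<String> grams = new ArrayList<>();
        tokenizer.reset();                      // must be called before incrementToken()
        while (tokenizer.incrementToken()) {
            grams.add(term.toString());
        }
        tokenizer.end();
        tokenizer.close();
        return grams;
    }

    public static void main(String[] args) throws Exception {
        // e.g. "lucene" with minGram=2, maxGram=4 yields the front grams lu, luc, luce
        System.out.println(tokenize("lucene", 2, 4));
    }
}
```

Skipping reset() or reusing the tokenizer without it throws an IllegalStateException in this workflow, so the call order shown above matters.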
Copyright © 2000-2013 Apache Software Foundation. All Rights Reserved.