Deprecated Field | Description
---|---
org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter.DEFAULT_MAX_GRAM_SIZE | since 7.4 - this value will be required.
org.apache.lucene.analysis.ngram.NGramTokenFilter.DEFAULT_MAX_NGRAM_SIZE | since 7.4 - this value will be required.
org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter.DEFAULT_MIN_GRAM_SIZE | since 7.4 - this value will be required.
org.apache.lucene.analysis.ngram.NGramTokenFilter.DEFAULT_MIN_NGRAM_SIZE | since 7.4 - this value will be required.
org.apache.lucene.analysis.core.StopAnalyzer.ENGLISH_STOP_WORDS_SET |
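In practice these deprecations mean the gram sizes should be passed explicitly rather than read from the DEFAULT_* constants. A minimal sketch of the explicit style, assuming the four-argument constructor that 7.4 introduced alongside these deprecations (the trailing boolean is its preserveOriginal flag):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.NGramTokenFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ExplicitGramSizes {
  public static void main(String[] args) throws Exception {
    Tokenizer source = new WhitespaceTokenizer();
    source.setReader(new StringReader("lucene"));

    // Pass minGram/maxGram explicitly instead of relying on the deprecated
    // defaults (1 and 2 for NGramTokenFilter; EdgeNGramTokenFilter's were 1 and 1).
    try (TokenStream ngrams = new NGramTokenFilter(source, 1, 2, false)) {
      CharTermAttribute term = ngrams.addAttribute(CharTermAttribute.class);
      ngrams.reset();
      while (ngrams.incrementToken()) {
        System.out.println(term); // l, lu, u, uc, c, ce, ...
      }
      ngrams.end();
    }
  }
}
```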
Deprecated Method | Description
---|---
org.apache.lucene.analysis.util.CharTokenizer.fromSeparatorCharPredicate(AttributeFactory, IntPredicate, IntUnaryOperator) | Normalization should be done in a subsequent TokenFilter
org.apache.lucene.analysis.util.CharTokenizer.fromSeparatorCharPredicate(IntPredicate, IntUnaryOperator) | Normalization should be done in a subsequent TokenFilter
org.apache.lucene.analysis.util.CharTokenizer.fromTokenCharPredicate(AttributeFactory, IntPredicate, IntUnaryOperator) | Normalization should be done in a subsequent TokenFilter
org.apache.lucene.analysis.util.CharTokenizer.fromTokenCharPredicate(IntPredicate, IntUnaryOperator) | Normalization should be done in a subsequent TokenFilter
org.apache.lucene.analysis.util.CharTokenizer.normalize(int) | Normalization should be done in a subsequent TokenFilter