public final class ClassicAnalyzer extends StopwordAnalyzerBase

Filters ClassicTokenizer with ClassicFilter, LowerCaseFilter and StopFilter, using a list of English stop words.

ClassicAnalyzer was named StandardAnalyzer in Lucene versions prior to 3.1.
As of 3.1, StandardAnalyzer implements Unicode text segmentation, as specified by UAX#29.
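For orientation, a minimal usage sketch (the field name "body" and the sample text are illustrative; it assumes the module containing ClassicAnalyzer is on the classpath):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.ClassicAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ClassicAnalyzerDemo {
  public static void main(String[] args) throws Exception {
    try (Analyzer analyzer = new ClassicAnalyzer();
         TokenStream ts = analyzer.tokenStream("body", "The Quick Brown Fox Jumped Over The Lazy Dog")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();                             // required before the first incrementToken()
      while (ts.incrementToken()) {
        System.out.println(term.toString());  // lower-cased terms, English stop words removed
      }
      ts.end();
    }
  }
}
```

With the default stop word set, common words such as "the" are dropped and the remaining terms are emitted in lower case.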
Nested classes inherited from class Analyzer: Analyzer.ReuseStrategy, Analyzer.TokenStreamComponents

Field Summary

| Modifier and Type | Field and Description |
|---|---|
| static int | DEFAULT_MAX_TOKEN_LENGTH - Default maximum allowed token length |
| static CharArraySet | STOP_WORDS_SET - An unmodifiable set containing some common English words that are usually not useful for searching. |

Fields inherited from class StopwordAnalyzerBase: stopwords
Fields inherited from class Analyzer: GLOBAL_REUSE_STRATEGY, PER_FIELD_REUSE_STRATEGY

Constructor Summary

| Constructor and Description |
|---|
| ClassicAnalyzer() - Builds an analyzer with the default stop words (STOP_WORDS_SET). |
| ClassicAnalyzer(CharArraySet stopWords) - Builds an analyzer with the given stop words. |
| ClassicAnalyzer(Reader stopwords) - Builds an analyzer with the stop words from the given reader. |
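A brief sketch of the three construction paths above (the stop words "foo" and "bar" and the reader contents are made-up examples; package locations are those of recent Lucene releases):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.standard.ClassicAnalyzer;

public class ClassicAnalyzerConstruction {
  public static void main(String[] args) throws IOException {
    // Default English stop words (STOP_WORDS_SET)
    ClassicAnalyzer withDefaults = new ClassicAnalyzer();

    // Caller-supplied stop words; 'true' makes the lookup case-insensitive
    CharArraySet custom = new CharArraySet(Arrays.asList("foo", "bar"), true);
    ClassicAnalyzer withSet = new ClassicAnalyzer(custom);

    // Stop words read from a Reader (see WordlistLoader.getWordSet(Reader))
    ClassicAnalyzer fromReader = new ClassicAnalyzer(new StringReader("foo\nbar\n"));

    withDefaults.close();
    withSet.close();
    fromReader.close();
  }
}
```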
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| protected Analyzer.TokenStreamComponents | createComponents(String fieldName) |
| int | getMaxTokenLength() |
| protected TokenStream | normalize(String fieldName, TokenStream in) |
| void | setMaxTokenLength(int length) - Set maximum allowed token length. |
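setMaxTokenLength(int) and getMaxTokenLength() manage the analyzer's token-length limit, which starts at DEFAULT_MAX_TOKEN_LENGTH. A small sketch (the value 48 is an arbitrary example):

```java
import org.apache.lucene.analysis.standard.ClassicAnalyzer;

public class MaxTokenLengthDemo {
  public static void main(String[] args) {
    ClassicAnalyzer analyzer = new ClassicAnalyzer();

    // Reports DEFAULT_MAX_TOKEN_LENGTH until it is changed
    System.out.println("default limit: " + analyzer.getMaxTokenLength());

    // Tighten the maximum allowed token length
    analyzer.setMaxTokenLength(48);
    System.out.println("current limit: " + analyzer.getMaxTokenLength());

    analyzer.close();
  }
}
```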
Methods inherited from class StopwordAnalyzerBase: getStopwordSet, loadStopwordSet, loadStopwordSet, loadStopwordSet
Methods inherited from class Analyzer: attributeFactory, close, getOffsetGap, getPositionIncrementGap, getReuseStrategy, getVersion, initReader, initReaderForNormalization, normalize, setVersion, tokenStream, tokenStream

Field Detail

public static final int DEFAULT_MAX_TOKEN_LENGTH
Default maximum allowed token length

public static final CharArraySet STOP_WORDS_SET
An unmodifiable set containing some common English words that are usually not useful for searching.
Constructor Detail

public ClassicAnalyzer(CharArraySet stopWords)
Builds an analyzer with the given stop words.
Parameters: stopWords - stop words

public ClassicAnalyzer()
Builds an analyzer with the default stop words (STOP_WORDS_SET).

public ClassicAnalyzer(Reader stopwords) throws IOException
Builds an analyzer with the stop words from the given reader.
Parameters: stopwords - Reader to read stop words from
Throws: IOException
See Also: WordlistLoader.getWordSet(Reader)

Method Detail

public void setMaxTokenLength(int length)
Set maximum allowed token length.

public int getMaxTokenLength()
See Also: setMaxTokenLength(int)

protected Analyzer.TokenStreamComponents createComponents(String fieldName)
Specified by: createComponents in class Analyzer

protected TokenStream normalize(String fieldName, TokenStream in)
Overrides: normalize in class Analyzer
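The class description states that ClassicAnalyzer filters a ClassicTokenizer with ClassicFilter, LowerCaseFilter and StopFilter. As a hedged sketch of what an equivalent createComponents(String) chain can look like in a custom Analyzer (this mirrors the documented filter order, not the class's actual source; filter package names vary between Lucene versions):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.ClassicAnalyzer;
import org.apache.lucene.analysis.standard.ClassicFilter;
import org.apache.lucene.analysis.standard.ClassicTokenizer;

public class ClassicLikeAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    // Tokenize with the classic (pre-3.1 StandardTokenizer) grammar
    ClassicTokenizer source = new ClassicTokenizer();
    source.setMaxTokenLength(ClassicAnalyzer.DEFAULT_MAX_TOKEN_LENGTH);

    // Strip possessives and acronym dots, lower-case, then drop English stop words
    TokenStream sink = new ClassicFilter(source);
    sink = new LowerCaseFilter(sink);
    sink = new StopFilter(sink, ClassicAnalyzer.STOP_WORDS_SET);
    return new TokenStreamComponents(source, sink);
  }
}
```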
Copyright © 2000-2021 Apache Software Foundation. All Rights Reserved.