Uses of Class
org.apache.lucene.analysis.Tokenizer
-
Packages that use Tokenizer

org.apache.lucene.analysis
    Text analysis.

org.apache.lucene.analysis.standard
    Fast, general-purpose grammar-based tokenizer. StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
-
Uses of Tokenizer in org.apache.lucene.analysis
Fields in org.apache.lucene.analysis declared as Tokenizer

protected Tokenizer Analyzer.TokenStreamComponents.source
    Original source of the tokens.

Methods in org.apache.lucene.analysis that return Tokenizer

Tokenizer Analyzer.TokenStreamComponents.getTokenizer()
    Returns the component's Tokenizer.

Constructors in org.apache.lucene.analysis with parameters of type Tokenizer

TokenStreamComponents(Tokenizer source)
    Creates a new Analyzer.TokenStreamComponents instance.

TokenStreamComponents(Tokenizer source, TokenStream result)
    Creates a new Analyzer.TokenStreamComponents instance.

-
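The TokenStreamComponents constructors above are typically called from a custom Analyzer's createComponents method, which wires a Tokenizer (the original token source) into a filter chain. A minimal sketch, assuming a recent Lucene version (5.x or later, where createComponents takes only the field name and LowerCaseFilter lives in org.apache.lucene.analysis); MyAnalyzer is an illustrative name, not part of the API:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;

// Hypothetical analyzer showing the two-argument
// TokenStreamComponents(Tokenizer source, TokenStream result) constructor.
public class MyAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        // The Tokenizer is the original source of the tokens.
        Tokenizer source = new StandardTokenizer();
        // The result is the end of the filter chain built on top of it.
        TokenStream result = new LowerCaseFilter(source);
        return new TokenStreamComponents(source, result);
    }
}
```

The single-argument constructor is equivalent to passing the Tokenizer as both source and result, i.e. an analyzer with no token filters.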
Uses of Tokenizer in org.apache.lucene.analysis.standard
Subclasses of Tokenizer in org.apache.lucene.analysis.standard

class StandardTokenizer
    A grammar-based tokenizer constructed with JFlex.
-
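As a usage sketch, StandardTokenizer can be driven directly: set a Reader, then iterate tokens via the attribute API. This assumes a recent Lucene version (5.x or later, where StandardTokenizer has a no-argument constructor and the reader is supplied with setReader); the class and method names below are illustrative helpers, not part of Lucene:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenizeExample {
    // Collects the terms StandardTokenizer produces for the given text.
    public static List<String> tokenize(String text) throws Exception {
        List<String> tokens = new ArrayList<>();
        try (Tokenizer tokenizer = new StandardTokenizer()) {
            tokenizer.setReader(new StringReader(text));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            tokenizer.reset();                    // mandatory before incrementToken()
            while (tokenizer.incrementToken()) {
                tokens.add(term.toString());
            }
            tokenizer.end();                      // mandatory after the last token
        }
        return tokens;
    }

    public static void main(String[] args) throws Exception {
        // UAX #29 word-break rules drop the punctuation but keep case:
        System.out.println(tokenize("Hello, Lucene world!"));
        // → [Hello, Lucene, world]
    }
}
```

Note that StandardTokenizer itself does not lowercase or remove stop words; those steps belong to token filters chained after it.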