Uses of Class
org.apache.lucene.analysis.Tokenizer
Packages that use Tokenizer

  org.apache.lucene.analysis
      Text analysis.

  org.apache.lucene.analysis.standard
      Fast, general-purpose grammar-based tokenizer StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
Uses of Tokenizer in org.apache.lucene.analysis
Methods in org.apache.lucene.analysis that return Tokenizer

  Tokenizer           TokenizerFactory.create()
      Creates a TokenStream of the specified input using the default attribute factory.

  abstract Tokenizer  TokenizerFactory.create(AttributeFactory factory)
      Creates a TokenStream of the specified input using the given AttributeFactory.

Constructors in org.apache.lucene.analysis with parameters of type Tokenizer

  TokenStreamComponents(Tokenizer tokenizer)
      Creates a new Analyzer.TokenStreamComponents from a Tokenizer.

  TokenStreamComponents(Tokenizer tokenizer, TokenStream result)
      Creates a new Analyzer.TokenStreamComponents instance.
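The factory method and the constructors above typically meet inside Analyzer.createComponents. The following is a minimal, hypothetical sketch (the analyzer class name and the LowerCaseFilter stage are illustrative choices, and it assumes the "standard" SPI name resolves to a tokenizer factory on the classpath): a TokenizerFactory builds the Tokenizer with the default attribute factory via create(), and the Tokenizer is then wrapped in Analyzer.TokenStreamComponents.

  import java.util.HashMap;
  import java.util.Map;

  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.LowerCaseFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.Tokenizer;
  import org.apache.lucene.analysis.TokenizerFactory;

  // Hypothetical analyzer: builds its Tokenizer through a TokenizerFactory
  // and hands it to Analyzer.TokenStreamComponents.
  public class FactoryBackedAnalyzer extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
      // Look up a registered factory by SPI name and create the Tokenizer
      // with the default attribute factory (TokenizerFactory.create()).
      Map<String, String> args = new HashMap<>();
      Tokenizer tokenizer = TokenizerFactory.forName("standard", args).create();

      // Optional filter chain; the Tokenizer is the source, the filter the sink.
      TokenStream result = new LowerCaseFilter(tokenizer);
      return new TokenStreamComponents(tokenizer, result);
    }
  }

Passing the Tokenizer as the first constructor argument lets the Analyzer set a new Reader on it for each input, while the second argument is the end of the filter chain that consumers actually read from.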
Uses of Tokenizer in org.apache.lucene.analysis.standard
Subclasses of Tokenizer in org.apache.lucene.analysis.standard

  class  StandardTokenizer
      A grammar-based tokenizer constructed with JFlex.
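A short, hypothetical usage sketch (class name and sample text are invented) showing the usual reset()/incrementToken()/end() consumption pattern on a StandardTokenizer:

  import java.io.IOException;
  import java.io.StringReader;

  import org.apache.lucene.analysis.standard.StandardTokenizer;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

  // Hypothetical demo: tokenize a string with StandardTokenizer and print
  // the terms produced by the UAX #29 word-break rules.
  public class StandardTokenizerDemo {
    public static void main(String[] args) throws IOException {
      try (StandardTokenizer tokenizer = new StandardTokenizer()) {
        tokenizer.setReader(new StringReader("Lucene's StandardTokenizer splits on Unicode word breaks."));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);

        tokenizer.reset();                      // required before incrementToken()
        while (tokenizer.incrementToken()) {
          System.out.println(term.toString());  // one token per iteration
        }
        tokenizer.end();                        // close() handled by try-with-resources
      }
    }
  }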