Package | Description
---|---
org.apache.lucene.analysis | Text analysis.
org.apache.lucene.analysis.standard | Fast, general-purpose grammar-based tokenizer. StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
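The Word Break rules of UAX #29 that StandardTokenizer implements are also available in the JDK via `java.text.BreakIterator`, so their effect can be sketched without Lucene on the classpath. This is a minimal stand-in, not Lucene's implementation; the class name `WordBreakDemo` and the letter-or-digit filter are illustrative assumptions.

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class WordBreakDemo {
    // Splits text on UAX #29-style word boundaries using the JDK's
    // word BreakIterator, keeping only chunks that contain a letter
    // or digit (boundaries also fall around spaces and punctuation).
    static List<String> words(String text) {
        List<String> out = new ArrayList<>();
        BreakIterator it = BreakIterator.getWordInstance(Locale.ROOT);
        it.setText(text);
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            String chunk = text.substring(start, end);
            if (chunk.codePoints().anyMatch(Character::isLetterOrDigit)) {
                out.add(chunk);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(words("Hello, Lucene!"));
    }
}
```

Like StandardTokenizer, this drops punctuation between words but keeps the word text itself unchanged.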
Constructor | Description
---|---
TokenStreamComponents(Tokenizer tokenizer) | Creates a new Analyzer.TokenStreamComponents from a Tokenizer.
TokenStreamComponents(Tokenizer tokenizer, TokenStream result) | Creates a new Analyzer.TokenStreamComponents instance.
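The two constructors above correspond to the two ways a custom Analyzer typically builds its components: from a bare Tokenizer, or from a Tokenizer plus a filtered result stream. A minimal sketch, assuming Lucene 8.x is on the classpath; the class name `MyAnalyzer` and the helper `analyze` are illustrative, not part of the Lucene API.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class MyAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        // Two-arg form: the source tokenizer plus the end of the filter chain.
        TokenStream result = new LowerCaseFilter(source);
        return new TokenStreamComponents(source, result);
        // One-arg form (no filters) would be:
        // return new TokenStreamComponents(source);
    }

    // Illustrative helper: runs the analyzer and collects the produced terms.
    static List<String> analyze(Analyzer analyzer, String text) throws IOException {
        List<String> terms = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream("field", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                terms.add(term.toString());
            }
            ts.end();
        }
        return terms;
    }
}
```

Passing the tokenizer separately from the result stream lets the Analyzer call setReader on the source while consumers read from the end of the filter chain.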
Modifier and Type | Class and Description
---|---
class | StandardTokenizer: A grammar-based tokenizer constructed with JFlex.
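StandardTokenizer can also be driven directly, without wrapping it in an Analyzer. A short sketch, assuming Lucene 8.x on the classpath; the class name `StandardTokenizerDemo` and the `tokenize` helper are illustrative.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StandardTokenizerDemo {
    // Feeds text to a StandardTokenizer and returns the emitted terms in order.
    static List<String> tokenize(String text) throws IOException {
        List<String> terms = new ArrayList<>();
        try (StandardTokenizer tok = new StandardTokenizer()) {
            tok.setReader(new StringReader(text));
            CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
            tok.reset();                    // required before the first incrementToken()
            while (tok.incrementToken()) {
                terms.add(term.toString());
            }
            tok.end();                      // finalize offsets
        }
        return terms;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tokenize("Hello, Lucene!"));
    }
}
```

Note that StandardTokenizer only segments the text: punctuation between words is dropped, but no lowercasing or stop-word removal happens unless filters are added on top.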
Copyright © 2000-2019 Apache Software Foundation. All Rights Reserved.