Package | Description |
---|---|
org.apache.lucene.analysis | Text analysis. |
org.apache.lucene.analysis.standard | Fast, general-purpose grammar-based tokenizer. StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. |
Modifier and Type | Field and Description |
---|---|
protected Tokenizer | Analyzer.TokenStreamComponents.source: Original source of the tokens. |
Modifier and Type | Method and Description |
---|---|
Tokenizer | Analyzer.TokenStreamComponents.getTokenizer(): Returns the component's Tokenizer. |
Constructor and Description |
---|
TokenStreamComponents(Tokenizer source): Creates a new Analyzer.TokenStreamComponents instance. |
TokenStreamComponents(Tokenizer source, TokenStream result): Creates a new Analyzer.TokenStreamComponents instance. |
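The two constructors above are typically used inside a custom Analyzer's createComponents method: the single-argument form wraps a bare Tokenizer, while the two-argument form additionally pairs it with a filtered TokenStream. A minimal sketch, assuming Lucene 7.x on the classpath (the class name MyAnalyzer is hypothetical):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;

// Hypothetical custom Analyzer illustrating TokenStreamComponents.
public class MyAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    // The source Tokenizer produces the raw token stream.
    Tokenizer source = new StandardTokenizer();
    // The result TokenStream is the source plus any filters,
    // here a lower-casing filter.
    TokenStream result = new LowerCaseFilter(source);
    // Two-argument constructor: pair source with the filtered result.
    return new TokenStreamComponents(source, result);
    // With no filters, the one-argument form would suffice:
    // return new TokenStreamComponents(source);
  }
}
```

The source reference is retained separately from the result so the Analyzer can call setReader on the Tokenizer when the components are reused for new input.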
Modifier and Type | Class and Description |
---|---|
class | StandardTokenizer: A grammar-based tokenizer constructed with JFlex. |
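StandardTokenizer can also be driven directly, outside an Analyzer, by following the standard TokenStream consumer workflow (setReader, reset, incrementToken, end, close). A minimal sketch, assuming Lucene 7.x on the classpath (the class name TokenizeDemo is hypothetical):

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenizeDemo {
  public static void main(String[] args) throws IOException {
    StandardTokenizer tokenizer = new StandardTokenizer();
    tokenizer.setReader(new StringReader("Hello, Lucene!"));
    // Attribute view onto the current token's text.
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    tokenizer.reset();                    // mandatory before incrementToken()
    while (tokenizer.incrementToken()) {
      System.out.println(term.toString());
    }
    tokenizer.end();                      // finalize offsets
    tokenizer.close();
  }
}
```

Per the UAX #29 word-break rules the tokenizer implements, punctuation such as the comma and exclamation mark is discarded, so the loop above emits the word tokens only.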
Copyright © 2000-2018 Apache Software Foundation. All Rights Reserved.