| Package | Description |
|---|---|
| org.apache.lucene.analysis.standard | Fast, general-purpose grammar-based tokenizer StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. |
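
A minimal sketch of driving StandardTokenizer directly over a short string, assuming a Lucene 8.x-era classpath; the sample text is illustrative only:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StandardTokenizerDemo {
    public static void main(String[] args) throws IOException {
        // StandardTokenizer splits text at UAX #29 word boundaries; lower-casing
        // and stop-word removal are done by separate filters (see StandardAnalyzer below).
        StandardTokenizer tokenizer = new StandardTokenizer();
        tokenizer.setReader(new StringReader("Unicode text segmentation, e.g. UAX #29."));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString());
        }
        tokenizer.end();
        tokenizer.close();
    }
}
```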
| Modifier and Type | Class and Description |
|---|---|
| class | StandardAnalyzer: Filters StandardTokenizer with LowerCaseFilter and StopFilter, using a configurable list of stop words. |
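
A minimal sketch of the analysis chain described above, assuming a Lucene 8.x-era classpath; the field name "body" and the sample text are hypothetical:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StandardAnalyzerDemo {
    public static void main(String[] args) throws IOException {
        // StandardAnalyzer chains StandardTokenizer with LowerCaseFilter and StopFilter;
        // the stop-word set is configurable via the constructor (the no-arg default
        // varies by Lucene version).
        try (Analyzer analyzer = new StandardAnalyzer();
             TokenStream stream = analyzer.tokenStream("body",
                     new StringReader("The Quick Brown Fox"))) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                System.out.println(term.toString()); // e.g. "the", "quick", "brown", "fox"
            }
            stream.end();
        }
    }
}
```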
Copyright © 2000-2021 Apache Software Foundation. All Rights Reserved.