Uses of Package
org.apache.lucene.analysis

Packages that use org.apache.lucene.analysis
org.apache.lucene.analysis API and code to convert text into indexable/searchable tokens. 
org.apache.lucene.analysis.standard The org.apache.lucene.analysis.standard package contains three fast grammar-based tokenizers constructed with JFlex. 
org.apache.lucene.collation CollationKeyFilter converts each token into its binary CollationKey using the provided Collator, and then encodes the CollationKey as a String using IndexableBinaryStringTools, to allow it to be stored as an index term (see the sketch after this table). 
org.apache.lucene.document The logical representation of a Document for indexing and searching. 
org.apache.lucene.index Code to maintain and access indices. 
org.apache.lucene.queryParser A simple query parser implemented with JavaCC. 
org.apache.lucene.search Code to search indices. 
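For illustration, a CollationKeyFilter is typically chained after a Tokenizer so that each emitted term is replaced by its encoded collation key before indexing. The following is a minimal sketch assuming the Lucene 3.x APIs documented here (in particular a CollationKeyFilter(TokenStream, Collator) constructor); the French locale and the input text are only examples.

import java.io.StringReader;
import java.text.Collator;
import java.util.Locale;

import org.apache.lucene.analysis.KeywordTokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.collation.CollationKeyFilter;

public class CollationKeyDemo {
    public static void main(String[] args) throws Exception {
        Collator collator = Collator.getInstance(Locale.FRANCE);
        // Treat the whole input as one token, then replace it with its
        // CollationKey encoded as a String, so that index-time term order
        // follows the Collator's sort order.
        TokenStream ts = new CollationKeyFilter(
                new KeywordTokenizer(new StringReader("pêche")), collator);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
            // The term text is the binary collation key encoded via
            // IndexableBinaryStringTools; it is not human-readable.
            System.out.println(term.toString());
        }
        ts.end();
        ts.close();
    }
}

Indexing and querying must use the same Collator (including locale and strength); otherwise the encoded keys will not compare consistently.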
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.analysis
Analyzer
          An Analyzer builds TokenStreams, which analyze text.
BaseCharFilter
          Base utility class for implementing a CharFilter.
CharArrayMap
          A simple class that stores key Strings as char[]'s in a hash table.
CharArrayMap.EntryIterator
          Public iterator class, exposed so that users can access its efficient methods.
CharArrayMap.EntrySet
          Public EntrySet class, exposed so that users can access its efficient methods.
CharArraySet
          A simple class that stores Strings as char[]'s in a hash table.
CharFilter
          Subclasses of CharFilter can be chained to filter CharStream.
CharStream
          CharStream adds CharStream.correctOffset(int) functionality over Reader.
CharTokenizer
          An abstract base class for simple, character-oriented tokenizers.
FilteringTokenFilter
          Abstract base class for TokenFilters that may remove tokens.
LetterTokenizer
          A LetterTokenizer is a tokenizer that divides text at non-letters.
NormalizeCharMap
          Holds a map of String input to String output, to be used with MappingCharFilter.
NumericTokenStream
          Expert: This class provides a TokenStream for indexing numeric values that can be used by NumericRangeQuery or NumericRangeFilter (see the second sketch after this list).
ReusableAnalyzerBase
          A convenience subclass of Analyzer that makes it easy to implement TokenStream reuse (a usage sketch follows this list).
ReusableAnalyzerBase.TokenStreamComponents
          This class encapsulates the outer components of a token stream.
StopwordAnalyzerBase
          Base class for Analyzers that need to make use of stopword sets.
TeeSinkTokenFilter.SinkFilter
          A filter that decides which AttributeSource states to store in the sink.
TeeSinkTokenFilter.SinkTokenStream
           
Token
          A Token is an occurrence of a term from the text of a field.
TokenFilter
          A TokenFilter is a TokenStream whose input is another TokenStream.
Tokenizer
          A Tokenizer is a TokenStream whose input is a Reader.
TokenStream
          A TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text.
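To show how several of the classes above fit together, here is a small sketch of an Analyzer built on ReusableAnalyzerBase and ReusableAnalyzerBase.TokenStreamComponents: a Tokenizer feeds two TokenFilters, with stopwords held in a CharArraySet, and the resulting TokenStream is then consumed. It assumes the Lucene 3.x APIs documented here; the Version constant, field name, and stopword list are illustrative.

import java.io.Reader;
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.ReusableAnalyzerBase;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class SimpleStopAnalyzer extends ReusableAnalyzerBase {
    // Stopwords stored as char[]'s for cheap, allocation-free lookups.
    private final CharArraySet stopwords =
        new CharArraySet(Version.LUCENE_35, Arrays.asList("the", "a", "of"), true);

    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        // Tokenizer -> TokenFilter chain; the components are cached and
        // reused per thread by ReusableAnalyzerBase.
        Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_35, reader);
        TokenStream result = new LowerCaseFilter(Version.LUCENE_35, source);
        result = new StopFilter(Version.LUCENE_35, result, stopwords);
        return new TokenStreamComponents(source, result);
    }

    public static void main(String[] args) throws Exception {
        SimpleStopAnalyzer analyzer = new SimpleStopAnalyzer();
        TokenStream ts = analyzer.reusableTokenStream("body", new StringReader("The Quick Fox"));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
            System.out.println(term.toString()); // prints "quick" then "fox"
        }
        ts.end();
        ts.close();
    }
}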
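NumericTokenStream, flagged "Expert" above, is usually wrapped by the higher-level NumericField, but it can also be handed directly to a Field so that the value is indexed as trie-encoded terms matchable by NumericRangeQuery or NumericRangeFilter. A minimal sketch, again assuming the Lucene 3.x APIs documented here; the field name and value are illustrative.

import org.apache.lucene.analysis.NumericTokenStream;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class NumericTokenStreamDemo {
    public static void main(String[] args) {
        Document doc = new Document();
        // Index the int value 42 under "price" as trie-encoded terms; a
        // NumericRangeQuery/NumericRangeFilter on the same field and
        // precision step can later match it.
        NumericTokenStream stream = new NumericTokenStream().setIntValue(42);
        Field field = new Field("price", stream);
        field.setOmitNorms(true); // norms carry no useful signal for numeric terms
        doc.add(field);
    }
}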
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.analysis.standard
Analyzer
          An Analyzer builds TokenStreams, which analyze text.
ReusableAnalyzerBase
          A convenience subclass of Analyzer that makes it easy to implement TokenStream reuse.
ReusableAnalyzerBase.TokenStreamComponents
          This class encapsulates the outer components of a token stream.
StopwordAnalyzerBase
          Base class for Analyzers that need to make use of stopword sets.
TokenFilter
          A TokenFilter is a TokenStream whose input is another TokenStream.
Tokenizer
          A Tokenizer is a TokenStream whose input is a Reader.
TokenStream
          A TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text.
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.collation
Analyzer
          An Analyzer builds TokenStreams, which analyze text.
TokenFilter
          A TokenFilter is a TokenStream whose input is another TokenStream.
TokenStream
          A TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text.
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.document
TokenStream
          A TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text.
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.index
Analyzer
          An Analyzer builds TokenStreams, which analyze text.
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.queryParser
Analyzer
          An Analyzer builds TokenStreams, which analyze text.
 

Classes in org.apache.lucene.analysis used by org.apache.lucene.search
Analyzer
          An Analyzer builds TokenStreams, which analyze text.
 



Copyright © 2000-2011 Apache Software Foundation. All Rights Reserved.