Class Analyzer
- java.lang.Object
  - org.apache.lucene.analysis.Analyzer

All Implemented Interfaces:
Closeable, AutoCloseable

Direct Known Subclasses:
AnalyzerWrapper, StopwordAnalyzerBase
public abstract class Analyzer extends Object implements Closeable
An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text.

In order to define what analysis is done, subclasses must define their Analyzer.TokenStreamComponents in createComponents(String). The components are then reused in each call to tokenStream(String, Reader).

Simple example:
Analyzer analyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer source = new FooTokenizer();
    TokenStream filter = new FooFilter(source);
    filter = new BarFilter(filter);
    return new TokenStreamComponents(source, filter);
  }

  @Override
  protected TokenStream normalize(TokenStream in) {
    // Assuming FooFilter is about normalization and BarFilter is about
    // stemming, only FooFilter should be applied
    return new FooFilter(in);
  }
};
For more examples, see the Analysis package documentation.

For some concrete implementations bundled with Lucene, look in the analysis modules:
- Common: Analyzers for indexing content in different languages and domains.
- ICU: Exposes functionality from ICU to Apache Lucene.
- Kuromoji: Morphological analyzer for Japanese text.
- Morfologik: Dictionary-driven lemmatization for the Polish language.
- Phonetic: Analysis for indexing phonetic signatures (for sounds-alike search).
- Smart Chinese: Analyzer for Simplified Chinese, which indexes words.
- Stempel: Algorithmic Stemmer for the Polish Language.
Since:
3.1
Nested Class Summary
- static class Analyzer.ReuseStrategy
  Strategy defining how TokenStreamComponents are reused per call to tokenStream(String, java.io.Reader).
- static class Analyzer.TokenStreamComponents
  This class encapsulates the outer components of a token stream.
Field Summary

- static Analyzer.ReuseStrategy GLOBAL_REUSE_STRATEGY
  A predefined Analyzer.ReuseStrategy that reuses the same components for every field.
- static Analyzer.ReuseStrategy PER_FIELD_REUSE_STRATEGY
  A predefined Analyzer.ReuseStrategy that reuses components per-field by maintaining a Map of TokenStreamComponents per field name.
Constructor Summary

- protected Analyzer()
  Create a new Analyzer, reusing the same set of components per-thread across calls to tokenStream(String, Reader).
- protected Analyzer(Analyzer.ReuseStrategy reuseStrategy)
  Expert: create a new Analyzer with a custom Analyzer.ReuseStrategy.
Method Summary

- protected AttributeFactory attributeFactory(String fieldName)
  Return the AttributeFactory to be used for analysis and normalization on the given field name.
- void close()
  Frees persistent resources used by this Analyzer.
- protected abstract Analyzer.TokenStreamComponents createComponents(String fieldName)
  Creates a new Analyzer.TokenStreamComponents instance for this analyzer.
- int getOffsetGap(String fieldName)
  Just like getPositionIncrementGap(java.lang.String), except for Token offsets instead.
- int getPositionIncrementGap(String fieldName)
  Invoked before indexing an IndexableField instance if terms have already been added to that field.
- Analyzer.ReuseStrategy getReuseStrategy()
  Returns the used Analyzer.ReuseStrategy.
- protected Reader initReader(String fieldName, Reader reader)
  Override this if you want to add a CharFilter chain.
- protected Reader initReaderForNormalization(String fieldName, Reader reader)
  Wrap the given Reader with CharFilters that make sense for normalization.
- BytesRef normalize(String fieldName, String text)
  Normalize a string down to the representation that it would have in the index.
- protected TokenStream normalize(String fieldName, TokenStream in)
  Wrap the given TokenStream in order to apply normalization filters.
- TokenStream tokenStream(String fieldName, Reader reader)
  Returns a TokenStream suitable for fieldName, tokenizing the contents of reader.
- TokenStream tokenStream(String fieldName, String text)
  Returns a TokenStream suitable for fieldName, tokenizing the contents of text.
Field Detail
GLOBAL_REUSE_STRATEGY

public static final Analyzer.ReuseStrategy GLOBAL_REUSE_STRATEGY

A predefined Analyzer.ReuseStrategy that reuses the same components for every field.
PER_FIELD_REUSE_STRATEGY

public static final Analyzer.ReuseStrategy PER_FIELD_REUSE_STRATEGY

A predefined Analyzer.ReuseStrategy that reuses components per-field by maintaining a Map of TokenStreamComponents per field name.
Constructor Detail
Analyzer

protected Analyzer()

Create a new Analyzer, reusing the same set of components per-thread across calls to tokenStream(String, Reader).
Analyzer

protected Analyzer(Analyzer.ReuseStrategy reuseStrategy)

Expert: create a new Analyzer with a custom Analyzer.ReuseStrategy.

NOTE: if you just want to reuse on a per-field basis, it's easier to use a subclass of AnalyzerWrapper such as PerFieldAnalyzerWrapper instead.
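For illustration, a minimal sketch of passing the predefined per-field strategy to this constructor; WhitespaceTokenizer and LowerCaseFilter are assumed to be available from the common analysis module:

Analyzer analyzer = new Analyzer(Analyzer.PER_FIELD_REUSE_STRATEGY) {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    // Components are now cached per field name rather than shared globally
    Tokenizer source = new WhitespaceTokenizer();
    return new TokenStreamComponents(source, new LowerCaseFilter(source));
  }
};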
Method Detail
createComponents

protected abstract Analyzer.TokenStreamComponents createComponents(String fieldName)

Creates a new Analyzer.TokenStreamComponents instance for this analyzer.

Parameters:
fieldName - the name of the field whose content is passed to the Analyzer.TokenStreamComponents sink as a reader
Returns:
the Analyzer.TokenStreamComponents for this analyzer.
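As a concrete sketch, a typical implementation chains a tokenizer and one or more filters; StandardTokenizer and LowerCaseFilter ship with Lucene, though their packages vary across versions:

@Override
protected TokenStreamComponents createComponents(String fieldName) {
  Tokenizer source = new StandardTokenizer();       // splits text on Unicode word boundaries
  TokenStream result = new LowerCaseFilter(source); // lowercases every token
  return new TokenStreamComponents(source, result);
}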
normalize

protected TokenStream normalize(String fieldName, TokenStream in)

Wrap the given TokenStream in order to apply normalization filters. The default implementation returns the TokenStream as-is. This is used by normalize(String, String).
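A sketch of an override that lowercases query terms without tokenizing or stemming them, assuming LowerCaseFilter is available:

@Override
protected TokenStream normalize(String fieldName, TokenStream in) {
  // Apply only the normalization part of the analysis chain
  return new LowerCaseFilter(in);
}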
tokenStream

public final TokenStream tokenStream(String fieldName, Reader reader)

Returns a TokenStream suitable for fieldName, tokenizing the contents of reader.

This method uses createComponents(String) to obtain an instance of Analyzer.TokenStreamComponents. It returns the sink of the components and stores the components internally. Subsequent calls to this method will reuse the previously stored components after resetting them through Analyzer.TokenStreamComponents.setReader(Reader).

NOTE: After calling this method, the consumer must follow the workflow described in TokenStream to properly consume its contents. See the Analysis package documentation for some examples demonstrating this.

NOTE: If your data is available as a String, use tokenStream(String, String) which reuses a StringReader-like instance internally.

Parameters:
fieldName - the name of the field the created TokenStream is used for
reader - the reader the stream's source reads from
Returns:
TokenStream for iterating the analyzed content of reader
Throws:
AlreadyClosedException - if the Analyzer is closed.
See Also:
tokenStream(String, String)
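As a usage sketch, the mandatory reset/incrementToken/end/close workflow looks like the following; the field name "body" and the input text are illustrative, CharTermAttribute comes from org.apache.lucene.analysis.tokenattributes, and IOException handling is omitted for brevity:

try (TokenStream stream = analyzer.tokenStream("body", new StringReader("Some Text"))) {
  CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
  stream.reset();                   // mandatory before the first incrementToken()
  while (stream.incrementToken()) {
    System.out.println(termAtt.toString());
  }
  stream.end();                     // records end-of-stream state, e.g. the final offset
} // try-with-resources calls close(), releasing the stream for reuse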
tokenStream

public final TokenStream tokenStream(String fieldName, String text)

Returns a TokenStream suitable for fieldName, tokenizing the contents of text.

This method uses createComponents(String) to obtain an instance of Analyzer.TokenStreamComponents. It returns the sink of the components and stores the components internally. Subsequent calls to this method will reuse the previously stored components after resetting them through Analyzer.TokenStreamComponents.setReader(Reader).

NOTE: After calling this method, the consumer must follow the workflow described in TokenStream to properly consume its contents. See the Analysis package documentation for some examples demonstrating this.

Parameters:
fieldName - the name of the field the created TokenStream is used for
text - the String the stream's source reads from
Returns:
TokenStream for iterating the analyzed content of text
Throws:
AlreadyClosedException - if the Analyzer is closed.
See Also:
tokenStream(String, Reader)
normalize

public final BytesRef normalize(String fieldName, String text)

Normalize a string down to the representation that it would have in the index.

This is typically used by query parsers in order to generate a query on a given term, without tokenizing or stemming, which are undesirable if the string to analyze is a partial word (e.g. in case of a wildcard or fuzzy query).

This method uses initReaderForNormalization(String, Reader) in order to apply necessary character-level normalization and then normalize(String, TokenStream) in order to apply the normalizing token filters.
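A usage sketch, with an illustrative field name and input; with a lowercasing normalization chain, the result would be the bytes of "wi-fi":

BytesRef term = analyzer.normalize("title", "Wi-Fi");
System.out.println(term.utf8ToString()); // the representation the term would have in the index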
initReader

protected Reader initReader(String fieldName, Reader reader)

Override this if you want to add a CharFilter chain. The default implementation returns reader unchanged.

Parameters:
fieldName - IndexableField name being indexed
reader - original Reader
Returns:
reader, optionally decorated with CharFilter(s)
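A sketch of such an override, assuming HTMLStripCharFilter from the common analysis module:

@Override
protected Reader initReader(String fieldName, Reader reader) {
  // Strip HTML markup from the character stream before the tokenizer sees it
  return new HTMLStripCharFilter(reader);
}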
initReaderForNormalization

protected Reader initReaderForNormalization(String fieldName, Reader reader)

Wrap the given Reader with CharFilters that make sense for normalization. This is typically a subset of the CharFilters that are applied in initReader(String, Reader). This is used by normalize(String, String).
attributeFactory

protected AttributeFactory attributeFactory(String fieldName)

Return the AttributeFactory to be used for analysis and normalization on the given fieldName. The default implementation returns TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY.
getPositionIncrementGap

public int getPositionIncrementGap(String fieldName)

Invoked before indexing an IndexableField instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between IndexableField instances using the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including across IndexableField instances, are in successive positions, allowing exact PhraseQuery matches, for instance, across IndexableField instance boundaries.

Parameters:
fieldName - IndexableField name being indexed.
Returns:
position increment gap, added to the next token emitted from tokenStream(String, Reader). This value must be >= 0.
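A sketch of an override that keeps phrases from matching across values of a multi-valued field; the gap of 100 is an arbitrary illustrative value:

@Override
public int getPositionIncrementGap(String fieldName) {
  return 100; // larger than any realistic PhraseQuery slop
}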
getOffsetGap

public int getOffsetGap(String fieldName)

Just like getPositionIncrementGap(java.lang.String), except for Token offsets instead. By default this returns 1. This method is only called if the field produced at least one token for indexing.

Parameters:
fieldName - the field just indexed
Returns:
offset gap, added to the next token emitted from tokenStream(String, Reader). This value must be >= 0.
getReuseStrategy

public final Analyzer.ReuseStrategy getReuseStrategy()

Returns the used Analyzer.ReuseStrategy.
close

public void close()

Frees persistent resources used by this Analyzer.

Specified by:
close in interface AutoCloseable
Specified by:
close in interface Closeable