org.apache.lucene.analysis.ngram
Class NGramTokenizer

java.lang.Object
  extended by org.apache.lucene.util.AttributeSource
      extended by org.apache.lucene.analysis.TokenStream
          extended by org.apache.lucene.analysis.Tokenizer
              extended by org.apache.lucene.analysis.ngram.NGramTokenizer

public class NGramTokenizer
extends Tokenizer

Tokenizes the input into n-grams of the given size(s).
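As a standalone illustration (plain Java, not Lucene code), the enumeration this tokenizer performs can be sketched as follows, assuming grams are emitted grouped by size, smallest size first — the class and method names here (`NGramSketch`, `ngrams`) are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class NGramSketch {
    // Enumerate all character n-grams of the input, grouped by gram size,
    // from minGram up to maxGram (illustrative sketch only).
    static List<String> ngrams(String text, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        for (int n = minGram; n <= maxGram; n++) {
            for (int pos = 0; pos + n <= text.length(); pos++) {
                grams.add(text.substring(pos, pos + n));
            }
        }
        return grams;
    }

    public static void main(String[] args) {
        // "abcde" with minGram=1, maxGram=2 yields the five unigrams
        // followed by the four bigrams.
        System.out.println(ngrams("abcde", 1, 2));
    }
}
```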


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
 
Field Summary
static int DEFAULT_MAX_NGRAM_SIZE
           
static int DEFAULT_MIN_NGRAM_SIZE
           
 
Fields inherited from class org.apache.lucene.analysis.Tokenizer
input
 
Constructor Summary
NGramTokenizer(AttributeSource.AttributeFactory factory, Reader input, int minGram, int maxGram)
          Creates NGramTokenizer with given min and max n-grams.
NGramTokenizer(AttributeSource source, Reader input, int minGram, int maxGram)
          Creates NGramTokenizer with given min and max n-grams.
NGramTokenizer(Reader input)
          Creates NGramTokenizer with default min and max n-grams.
NGramTokenizer(Reader input, int minGram, int maxGram)
          Creates NGramTokenizer with given min and max n-grams.
 
Method Summary
 void end()
          This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
 boolean incrementToken()
          Advances this stream to the next token; returns false at end of stream, true otherwise.
 Token next()
          Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.
 Token next(Token reusableToken)
          Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.
 void reset()
          Resets this stream to the beginning.
 void reset(Reader input)
          Expert: Reset the tokenizer to a new reader.
 
Methods inherited from class org.apache.lucene.analysis.Tokenizer
close, correctOffset
 
Methods inherited from class org.apache.lucene.analysis.TokenStream
getOnlyUseNewAPI, setOnlyUseNewAPI
 
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, restoreState, toString
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
 

Field Detail

DEFAULT_MIN_NGRAM_SIZE

public static final int DEFAULT_MIN_NGRAM_SIZE
See Also:
Constant Field Values

DEFAULT_MAX_NGRAM_SIZE

public static final int DEFAULT_MAX_NGRAM_SIZE
See Also:
Constant Field Values
Constructor Detail

NGramTokenizer

public NGramTokenizer(Reader input,
                      int minGram,
                      int maxGram)
Creates NGramTokenizer with given min and max n-grams.

Parameters:
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

NGramTokenizer

public NGramTokenizer(AttributeSource source,
                      Reader input,
                      int minGram,
                      int maxGram)
Creates NGramTokenizer with given min and max n-grams.

Parameters:
source - AttributeSource to use
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

NGramTokenizer

public NGramTokenizer(AttributeSource.AttributeFactory factory,
                      Reader input,
                      int minGram,
                      int maxGram)
Creates NGramTokenizer with given min and max n-grams.

Parameters:
factory - AttributeSource.AttributeFactory to use
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

NGramTokenizer

public NGramTokenizer(Reader input)
Creates NGramTokenizer with default min and max n-grams.

Parameters:
input - Reader holding the input to be tokenized
Method Detail

incrementToken

public final boolean incrementToken()
                             throws IOException
Advances this stream to the next token; returns false at end of stream, true otherwise.

Overrides:
incrementToken in class TokenStream
Returns:
false for end of stream; true otherwise

Note that this method will be defined abstract in Lucene 3.0.

Throws:
IOException
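The consumption pattern implied by the new API — call incrementToken() until it returns false, reading attribute values after each successful call — can be sketched with a minimal stand-in stream in plain Java (the real Lucene consumer would read a term attribute obtained via addAttribute; the `MiniNGramStream` class below is a hypothetical illustration, not the Lucene implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class StreamLoop {
    // Minimal stand-in for a TokenStream: incrementToken() advances to the
    // next gram and returns false at end of stream (illustrative only).
    static class MiniNGramStream {
        private final String text;
        private final int maxGram;
        private int gramSize, pos;
        String term; // stand-in for the term attribute

        MiniNGramStream(String text, int minGram, int maxGram) {
            this.text = text;
            this.maxGram = maxGram;
            this.gramSize = minGram;
            this.pos = 0;
        }

        boolean incrementToken() {
            while (gramSize <= maxGram) {
                if (pos + gramSize <= text.length()) {
                    term = text.substring(pos, pos + gramSize);
                    pos++;
                    return true;
                }
                gramSize++; // exhausted this gram size; move to the next
                pos = 0;
            }
            return false;   // end of stream
        }
    }

    // The consumer-side contract: loop until incrementToken() returns false,
    // reading the attribute state after each successful call.
    static List<String> collect(MiniNGramStream stream) {
        List<String> terms = new ArrayList<>();
        while (stream.incrementToken()) {
            terms.add(stream.term);
        }
        return terms;
    }

    public static void main(String[] args) {
        System.out.println(collect(new MiniNGramStream("abc", 1, 2)));
    }
}
```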

end

public final void end()
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters followed the last token and a WhitespaceTokenizer was used.

Overrides:
end in class TokenStream

next

public final Token next(Token reusableToken)
                 throws IOException
Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.

Description copied from class: TokenStream
Returns the next token in the stream, or null at EOS. When possible, the input Token should be used as the returned Token (this gives fastest tokenization performance), but this is not required and a new Token may be returned. Callers may re-use a single Token instance for successive calls to this method.

This implicitly defines a "contract" between consumers (callers of this method) and producers (implementations of this method that are the source for tokens):

A consumer must fully consume the previously returned Token before calling this method again.
A producer must call Token.clear() before setting the fields in it and returning it.

Also, the producer must make no assumptions about a Token after it has been returned: the caller may arbitrarily change it. If the producer needs to hold onto the Token for subsequent calls, it must clone() it before storing it. Note that a TokenFilter is considered a consumer.

Overrides:
next in class TokenStream
Parameters:
reusableToken - a Token that may or may not be used to return; this parameter should never be null (the callee is not required to check for null before using it, but it is a good idea to assert that it is not null.)
Returns:
next Token in the stream or null if end-of-stream was hit
Throws:
IOException

next

public final Token next()
                 throws IOException
Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.

Description copied from class: TokenStream
Returns the next Token in the stream, or null at EOS.

Overrides:
next in class TokenStream
Throws:
IOException

reset

public void reset(Reader input)
           throws IOException
Description copied from class: Tokenizer
Expert: Reset the tokenizer to a new reader. Typically, an analyzer (in its reusableTokenStream method) will use this to re-use a previously created tokenizer.

Overrides:
reset in class Tokenizer
Throws:
IOException
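The reuse pattern this method supports — one tokenizer instance re-pointed at a new Reader for each document — can be sketched with a stand-in class in plain Java (the `MiniTokenizer` and `drain` names are hypothetical, and the checked IOException is wrapped as unchecked for brevity; a real Tokenizer's reset(Reader) throws IOException):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

public class ReuseSketch {
    // Stand-in tokenizer that can be re-pointed at a new Reader, mirroring
    // the reusableTokenStream pattern described above (illustrative only).
    static class MiniTokenizer {
        private String buffered;
        private int pos;

        MiniTokenizer(Reader input) { reset(input); }

        // Expert: reset to a new reader, clearing all per-stream state.
        void reset(Reader input) {
            try {
                StringBuilder sb = new StringBuilder();
                int c;
                while ((c = input.read()) != -1) sb.append((char) c);
                buffered = sb.toString();
                pos = 0;
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        // Emit unigrams one at a time; null at end of stream.
        String next() {
            return pos < buffered.length()
                ? String.valueOf(buffered.charAt(pos++)) : null;
        }
    }

    static List<String> drain(MiniTokenizer t) {
        List<String> out = new ArrayList<>();
        for (String s; (s = t.next()) != null; ) out.add(s);
        return out;
    }

    public static void main(String[] args) {
        MiniTokenizer t = new MiniTokenizer(new StringReader("ab"));
        System.out.println(drain(t));    // first document
        t.reset(new StringReader("cd")); // reuse the instance for a second document
        System.out.println(drain(t));
    }
}
```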

reset

public void reset()
           throws IOException
Description copied from class: TokenStream
Resets this stream to the beginning. This is an optional operation, so subclasses may or may not implement this method. TokenStream.reset() is not needed for the standard indexing process. However, if the tokens of a TokenStream are intended to be consumed more than once, it is necessary to implement TokenStream.reset(). Note that if your TokenStream caches tokens and feeds them back again after a reset, it is imperative that you clone the tokens when you store them away (on the first pass) as well as when you return them (on future passes after TokenStream.reset()).

Overrides:
reset in class TokenStream
Throws:
IOException


Copyright © 2000-2010 Apache Software Foundation. All Rights Reserved.