org.apache.lucene.analysis.ngram
Class EdgeNGramTokenizer

java.lang.Object
  extended by org.apache.lucene.util.AttributeSource
      extended by org.apache.lucene.analysis.TokenStream
          extended by org.apache.lucene.analysis.Tokenizer
              extended by org.apache.lucene.analysis.ngram.EdgeNGramTokenizer

public class EdgeNGramTokenizer
extends Tokenizer

Tokenizes the input from an edge into n-grams of given size(s).

This Tokenizer creates n-grams from the beginning edge or ending edge of an input token. maxGram cannot be larger than 1024 because of an internal limitation.
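As an illustration of the behavior described above, here is a plain-Java sketch of front-edge n-gram generation (a stand-in for the tokenizer's logic, not the class itself, which reads from a Reader and emits tokens): with minGram=1 and maxGram=3, the input "hello" yields the grams "h", "he", "hel".

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the front-edge n-gram logic described above;
// the real EdgeNGramTokenizer reads from a Reader and emits tokens.
public class EdgeNGramSketch {

    // Generates front-edge n-grams of sizes minGram..maxGram,
    // capped at the input length.
    public static List<String> frontGrams(String input, int minGram, int maxGram) {
        List<String> grams = new ArrayList<String>();
        int limit = Math.min(maxGram, input.length());
        for (int size = minGram; size <= limit; size++) {
            grams.add(input.substring(0, size));
        }
        return grams;
    }

    public static void main(String[] args) {
        System.out.println(frontGrams("hello", 1, 3)); // [h, he, hel]
    }
}
```

Note that no gram longer than the input is produced, so an input shorter than minGram yields no grams at all.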


Nested Class Summary
static class EdgeNGramTokenizer.Side
          Specifies which side of the input the n-gram should be generated from
 
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
 
Field Summary
static int DEFAULT_MAX_GRAM_SIZE
           
static int DEFAULT_MIN_GRAM_SIZE
           
static EdgeNGramTokenizer.Side DEFAULT_SIDE
           
 
Fields inherited from class org.apache.lucene.analysis.Tokenizer
input
 
Constructor Summary
EdgeNGramTokenizer(AttributeSource.AttributeFactory factory, Reader input, EdgeNGramTokenizer.Side side, int minGram, int maxGram)
          Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range
EdgeNGramTokenizer(AttributeSource.AttributeFactory factory, Reader input, String sideLabel, int minGram, int maxGram)
          Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range
EdgeNGramTokenizer(AttributeSource source, Reader input, EdgeNGramTokenizer.Side side, int minGram, int maxGram)
          Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range
EdgeNGramTokenizer(AttributeSource source, Reader input, String sideLabel, int minGram, int maxGram)
          Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range
EdgeNGramTokenizer(Reader input, EdgeNGramTokenizer.Side side, int minGram, int maxGram)
          Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range
EdgeNGramTokenizer(Reader input, String sideLabel, int minGram, int maxGram)
          Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range
 
Method Summary
 void end()
          This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
 boolean incrementToken()
          Advances the stream to the next token; returns false when the end of the stream is reached, true otherwise.
 Token next()
          Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.
 Token next(Token reusableToken)
          Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.
 void reset()
          Resets this stream to the beginning.
 void reset(Reader input)
          Expert: Reset the tokenizer to a new reader.
 
Methods inherited from class org.apache.lucene.analysis.Tokenizer
close, correctOffset
 
Methods inherited from class org.apache.lucene.analysis.TokenStream
getOnlyUseNewAPI, setOnlyUseNewAPI
 
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, restoreState, toString
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
 

Field Detail

DEFAULT_SIDE

public static final EdgeNGramTokenizer.Side DEFAULT_SIDE

DEFAULT_MAX_GRAM_SIZE

public static final int DEFAULT_MAX_GRAM_SIZE
See Also:
Constant Field Values

DEFAULT_MIN_GRAM_SIZE

public static final int DEFAULT_MIN_GRAM_SIZE
See Also:
Constant Field Values
Constructor Detail

EdgeNGramTokenizer

public EdgeNGramTokenizer(Reader input,
                          EdgeNGramTokenizer.Side side,
                          int minGram,
                          int maxGram)
Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range

Parameters:
input - Reader holding the input to be tokenized
side - the EdgeNGramTokenizer.Side from which to chop off an n-gram
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

EdgeNGramTokenizer

public EdgeNGramTokenizer(AttributeSource source,
                          Reader input,
                          EdgeNGramTokenizer.Side side,
                          int minGram,
                          int maxGram)
Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range

Parameters:
source - AttributeSource to use
input - Reader holding the input to be tokenized
side - the EdgeNGramTokenizer.Side from which to chop off an n-gram
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

EdgeNGramTokenizer

public EdgeNGramTokenizer(AttributeSource.AttributeFactory factory,
                          Reader input,
                          EdgeNGramTokenizer.Side side,
                          int minGram,
                          int maxGram)
Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range

Parameters:
factory - AttributeSource.AttributeFactory to use
input - Reader holding the input to be tokenized
side - the EdgeNGramTokenizer.Side from which to chop off an n-gram
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

EdgeNGramTokenizer

public EdgeNGramTokenizer(Reader input,
                          String sideLabel,
                          int minGram,
                          int maxGram)
Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range

Parameters:
input - Reader holding the input to be tokenized
sideLabel - the name of the EdgeNGramTokenizer.Side from which to chop off an n-gram
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

EdgeNGramTokenizer

public EdgeNGramTokenizer(AttributeSource source,
                          Reader input,
                          String sideLabel,
                          int minGram,
                          int maxGram)
Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range

Parameters:
source - AttributeSource to use
input - Reader holding the input to be tokenized
sideLabel - the name of the EdgeNGramTokenizer.Side from which to chop off an n-gram
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

EdgeNGramTokenizer

public EdgeNGramTokenizer(AttributeSource.AttributeFactory factory,
                          Reader input,
                          String sideLabel,
                          int minGram,
                          int maxGram)
Creates an EdgeNGramTokenizer that can generate n-grams of sizes within the given range

Parameters:
factory - AttributeSource.AttributeFactory to use
input - Reader holding the input to be tokenized
sideLabel - the name of the EdgeNGramTokenizer.Side from which to chop off an n-gram
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate
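The sideLabel constructors look the Side up by its string name; in this version the labels are assumed to be "front" and "back". For the BACK side, grams are taken from the end of the input rather than the beginning, which can be sketched in plain Java (a stand-in for the tokenizer's logic, mirroring the front-edge sketch above):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of back-edge n-grams (Side.BACK): each gram of size
// minGram..maxGram is taken from the END of the input.
public class BackEdgeNGramSketch {

    public static List<String> backGrams(String input, int minGram, int maxGram) {
        List<String> grams = new ArrayList<String>();
        int limit = Math.min(maxGram, input.length());
        for (int size = minGram; size <= limit; size++) {
            grams.add(input.substring(input.length() - size));
        }
        return grams;
    }

    public static void main(String[] args) {
        System.out.println(backGrams("hello", 1, 3)); // [o, lo, llo]
    }
}
```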
Method Detail

incrementToken

public final boolean incrementToken()
                             throws IOException
Advances the stream to the next token, returning false when the end of the stream is reached.

Overrides:
incrementToken in class TokenStream
Returns:
false for end of stream; true otherwise

Note that this method will be defined abstract in Lucene 3.0.

Throws:
IOException
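The false-at-end-of-stream contract can be sketched with a toy stream (a plain-Java stand-in; a real consumer would call incrementToken() on the Lucene TokenStream and read the token's attributes instead of a term() method):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Toy stand-in for the new TokenStream API: incrementToken() advances the
// stream and returns false once no tokens remain, mirroring the contract above.
public class ToyTokenStream {
    private final Iterator<String> it;
    private String current;

    public ToyTokenStream(List<String> tokens) {
        this.it = tokens.iterator();
    }

    public boolean incrementToken() {
        if (!it.hasNext()) {
            return false;        // end of stream
        }
        current = it.next();     // advance to the next token
        return true;
    }

    public String term() {
        return current;
    }

    public static void main(String[] args) {
        ToyTokenStream ts = new ToyTokenStream(Arrays.asList("h", "he", "hel"));
        while (ts.incrementToken()) {   // consume until false
            System.out.println(ts.term());
        }
    }
}
```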

end

public final void end()
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters follow the last token and a WhitespaceTokenizer was used.

Overrides:
end in class TokenStream

next

public final Token next(Token reusableToken)
                 throws IOException
Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.

Description copied from class: TokenStream
Returns the next token in the stream, or null at EOS. When possible, the input Token should be used as the returned Token (this gives fastest tokenization performance), but this is not required and a new Token may be returned. Callers may re-use a single Token instance for successive calls to this method.

This implicitly defines a "contract" between consumers (callers of this method) and producers (implementations of this method that are the source for tokens): the consumer must fully consume the previously returned Token before calling this method again, and the producer must call Token.clear() before setting the fields in it and returning it.

Also, the producer must make no assumptions about a Token after it has been returned: the caller may arbitrarily change it. If the producer needs to hold onto the Token for subsequent calls, it must clone() it before storing it. Note that a TokenFilter is considered a consumer.

Overrides:
next in class TokenStream
Parameters:
reusableToken - a Token that may, or may not, be used as the returned Token; this parameter should never be null (the callee is not required to check for null before using it, but it is a good idea to assert that it is not null.)
Returns:
next Token in the stream or null if end-of-stream was hit
Throws:
IOException

next

public final Token next()
                 throws IOException
Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.

Description copied from class: TokenStream
Returns the next Token in the stream, or null at EOS.

Overrides:
next in class TokenStream
Throws:
IOException

reset

public void reset(Reader input)
           throws IOException
Description copied from class: Tokenizer
Expert: Reset the tokenizer to a new reader. Typically, an analyzer (in its reusableTokenStream method) will use this to re-use a previously created tokenizer.

Overrides:
reset in class Tokenizer
Throws:
IOException

reset

public void reset()
           throws IOException
Description copied from class: TokenStream
Resets this stream to the beginning. This is an optional operation, so subclasses may or may not implement this method. TokenStream.reset() is not needed for the standard indexing process. However, if the tokens of a TokenStream are intended to be consumed more than once, it is necessary to implement TokenStream.reset(). Note that if your TokenStream caches tokens and feeds them back again after a reset, it is imperative that you clone the tokens when you store them away (on the first pass) as well as when you return them (on future passes after TokenStream.reset()).

Overrides:
reset in class TokenStream
Throws:
IOException


Copyright © 2000-2010 Apache Software Foundation. All Rights Reserved.