org.apache.lucene.analysis.compound
Class CompoundWordTokenFilterBase

java.lang.Object
  extended by org.apache.lucene.util.AttributeSource
      extended by org.apache.lucene.analysis.TokenStream
          extended by org.apache.lucene.analysis.TokenFilter
              extended by org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase
Direct Known Subclasses:
DictionaryCompoundWordTokenFilter, HyphenationCompoundWordTokenFilter

public abstract class CompoundWordTokenFilterBase
extends TokenFilter

Base class for decomposition token filters.
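
For orientation, the core idea behind these filters, scanning each incoming token for dictionary subwords within the configured size bounds, can be sketched in plain Java. This is a simplified, hypothetical illustration only: the real filter operates on Token objects, uses a CharArraySet for char[]-based lookup, and additionally supports onlyLongestMatch.

```java
import java.util.*;

public class DecomposeSketch {
    // Simplified illustration of dictionary-based decomposition:
    // emit every dictionary subword of the token whose length lies
    // within [minSubwordSize, maxSubwordSize]. Tokens shorter than
    // minWordSize are not decomposed at all.
    static List<String> decompose(String token, Set<String> dictionary,
                                  int minWordSize, int minSubwordSize,
                                  int maxSubwordSize) {
        List<String> subwords = new ArrayList<>();
        if (token.length() < minWordSize) {
            return subwords; // too short to decompose
        }
        String lower = token.toLowerCase(Locale.ROOT); // dictionary is lowercased
        for (int i = 0; i < lower.length() - minSubwordSize + 1; i++) {
            for (int len = minSubwordSize;
                 len <= maxSubwordSize && i + len <= lower.length(); len++) {
                String candidate = lower.substring(i, i + len);
                if (dictionary.contains(candidate)) {
                    subwords.add(candidate);
                }
            }
        }
        return subwords;
    }

    public static void main(String[] args) {
        Set<String> dict = new HashSet<>(Arrays.asList("donau", "dampf", "schiff"));
        System.out.println(decompose("Donaudampfschiff", dict, 5, 2, 15));
    }
}
```

In the actual filter the matched subwords are queued in the protected tokens field and emitted one per call to incrementToken(), after the original compound token itself.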


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
 
Field Summary
static int DEFAULT_MAX_SUBWORD_SIZE
          The default maximum length of subwords that are propagated to the output of this filter
static int DEFAULT_MIN_SUBWORD_SIZE
          The default minimum length of subwords that are propagated to the output of this filter
static int DEFAULT_MIN_WORD_SIZE
          The default minimum length a word must have to be decomposed
protected  CharArraySet dictionary
           
protected  int maxSubwordSize
           
protected  int minSubwordSize
           
protected  int minWordSize
           
protected  boolean onlyLongestMatch
           
protected  LinkedList tokens
           
 
Fields inherited from class org.apache.lucene.analysis.TokenFilter
input
 
Constructor Summary
protected CompoundWordTokenFilterBase(TokenStream input, Set dictionary)
           
protected CompoundWordTokenFilterBase(TokenStream input, Set dictionary, boolean onlyLongestMatch)
           
protected CompoundWordTokenFilterBase(TokenStream input, Set dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
           
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary)
           
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, boolean onlyLongestMatch)
           
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
           
 
Method Summary
protected static void addAllLowerCase(Set target, Collection col)
           
protected  Token createToken(int offset, int length, Token prototype)
           
protected  void decompose(Token token)
           
protected abstract  void decomposeInternal(Token token)
           
 boolean incrementToken()
          Consumers (e.g., IndexWriter) use this method to advance the stream to the next token.
static Set makeDictionary(String[] dictionary)
          Create a set of words from an array. The resulting Set matches case-insensitively. TODO: Look for a faster dictionary lookup approach.
protected static char[] makeLowerCaseCopy(char[] buffer)
           
 Token next()
          Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.
 Token next(Token reusableToken)
          Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.
 void reset()
          Reset the filter as well as the input TokenStream.
 
Methods inherited from class org.apache.lucene.analysis.TokenFilter
close, end
 
Methods inherited from class org.apache.lucene.analysis.TokenStream
getOnlyUseNewAPI, setOnlyUseNewAPI
 
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, restoreState, toString
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
 

Field Detail

DEFAULT_MIN_WORD_SIZE

public static final int DEFAULT_MIN_WORD_SIZE
The default minimum length a word must have to be decomposed

See Also:
Constant Field Values

DEFAULT_MIN_SUBWORD_SIZE

public static final int DEFAULT_MIN_SUBWORD_SIZE
The default minimum length of subwords that are propagated to the output of this filter

See Also:
Constant Field Values

DEFAULT_MAX_SUBWORD_SIZE

public static final int DEFAULT_MAX_SUBWORD_SIZE
The default maximum length of subwords that are propagated to the output of this filter

See Also:
Constant Field Values

dictionary

protected final CharArraySet dictionary

tokens

protected final LinkedList tokens

minWordSize

protected final int minWordSize

minSubwordSize

protected final int minSubwordSize

maxSubwordSize

protected final int maxSubwordSize

onlyLongestMatch

protected final boolean onlyLongestMatch
Constructor Detail

CompoundWordTokenFilterBase

protected CompoundWordTokenFilterBase(TokenStream input,
                                      String[] dictionary,
                                      int minWordSize,
                                      int minSubwordSize,
                                      int maxSubwordSize,
                                      boolean onlyLongestMatch)

CompoundWordTokenFilterBase

protected CompoundWordTokenFilterBase(TokenStream input,
                                      String[] dictionary,
                                      boolean onlyLongestMatch)

CompoundWordTokenFilterBase

protected CompoundWordTokenFilterBase(TokenStream input,
                                      Set dictionary,
                                      boolean onlyLongestMatch)

CompoundWordTokenFilterBase

protected CompoundWordTokenFilterBase(TokenStream input,
                                      String[] dictionary)

CompoundWordTokenFilterBase

protected CompoundWordTokenFilterBase(TokenStream input,
                                      Set dictionary)

CompoundWordTokenFilterBase

protected CompoundWordTokenFilterBase(TokenStream input,
                                      Set dictionary,
                                      int minWordSize,
                                      int minSubwordSize,
                                      int maxSubwordSize,
                                      boolean onlyLongestMatch)
Method Detail

makeDictionary

public static final Set makeDictionary(String[] dictionary)
Create a set of words from an array. The resulting Set matches case-insensitively. TODO: Look for a faster dictionary lookup approach.

Parameters:
dictionary -
Returns:
Set of lowercased terms
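
The behavior of makeDictionary can be approximated in plain Java as follows. This is a sketch only: the real method returns a CharArraySet, which supports case-insensitive char[]-based lookups without allocating Strings; here a plain HashSet of lowercased terms stands in for it.

```java
import java.util.*;

public class MakeDictionarySketch {
    // Approximates makeDictionary: lowercase every term into a Set,
    // so that later containment checks match case-insensitively
    // against lowercased candidates.
    static Set<String> makeDictionary(String[] dictionary) {
        Set<String> set = new HashSet<>();
        for (String word : dictionary) {
            set.add(word.toLowerCase(Locale.ROOT));
        }
        return set;
    }

    public static void main(String[] args) {
        Set<String> dict = makeDictionary(new String[] {"Donau", "DAMPF"});
        System.out.println(dict.contains("donau") && dict.contains("dampf"));
    }
}
```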

incrementToken

public final boolean incrementToken()
                             throws IOException
Description copied from class: TokenStream
Consumers (e.g., IndexWriter) use this method to advance the stream to the next token. Implementing classes must update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class) or downcasts, references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

Overrides:
incrementToken in class TokenStream
Returns:
false for end of stream; true otherwise
Throws:
IOException

Note that this method will be defined abstract in Lucene 3.0.
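
The consumer pattern described above, advance the stream, then read the updated attribute state, can be mocked without Lucene as follows. This is an illustrative sketch only; real code obtains attributes such as TermAttribute from a org.apache.lucene.analysis.TokenStream via addAttribute.

```java
import java.util.*;

// Minimal mock of the incrementToken() pattern: the stream exposes
// mutable attribute state that the consumer reads after each advance,
// instead of receiving a fresh Token object per call.
public class IncrementTokenSketch {
    static class SimpleStream {
        private final Iterator<String> it;
        String term; // stands in for TermAttribute

        SimpleStream(List<String> terms) {
            this.it = terms.iterator();
        }

        boolean incrementToken() {
            if (!it.hasNext()) {
                return false; // false signals end of stream
            }
            term = it.next(); // update attribute state in place
            return true;
        }
    }

    public static void main(String[] args) {
        SimpleStream stream = new SimpleStream(Arrays.asList("donau", "dampf", "schiff"));
        List<String> consumed = new ArrayList<>();
        while (stream.incrementToken()) {
            consumed.add(stream.term); // read attributes after each advance
        }
        System.out.println(consumed);
    }
}
```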

next

public final Token next(Token reusableToken)
                 throws IOException
Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.

Description copied from class: TokenStream
Returns the next token in the stream, or null at EOS. When possible, the input Token should be used as the returned Token (this gives fastest tokenization performance), but this is not required and a new Token may be returned. Callers may re-use a single Token instance for successive calls to this method.

This implicitly defines a "contract" between consumers (callers of this method) and producers (implementations of this method that are the source for tokens): the producer must make no assumptions about a Token after it has been returned, since the caller may arbitrarily change it. If the producer needs to hold onto the Token for subsequent calls, it must clone() it before storing it. Note that a TokenFilter is considered a consumer.

Overrides:
next in class TokenStream
Parameters:
reusableToken - a Token that may or may not be used as the returned Token; this parameter should never be null (the callee is not required to check for null before using it, but it is a good idea to assert that it is not null)
Returns:
next Token in the stream or null if end-of-stream was hit
Throws:
IOException

next

public final Token next()
                 throws IOException
Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer.

Description copied from class: TokenStream
Returns the next Token in the stream, or null at EOS.

Overrides:
next in class TokenStream
Throws:
IOException

addAllLowerCase

protected static final void addAllLowerCase(Set target,
                                            Collection col)

makeLowerCaseCopy

protected static char[] makeLowerCaseCopy(char[] buffer)

createToken

protected final Token createToken(int offset,
                                  int length,
                                  Token prototype)

decompose

protected void decompose(Token token)

decomposeInternal

protected abstract void decomposeInternal(Token token)
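
The split between decompose(Token) and the abstract decomposeInternal(Token) is a template-method design: the base class applies the shared guards and owns the tokens queue, while subclasses supply only the decomposition strategy (dictionary lookup or hyphenation). A hypothetical, self-contained sketch of that structure, with a toy separator-based strategy standing in for a real subclass:

```java
import java.util.*;

// Sketch of the template-method structure: shared guard in the base
// class, strategy hook in the subclass. Illustrative only; the real
// classes operate on Lucene Token objects.
abstract class DecomposerBase {
    final int minWordSize;
    final List<String> tokens = new LinkedList<>(); // queue of emitted subwords

    DecomposerBase(int minWordSize) {
        this.minWordSize = minWordSize;
    }

    final void decompose(String token) {
        if (token.length() < minWordSize) {
            return; // shared guard: too short to decompose
        }
        decomposeInternal(token); // strategy hook
    }

    abstract void decomposeInternal(String token);
}

// Toy strategy: split on an explicit separator character.
class SeparatorDecomposer extends DecomposerBase {
    SeparatorDecomposer(int minWordSize) {
        super(minWordSize);
    }

    @Override
    void decomposeInternal(String token) {
        tokens.addAll(Arrays.asList(token.split("-")));
    }
}
```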

reset

public void reset()
           throws IOException
Description copied from class: TokenFilter
Reset the filter as well as the input TokenStream.

Overrides:
reset in class TokenFilter
Throws:
IOException


Copyright © 2000-2010 Apache Software Foundation. All Rights Reserved.