java.lang.Object
  org.apache.lucene.util.AttributeSource
      org.apache.lucene.analysis.TokenStream
          org.apache.lucene.analysis.TokenFilter
              org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase
public abstract class CompoundWordTokenFilterBase extends TokenFilter
Base class for decomposition token filters. You must specify the required Version compatibility when creating a CompoundWordTokenFilterBase.
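A minimal sketch of how a concrete decomposition filter might extend this base class, assuming the Lucene 3.x API documented on this page. The class name NaiveDictionaryCompoundFilter and its splitting logic are hypothetical and for illustration only (onlyLongestMatch is ignored for brevity); a real implementation along these lines is provided by the package's own subclasses.

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;
import org.apache.lucene.util.Version;

// Hypothetical subclass for illustration: scans each token for dictionary
// subwords within the configured size bounds and queues them on the protected
// 'tokens' list via createToken().
public class NaiveDictionaryCompoundFilter extends CompoundWordTokenFilterBase {

  public NaiveDictionaryCompoundFilter(Version matchVersion, TokenStream input, String[] dictionary) {
    super(matchVersion, input, dictionary);
  }

  @Override
  protected void decomposeInternal(Token token) {
    int length = token.termLength();
    char[] buffer = token.termBuffer();
    for (int start = 0; start <= length - minSubwordSize; start++) {
      for (int len = minSubwordSize; len <= maxSubwordSize && start + len <= length; len++) {
        // CharArraySet lookup is case-insensitive when the dictionary was
        // built with makeDictionary (see below).
        if (dictionary.contains(buffer, start, len)) {
          tokens.add(createToken(start, len, token));
        }
      }
    }
  }
}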
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource:
  AttributeSource.AttributeFactory, AttributeSource.State
Field Summary
  static int DEFAULT_MAX_SUBWORD_SIZE
      The default for the maximal length of subwords that get propagated to the output of this filter.
  static int DEFAULT_MIN_SUBWORD_SIZE
      The default for the minimal length of subwords that get propagated to the output of this filter.
  static int DEFAULT_MIN_WORD_SIZE
      The default for the minimal word length that gets decomposed.
  protected CharArraySet dictionary
  protected int maxSubwordSize
  protected int minSubwordSize
  protected int minWordSize
  protected boolean onlyLongestMatch
  protected LinkedList<Token> tokens
Fields inherited from class org.apache.lucene.analysis.TokenFilter:
  input
Constructor Summary
  protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary)
      Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, Set) instead.
  protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
      Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, Set, boolean) instead.
  protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
      Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, Set, int, int, int, boolean) instead.
  protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary)
      Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, String[]) instead.
  protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, boolean onlyLongestMatch)
      Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, String[], boolean) instead.
  protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
      Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, String[], int, int, int, boolean) instead.
  protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary)
  protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
  protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
  protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary)
  protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, boolean onlyLongestMatch)
  protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
Method Summary
  protected static void addAllLowerCase(CharArraySet target, Collection<?> col)
  protected Token createToken(int offset, int length, Token prototype)
  protected void decompose(Token token)
  protected abstract void decomposeInternal(Token token)
  boolean incrementToken()
      Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
  static Set<?> makeDictionary(String[] dictionary)
      Create a set of words from an array. The resulting Set does case-insensitive matching. TODO: We should look for a faster dictionary lookup approach.
  static Set<?> makeDictionary(Version matchVersion, String[] dictionary)
  protected static char[] makeLowerCaseCopy(char[] buffer)
  void reset()
      Reset the filter as well as the input TokenStream.
Methods inherited from class org.apache.lucene.analysis.TokenFilter:
  close, end
Methods inherited from class org.apache.lucene.util.AttributeSource:
  addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
Methods inherited from class java.lang.Object:
  clone, finalize, getClass, notify, notifyAll, wait, wait, wait
Field Detail
public static final int DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
protected final CharArraySet dictionary
protected final LinkedList<Token> tokens
protected final int minWordSize
protected final int minSubwordSize
protected final int maxSubwordSize
protected final boolean onlyLongestMatch
Constructor Detail
@Deprecated
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
    Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, String[], int, int, int, boolean) instead.

@Deprecated
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, boolean onlyLongestMatch)
    Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, String[], boolean) instead.

@Deprecated
protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
    Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, Set, boolean) instead.

@Deprecated
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary)
    Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, String[]) instead.

@Deprecated
protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary)
    Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, Set) instead.

@Deprecated
protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
    Deprecated. Use CompoundWordTokenFilterBase(Version, TokenStream, Set, int, int, int, boolean) instead.
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
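Because these Version-aware constructors are protected, they are normally reached through a concrete subclass. The following is a usage sketch only, assuming the DictionaryCompoundWordTokenFilter subclass in the same package exposes a public constructor mirroring the full Version-aware signature shown above; the dictionary contents and the matchVersion passed by the caller are illustrative.

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.util.Version;

public final class CompoundFilterWiring {

  // Wraps an existing token stream with a dictionary-based decomposition filter.
  // The matchVersion argument selects back-compatibility behaviour; the size
  // bounds fall back to the DEFAULT_* constants documented above.
  public static TokenStream wrap(Version matchVersion, TokenStream source, String[] dictionary) {
    return new DictionaryCompoundWordTokenFilter(
        matchVersion, source, dictionary,
        CompoundWordTokenFilterBase.DEFAULT_MIN_WORD_SIZE,
        CompoundWordTokenFilterBase.DEFAULT_MIN_SUBWORD_SIZE,
        CompoundWordTokenFilterBase.DEFAULT_MAX_SUBWORD_SIZE,
        false /* onlyLongestMatch */);
  }
}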
Method Detail
public static final Set<?> makeDictionary(String[] dictionary)
    Create a set of words from an array. The resulting Set does case-insensitive matching.
    Parameters:
        dictionary -
    Returns:
        Set of lowercased terms

public static final Set<?> makeDictionary(Version matchVersion, String[] dictionary)
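A brief usage sketch of makeDictionary, based only on the signature and return description above; the word list is illustrative.

import java.util.Set;
import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;

public class MakeDictionaryExample {
  public static void main(String[] args) {
    // Builds a Set of lowercased terms that matches case-insensitively.
    Set<?> dict = CompoundWordTokenFilterBase.makeDictionary(
        new String[] {"Donau", "Dampf", "Schiff", "Fahrt"});
    System.out.println(dict.size()); // expected: 4 lowercased entries
  }
}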
public final boolean incrementToken() throws IOException
    Description copied from class: TokenStream
    Consumers (i.e., IndexWriter) use this method to advance the stream to the
    next token. Implementing classes must implement this method and update the
    appropriate AttributeImpls with the attributes of the next token.

    The producer must make no assumptions about the attributes after the method
    has returned: the caller may arbitrarily change it. If the producer needs to
    preserve the state for subsequent calls, it can use
    AttributeSource.captureState() to create a copy of the current attribute state.

    This method is called for every token of a document, so an efficient
    implementation is crucial for good performance. To avoid calls to
    AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class),
    references to all AttributeImpls that this stream uses should be retrieved
    during instantiation.

    To ensure that filters and consumers know which attributes are available, the
    attributes must be added during instantiation. Filters and consumers are not
    required to check for availability of attributes in TokenStream.incrementToken().

    Specified by:
        incrementToken in class TokenStream
    Throws:
        IOException
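A consumer-side sketch of the contract described above, assuming a Lucene 3.1+ attribute API (CharTermAttribute, OffsetAttribute); the stream argument is whatever filter chain the caller built, for example one ending in a compound word filter.

import java.io.IOException;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class TokenStreamConsumer {

  // Drains a token stream, printing each term with its character offsets.
  public static void drain(TokenStream stream) throws IOException {
    // Retrieve attribute references once, before the loop, as recommended above.
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    OffsetAttribute offsets = stream.addAttribute(OffsetAttribute.class);
    stream.reset();
    while (stream.incrementToken()) {
      String text = new String(term.buffer(), 0, term.length());
      System.out.println(text + " [" + offsets.startOffset() + "," + offsets.endOffset() + "]");
    }
    stream.end();
    stream.close();
  }
}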
protected static final void addAllLowerCase(CharArraySet target, Collection<?> col)
protected static char[] makeLowerCaseCopy(char[] buffer)
protected final Token createToken(int offset, int length, Token prototype)
protected void decompose(Token token)
protected abstract void decomposeInternal(Token token)
public void reset() throws IOException
    Description copied from class: TokenFilter
    Reset the filter as well as the input TokenStream.
    Overrides:
        reset in class TokenFilter
    Throws:
        IOException