org.apache.lucene.analysis.path
Class PathHierarchyTokenizer
java.lang.Object
org.apache.lucene.util.AttributeSource
org.apache.lucene.analysis.TokenStream
org.apache.lucene.analysis.Tokenizer
org.apache.lucene.analysis.path.PathHierarchyTokenizer
- All Implemented Interfaces:
- Closeable
public class PathHierarchyTokenizer extends Tokenizer
Take something like:
/something/something/else
and make:
/something
/something/something
/something/something/else
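The prefix expansion above can be sketched in plain Java. This is an illustrative model only, not the Lucene source; the class and method names here are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: shows the prefix expansion the tokenizer
// performs, not the actual Lucene implementation.
public class PathHierarchySketch {

    /** Emit every delimiter-bounded prefix of the path, longest last. */
    static List<String> tokenize(String path, char delimiter) {
        List<String> tokens = new ArrayList<>();
        int pos = path.indexOf(delimiter, 1); // start past the leading delimiter
        while (pos != -1) {
            tokens.add(path.substring(0, pos));
            pos = path.indexOf(delimiter, pos + 1);
        }
        tokens.add(path); // the full path is always the final token
        return tokens;
    }

    public static void main(String[] args) {
        for (String token : tokenize("/something/something/else", '/')) {
            System.out.println(token);
        }
    }
}
```

Each emitted token is a prefix of the next, which is what makes the output useful for drill-down faceting on paths.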
Fields inherited from class org.apache.lucene.analysis.Tokenizer:
input
Constructor Summary
PathHierarchyTokenizer(Reader input)
PathHierarchyTokenizer(Reader input, char delimiter, char replacement)
PathHierarchyTokenizer(Reader input, char delimiter, char replacement, int skip)
PathHierarchyTokenizer(Reader input, int skip)
PathHierarchyTokenizer(Reader input, int bufferSize, char delimiter)
PathHierarchyTokenizer(Reader input, int bufferSize, char delimiter, char replacement, int skip)
Method Summary
void end()
    This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
boolean incrementToken()
    Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
void reset(Reader input)
    Expert: Reset the tokenizer to a new reader.
Methods inherited from class org.apache.lucene.util.AttributeSource:
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
DEFAULT_DELIMITER
public static final char DEFAULT_DELIMITER
- See Also:
- Constant Field Values
DEFAULT_SKIP
public static final int DEFAULT_SKIP
- See Also:
- Constant Field Values
PathHierarchyTokenizer
public PathHierarchyTokenizer(Reader input)

PathHierarchyTokenizer
public PathHierarchyTokenizer(Reader input, int skip)

PathHierarchyTokenizer
public PathHierarchyTokenizer(Reader input, int bufferSize, char delimiter)

PathHierarchyTokenizer
public PathHierarchyTokenizer(Reader input, char delimiter, char replacement)

PathHierarchyTokenizer
public PathHierarchyTokenizer(Reader input, char delimiter, char replacement, int skip)

PathHierarchyTokenizer
public PathHierarchyTokenizer(Reader input, int bufferSize, char delimiter, char replacement, int skip)
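A self-contained sketch of how the delimiter, replacement, and skip parameters appear to interact (an assumption-based model with invented names, not the Lucene implementation — in particular, the reading that skip drops that many leading path components is an assumption):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Assumption-based model of the constructor parameters (not Lucene source):
// delimiter splits the path, replacement stands in for the delimiter in the
// emitted tokens, and skip (assumed) drops that many leading components.
public class PathHierarchyParams {

    static List<String> tokenize(String path, char delimiter, char replacement, int skip) {
        List<String> tokens = new ArrayList<>();
        String[] parts = path.split(Pattern.quote(String.valueOf(delimiter)), -1);
        // A leading delimiter yields an empty first part; start past it.
        int start = parts[0].isEmpty() ? 1 : 0;
        StringBuilder sb = new StringBuilder();
        int seen = 0;
        for (int i = start; i < parts.length; i++) {
            if (++seen <= skip) continue; // assumed: skip discards leading components
            if (sb.length() > 0 || path.charAt(0) == delimiter) {
                sb.append(replacement); // delimiter rendered as the replacement char
            }
            sb.append(parts[i]);
            tokens.add(sb.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // skip = 1: the first component ("usr") is dropped from every token.
        System.out.println(tokenize("/usr/local/bin", '/', '/', 1));
        // replacement = '\\': tokens come out with backslashes.
        System.out.println(tokenize("c:/temp/file", '/', '\\', 0));
    }
}
```

With skip left at DEFAULT_SKIP and replacement equal to delimiter, this reduces to the plain prefix expansion shown at the top of the page.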
incrementToken
public final boolean incrementToken() throws IOException
- Description copied from class: TokenStream
- Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has been returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Specified by: incrementToken in class TokenStream
- Returns: false for end of stream; true otherwise
- Throws: IOException
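The contract above — advance with incrementToken(), read a reused attribute slot after each successful call — can be illustrated with a minimal stand-in. The names here are invented for illustration; the real types are TokenStream and the attribute classes in org.apache.lucene.analysis:

```java
import java.util.Iterator;
import java.util.List;

// Minimal stand-in for the incrementToken() contract. Names are invented;
// this models the API shape, not Lucene itself.
public class StreamContractSketch {

    static class SimpleStream {
        private final Iterator<String> source;
        private String term; // plays the role of a reused attribute slot

        SimpleStream(List<String> tokens) {
            this.source = tokens.iterator();
        }

        /** Advance to the next token; false signals end of stream. */
        boolean incrementToken() {
            if (!source.hasNext()) {
                return false;
            }
            term = source.next(); // overwritten in place on every call
            return true;
        }

        String term() {
            return term;
        }
    }

    public static void main(String[] args) {
        SimpleStream stream = new SimpleStream(List.of("/a", "/a/b", "/a/b/c"));
        // Canonical consumption loop: read the attribute after each
        // successful incrementToken() call; never cache it across calls,
        // since the producer reuses the same slot for every token.
        while (stream.incrementToken()) {
            System.out.println(stream.term());
        }
    }
}
```

The single mutable slot is why the Javadoc warns that the caller may arbitrarily change attribute state between calls: nothing is allocated per token, so held references go stale.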
end
public final void end()
- Description copied from class: TokenStream
- This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.
This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g., in case one or more whitespaces followed the last token, but a WhitespaceTokenizer was used.
- Overrides: end in class TokenStream
reset
public void reset(Reader input) throws IOException
- Description copied from class: Tokenizer
- Expert: Reset the tokenizer to a new reader. Typically, an analyzer (in its reusableTokenStream method) will use this to re-use a previously created tokenizer.
- Overrides: reset in class Tokenizer
- Throws: IOException
Copyright © 2000-2011 Apache Software Foundation. All Rights Reserved.