Deprecated. Use StandardTokenizer instead, which has the same functionality. This filter will be removed in Lucene 5.0.

@Deprecated
public final class ChineseTokenizer extends Tokenizer
The difference between ChineseTokenizer and CJKTokenizer is that they have different token parsing logic. For example, if the Chinese text "C1C2C3C4" is to be indexed:

- The tokens returned from ChineseTokenizer are C1, C2, C3, C4.
- The tokens returned from CJKTokenizer are C1C2, C2C3, C3C4.

Therefore the index created by CJKTokenizer is much larger. The problem is that when searching for C1, C1C2, C1C3, C4C2, C1C2C3 ... the ChineseTokenizer works, but the CJKTokenizer will not work.
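The contrast between the two parsing strategies can be sketched in plain Java (a hypothetical standalone helper, not the Lucene classes themselves), with each Latin letter standing in for one Chinese character:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: unigram segmentation (ChineseTokenizer's approach)
// versus overlapping bigram segmentation (CJKTokenizer's approach).
public class SegmentationDemo {

    /** One token per character, as ChineseTokenizer does. */
    public static List<String> unigrams(String text) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i < text.length(); i++) {
            tokens.add(text.substring(i, i + 1));
        }
        return tokens;
    }

    /** Overlapping two-character tokens, as CJKTokenizer does. */
    public static List<String> bigrams(String text) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i + 2 <= text.length(); i++) {
            tokens.add(text.substring(i, i + 2));
        }
        return tokens;
    }
}
```

Here `unigrams("ABCD")` yields A, B, C, D while `bigrams("ABCD")` yields AB, BC, CD. A single-character query such as A matches a unigram token directly, but equals no bigram token, which is why some of the searches listed above fail against a CJKTokenizer index.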
Nested classes inherited from class AttributeSource: AttributeSource.AttributeFactory, AttributeSource.State

| Constructor and Description |
|---|
| ChineseTokenizer(AttributeSource.AttributeFactory factory, Reader in) Deprecated. |
| ChineseTokenizer(AttributeSource source, Reader in) Deprecated. |
| ChineseTokenizer(Reader in) Deprecated. |
| Modifier and Type | Method and Description |
|---|---|
| void | end() Deprecated. This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). |
| boolean | incrementToken() Deprecated. Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
| void | reset() Deprecated. Resets this stream to the beginning. |
| void | reset(Reader input) Deprecated. Expert: Reset the tokenizer to a new reader. |
Methods inherited from class Tokenizer: close, correctOffset

Methods inherited from class AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString

public ChineseTokenizer(Reader in)

public ChineseTokenizer(AttributeSource source, Reader in)

public ChineseTokenizer(AttributeSource.AttributeFactory factory, Reader in)
public boolean incrementToken() throws IOException

Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer
needs to preserve the state for subsequent calls, it can use
AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient
implementation is crucial for good performance. To avoid calls to
AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class),
references to all AttributeImpls that this stream uses should be
retrieved during instantiation.
To ensure that filters and consumers know which attributes are available,
the attributes must be added during instantiation. Filters and consumers
are not required to check for availability of attributes in
TokenStream.incrementToken().
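The reuse-and-copy contract described above can be illustrated with a minimal plain-Java stand-in (hypothetical; `MiniStream` and `termAtt` are illustrative names, not Lucene classes): the stream exposes one mutable attribute object that is overwritten on every advance, so a consumer that wants to keep a token must copy the value out.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the incrementToken() contract.
public class MiniStream {
    private final Iterator<String> source;
    // Single reused attribute, analogous to a term attribute retrieved once at instantiation.
    public final StringBuilder termAtt = new StringBuilder();

    public MiniStream(List<String> tokens) { this.source = tokens.iterator(); }

    /** Returns false when exhausted; otherwise overwrites the shared attribute in place. */
    public boolean incrementToken() {
        if (!source.hasNext()) return false;
        termAtt.setLength(0);
        termAtt.append(source.next());
        return true;
    }

    /** Consumer loop: copies each value out, since the attribute object is reused. */
    public static List<String> consume(List<String> tokens) {
        MiniStream s = new MiniStream(tokens);
        List<String> out = new ArrayList<>();
        while (s.incrementToken()) {
            out.add(s.termAtt.toString()); // copy; keeping termAtt itself would alias every token
        }
        return out;
    }
}
```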
Specified by: incrementToken in class TokenStream
Throws: IOException

public final void end()

Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.
This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters followed the last token but a WhitespaceTokenizer was used.

Overrides: end in class TokenStream

public void reset() throws IOException

Description copied from class: TokenStream
Resets this stream to the beginning. TokenStream.reset() is not needed for
the standard indexing process. However, if the tokens of a
TokenStream are intended to be consumed more than once, it is
necessary to implement TokenStream.reset(). Note that if your TokenStream
caches tokens and feeds them back again after a reset, it is imperative
that you clone the tokens when you store them away (on the first pass) as
well as when you return them (on future passes after TokenStream.reset()).

Overrides: reset in class TokenStream
Throws: IOException

public void reset(Reader input) throws IOException

Expert: Reset the tokenizer to a new reader.

Overrides: reset in class Tokenizer
Throws: IOException
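The advice above about cloning cached tokens across reset() can be demonstrated with a small hypothetical sketch (plain Java, not the Lucene API): because a producer reuses one mutable token buffer, a cache that stores the live object ends up with every slot aliasing the last token, while storing a copy preserves each token.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: why a token-caching stream must clone tokens when storing them.
public class CachingDemo {

    public static List<CharSequence> cacheFirstPass(List<String> terms, boolean clone) {
        StringBuilder shared = new StringBuilder(); // the single reused token buffer
        List<CharSequence> cache = new ArrayList<>();
        for (String t : terms) {
            shared.setLength(0);
            shared.append(t);                       // producer overwrites the buffer in place
            // Storing the live object means every cache slot sees later overwrites;
            // cloning (here via toString()) freezes the token's current contents.
            cache.add(clone ? shared.toString() : shared);
        }
        return cache;
    }
}
```

With `clone` set to true the cache replays the original tokens; with it set to false every entry renders as the last term seen, which is exactly the corruption the paragraph above warns about.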