org.apache.lucene.analysis.cn

@Deprecated
public final class ChineseTokenizer extends Tokenizer

Deprecated. Use StandardTokenizer instead, which has the same functionality. This filter will be removed in Lucene 5.0.
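Because the class is deprecated in favor of StandardTokenizer, new code can obtain the same one-character-per-token behavior from StandardTokenizer's UAX#29 rules. A minimal sketch of such a replacement, assuming a Lucene 4.x analyzers-common jar; the Version.LUCENE_47 constant and the helper name newCjkAwareTokenizer are illustrative only:

```java
import java.io.Reader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

public final class Tokenizers {
    private Tokenizers() {}

    // Stands in for `new ChineseTokenizer(in)`: StandardTokenizer's UAX#29 rules
    // also emit one token per CJK character and group runs of Latin letters/digits.
    public static Tokenizer newCjkAwareTokenizer(Reader in) {
        return new StandardTokenizer(Version.LUCENE_47, in);
    }
}
```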
The difference between ChineseTokenizer and CJKTokenizer is that they have different token parsing logic.
For example, if the Chinese text "C1C2C3C4" is to be indexed:
- The tokens returned from ChineseTokenizer are C1, C2, C3, C4.
- The tokens returned from CJKTokenizer are C1C2, C2C3, C3C4.
Therefore the index created by CJKTokenizer is much larger.
The problem is that when searching for C1, C1C2, C1C3, C4C2, C1C2C3 ... the ChineseTokenizer works, but the CJKTokenizer will not work.
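The single-character tokens described above can be observed by driving the tokenizer through the usual TokenStream lifecycle (reset, incrementToken, end, close). A minimal sketch, assuming a Lucene 4.x analyzers-common jar on the classpath; the sample text and the class name ChineseTokenizerDemo are illustrative only:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.cn.ChineseTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ChineseTokenizerDemo {
    public static void main(String[] args) throws IOException {
        // Each CJK character in the input is expected to come back as its own token.
        ChineseTokenizer tokenizer = new ChineseTokenizer(new StringReader("中华人民共和国"));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString());
        }
        tokenizer.end();
        tokenizer.close();
    }
}
```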
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource: AttributeSource.AttributeFactory, AttributeSource.State

| Constructor and Description |
|---|
| ChineseTokenizer(AttributeSource.AttributeFactory factory, Reader in) Deprecated. |
| ChineseTokenizer(Reader in) Deprecated. |
| Modifier and Type | Method and Description |
|---|---|
| void | end() Deprecated. |
| boolean | incrementToken() Deprecated. |
| void | reset() Deprecated. |
Methods inherited from class org.apache.lucene.analysis.Tokenizer: close, correctOffset, setReader

Methods inherited from class org.apache.lucene.util.AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString

public ChineseTokenizer(Reader in)
public ChineseTokenizer(AttributeSource.AttributeFactory factory, Reader in)
public boolean incrementToken() throws IOException
Specified by: incrementToken in class TokenStream
Throws: IOException

public final void end() throws IOException
Overrides: end in class TokenStream
Throws: IOException

public void reset() throws IOException
Overrides: reset in class Tokenizer
Throws: IOException

Copyright © 2000-2014 Apache Software Foundation. All Rights Reserved.