org.apache.lucene.analysis.standard
Class UAX29URLEmailTokenizer

java.lang.Object
  extended by org.apache.lucene.util.AttributeSource
      extended by org.apache.lucene.analysis.TokenStream
          extended by org.apache.lucene.analysis.Tokenizer
              extended by org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer
All Implemented Interfaces:
Closeable

public final class UAX29URLEmailTokenizer
extends Tokenizer

This class implements Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. URLs and email addresses are also tokenized according to the relevant RFCs.

Tokens produced are of the following types: WORD_TYPE, NUMERIC_TYPE, URL_TYPE, EMAIL_TYPE, SOUTH_EAST_ASIAN_TYPE, IDEOGRAPHIC_TYPE, HIRAGANA_TYPE, KATAKANA_TYPE, and HANGUL_TYPE (see the Field Summary below).
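A minimal usage sketch (assumes a Lucene 3.x jar on the classpath; CharTermAttribute and TypeAttribute are the standard attribute classes from org.apache.lucene.analysis.tokenattributes, and the sample input text is illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

public class UAX29Example {
    public static void main(String[] args) throws Exception {
        String text = "Mail alice@example.com or see http://example.com/docs";
        UAX29URLEmailTokenizer tokenizer =
            new UAX29URLEmailTokenizer(new StringReader(text));

        // Retrieve attribute references once, up front, as recommended
        // in the incrementToken() documentation below.
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        TypeAttribute type = tokenizer.addAttribute(TypeAttribute.class);

        while (tokenizer.incrementToken()) {
            System.out.println(term.toString() + "\t" + type.type());
        }
        tokenizer.end();   // set the final offset
        tokenizer.close();
    }
}
```

The email address and URL in the input should each come back as a single token with the EMAIL_TYPE and URL_TYPE type strings, rather than being split at punctuation.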


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
 
Field Summary
static String EMAIL_TYPE
          E-mail addresses
static String HANGUL_TYPE
           
static String HIRAGANA_TYPE
           
static String IDEOGRAPHIC_TYPE
           
static String KATAKANA_TYPE
           
static String NUMERIC_TYPE
          Numbers
static String SOUTH_EAST_ASIAN_TYPE
          Chars in class \p{Line_Break = Complex_Context} are from South East Asian scripts (Thai, Lao, Myanmar, Khmer, etc.).
static String URL_TYPE
          URLs with scheme: HTTP(S), FTP, or FILE; no-scheme URLs match HTTP syntax
static String WORD_TYPE
          Alphanumeric sequences
 
Fields inherited from class org.apache.lucene.analysis.Tokenizer
input
 
Constructor Summary
UAX29URLEmailTokenizer(AttributeSource.AttributeFactory factory, Reader input)
           
UAX29URLEmailTokenizer(AttributeSource source, Reader input)
           
UAX29URLEmailTokenizer(InputStream in)
          Creates a new scanner.
UAX29URLEmailTokenizer(Reader in)
          Creates a new scanner. There is also a java.io.InputStream version of this constructor.
 
Method Summary
 void end()
          This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
 int getMaxTokenLength()
          Returns the max allowed token length.
 boolean incrementToken()
          Consumers (e.g., IndexWriter) use this method to advance the stream to the next token.
 void reset(Reader reader)
          Expert: Reset the tokenizer to a new reader.
 void setMaxTokenLength(int length)
          Set the max allowed token length.
 
Methods inherited from class org.apache.lucene.analysis.Tokenizer
close, correctOffset
 
Methods inherited from class org.apache.lucene.analysis.TokenStream
reset
 
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
 

Field Detail

WORD_TYPE

public static final String WORD_TYPE
Alphanumeric sequences


NUMERIC_TYPE

public static final String NUMERIC_TYPE
Numbers


URL_TYPE

public static final String URL_TYPE
URLs with scheme: HTTP(S), FTP, or FILE; no-scheme URLs match HTTP syntax

See Also:
Constant Field Values

EMAIL_TYPE

public static final String EMAIL_TYPE
E-mail addresses

See Also:
Constant Field Values

SOUTH_EAST_ASIAN_TYPE

public static final String SOUTH_EAST_ASIAN_TYPE
Chars in class \p{Line_Break = Complex_Context} are from South East Asian scripts (Thai, Lao, Myanmar, Khmer, etc.). Sequences of these are kept together as a single token rather than broken up, because the logic required to break them at word boundaries is too complex for UAX#29.

See Unicode Line Breaking Algorithm: http://www.unicode.org/reports/tr14/#SA


IDEOGRAPHIC_TYPE

public static final String IDEOGRAPHIC_TYPE

HIRAGANA_TYPE

public static final String HIRAGANA_TYPE

KATAKANA_TYPE

public static final String KATAKANA_TYPE

HANGUL_TYPE

public static final String HANGUL_TYPE
Constructor Detail

UAX29URLEmailTokenizer

public UAX29URLEmailTokenizer(AttributeSource source,
                              Reader input)
Parameters:
source - The AttributeSource to use
input - The input reader

UAX29URLEmailTokenizer

public UAX29URLEmailTokenizer(AttributeSource.AttributeFactory factory,
                              Reader input)
Parameters:
factory - The AttributeFactory to use
input - The input reader

UAX29URLEmailTokenizer

public UAX29URLEmailTokenizer(Reader in)
Creates a new scanner. There is also a java.io.InputStream version of this constructor.

Parameters:
in - the java.io.Reader to read input from.

UAX29URLEmailTokenizer

public UAX29URLEmailTokenizer(InputStream in)
Creates a new scanner. There is also a java.io.Reader version of this constructor.

Parameters:
in - the java.io.InputStream to read input from.
Method Detail

setMaxTokenLength

public void setMaxTokenLength(int length)
Set the max allowed token length. Any token longer than this is skipped.

Parameters:
length - the new max allowed token length

getMaxTokenLength

public int getMaxTokenLength()
Returns the max allowed token length. Any token longer than this is skipped.

Returns:
the max allowed token length
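
For instance (the limit of 64 here is an arbitrary illustrative value, and the wrapper class exists only to make the snippet compile on its own):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;

public class MaxTokenLengthExample {
    public static void main(String[] args) throws Exception {
        UAX29URLEmailTokenizer tokenizer =
            new UAX29URLEmailTokenizer(new StringReader("some input"));
        tokenizer.setMaxTokenLength(64);  // tokens longer than 64 chars are skipped
        assert tokenizer.getMaxTokenLength() == 64;
        tokenizer.close();
    }
}
```

Note that, per the descriptions above, over-long tokens are skipped entirely rather than truncated.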

end

public final void end()
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters followed the last token and a WhitespaceTokenizer was used.

Overrides:
end in class TokenStream

reset

public void reset(Reader reader)
           throws IOException
Description copied from class: Tokenizer
Expert: Reset the tokenizer to a new reader. Typically, an analyzer (in its reusableTokenStream method) will use this to re-use a previously created tokenizer.

Overrides:
reset in class Tokenizer
Throws:
IOException
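
A sketch of the reuse pattern this enables, with one tokenizer instance serving several documents (the document strings are illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;

public class ReuseExample {
    public static void main(String[] args) throws Exception {
        UAX29URLEmailTokenizer tokenizer =
            new UAX29URLEmailTokenizer(new StringReader("first document"));
        while (tokenizer.incrementToken()) { /* consume first document */ }
        tokenizer.end();

        // Point the same instance at a new reader instead of
        // allocating a fresh tokenizer per document.
        tokenizer.reset(new StringReader("second document"));
        while (tokenizer.incrementToken()) { /* consume second document */ }
        tokenizer.end();
        tokenizer.close();
    }
}
```

This is what an analyzer's reusableTokenStream method does internally, as noted above.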

incrementToken

public final boolean incrementToken()
                             throws IOException
Description copied from class: TokenStream
Consumers (e.g., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

Specified by:
incrementToken in class TokenStream
Returns:
false for end of stream; true otherwise
Throws:
IOException


Copyright © 2000-2011 Apache Software Foundation. All Rights Reserved.