org.apache.lucene.analysis.synonym
Class SynonymFilter

java.lang.Object
  extended by org.apache.lucene.util.AttributeSource
      extended by org.apache.lucene.analysis.TokenStream
          extended by org.apache.lucene.analysis.TokenFilter
              extended by org.apache.lucene.analysis.synonym.SynonymFilter
All Implemented Interfaces:
Closeable

public final class SynonymFilter
extends org.apache.lucene.analysis.TokenFilter

Matches single- or multi-word synonyms in a token stream. This token stream cannot properly handle position increments != 1, i.e., you should place this filter before filtering out stop words.
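
For illustration only (this sketch is not part of the class), a minimal Lucene 3.x-style chain that applies this filter before stop-word removal, assuming a prebuilt SynonymMap and the core WhitespaceTokenizer and StopFilter; the class name and the Version constant are illustrative and should match your release:

   import java.io.Reader;

   import org.apache.lucene.analysis.StopAnalyzer;
   import org.apache.lucene.analysis.StopFilter;
   import org.apache.lucene.analysis.TokenStream;
   import org.apache.lucene.analysis.WhitespaceTokenizer;
   import org.apache.lucene.analysis.synonym.SynonymFilter;
   import org.apache.lucene.analysis.synonym.SynonymMap;
   import org.apache.lucene.util.Version;

   public class SynonymChainExample {
     // Synonyms are matched before stop words are removed, so the filter
     // only ever sees position increments of 1.
     static TokenStream buildChain(Reader reader, SynonymMap synonyms) {
       TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_36, reader);
       ts = new SynonymFilter(ts, synonyms, true); // ignoreCase = true
       return new StopFilter(Version.LUCENE_36, ts, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
     }
   }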

Note that with the current implementation, parsing is greedy, so whenever multiple parses would apply, the rule starting the earliest and parsing the most tokens wins. For example, if you have these rules:

   a -> x
   a b -> y
   b c d -> z
 
Then input a b c d e parses to y b c d, i.e., the 2nd rule "wins" because it started earliest and matched more input tokens than the other rules starting at that point.
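
As a hypothetical sketch, the three rules above could be registered like this, assuming the SynonymMap.Builder API from this package (add, join, build); the class name is illustrative only:

   import java.io.IOException;

   import org.apache.lucene.analysis.synonym.SynonymMap;
   import org.apache.lucene.util.CharsRef;

   public class GreedyRulesExample {
     // Builds the three rules above: a -> x, a b -> y, b c d -> z.
     // The last argument (includeOrig) controls whether the original tokens
     // are kept alongside the synonym output (keepOrig).
     static SynonymMap buildRules() throws IOException {
       SynonymMap.Builder builder = new SynonymMap.Builder(true); // true = dedup entries
       builder.add(new CharsRef("a"), new CharsRef("x"), false);
       builder.add(SynonymMap.Builder.join(new String[] {"a", "b"}, new CharsRef()),
                   new CharsRef("y"), false);
       builder.add(SynonymMap.Builder.join(new String[] {"b", "c", "d"}, new CharsRef()),
                   new CharsRef("z"), false);
       return builder.build();
     }
   }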

A future improvement to this filter could allow non-greedy parsing, such that the 3rd rule would win, and also separately allow multiple parses, such that all 3 rules would match, perhaps even on a rule by rule basis.

NOTE: when a match occurs, the output tokens associated with the matching rule are "stacked" on top of the input stream (if the rule had keepOrig=true) and also on top of another matched rule's output tokens. This is not a correct solution, as really the output should be an arbitrary graph/lattice. For example, with the above match, you would expect an exact PhraseQuery "y b c" to match the parsed tokens, but it will fail to do so. This limitation is necessary because Lucene's TokenStream (and index) cannot yet represent an arbitrary graph.
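
One way to observe this stacking is to consume the stream and print each token's position increment; tokens stacked on an earlier position report a position increment of 0. A minimal sketch, assuming a whitespace tokenizer and a prebuilt SynonymMap (the class name, method name, and Version constant are illustrative only):

   import java.io.IOException;
   import java.io.StringReader;

   import org.apache.lucene.analysis.TokenStream;
   import org.apache.lucene.analysis.WhitespaceTokenizer;
   import org.apache.lucene.analysis.synonym.SynonymFilter;
   import org.apache.lucene.analysis.synonym.SynonymMap;
   import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
   import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
   import org.apache.lucene.util.Version;

   public class StackedOutputExample {
     // Prints each output token and its position increment; tokens stacked on
     // an earlier position come out with a position increment of 0.
     static void dump(SynonymMap synonyms, String text) throws IOException {
       TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_36, new StringReader(text));
       ts = new SynonymFilter(ts, synonyms, true);
       CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
       PositionIncrementAttribute posIncr = ts.addAttribute(PositionIncrementAttribute.class);
       ts.reset();
       while (ts.incrementToken()) {
         System.out.println(term.toString() + " (posIncr=" + posIncr.getPositionIncrement() + ")");
       }
       ts.end();
       ts.close();
     }
   }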

NOTE: If multiple incoming tokens arrive on the same position, only the first token at that position is used for parsing. Subsequent tokens simply pass through and are not parsed. A future improvement would be to allow these tokens to also be matched.


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
org.apache.lucene.util.AttributeSource.AttributeFactory, org.apache.lucene.util.AttributeSource.State
 
Field Summary
static String TYPE_SYNONYM
           
 
Fields inherited from class org.apache.lucene.analysis.TokenFilter
input
 
Constructor Summary
SynonymFilter(org.apache.lucene.analysis.TokenStream input, SynonymMap synonyms, boolean ignoreCase)
           
 
Method Summary
 boolean incrementToken()
           
 void reset()
           
 
Methods inherited from class org.apache.lucene.analysis.TokenFilter
close, end
 
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
 

Field Detail

TYPE_SYNONYM

public static final String TYPE_SYNONYM
See Also:
Constant Field Values
Constructor Detail

SynonymFilter

public SynonymFilter(org.apache.lucene.analysis.TokenStream input,
                     SynonymMap synonyms,
                     boolean ignoreCase)
Parameters:
input - input tokenstream
synonyms - synonym map
ignoreCase - case-folds input for matching with Character.toLowerCase(int). Note that if you set this to true, it is your responsibility to lowercase the input entries when you create the SynonymMap
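
A short sketch of that responsibility, assuming the SynonymMap.Builder API from this package; the class name and the example entry are illustrative only:

   import java.io.IOException;
   import java.util.Locale;

   import org.apache.lucene.analysis.TokenStream;
   import org.apache.lucene.analysis.synonym.SynonymFilter;
   import org.apache.lucene.analysis.synonym.SynonymMap;
   import org.apache.lucene.util.CharsRef;

   public class IgnoreCaseExample {
     // The map entry is lowercased up front; ignoreCase=true then case-folds
     // only the incoming tokens, so "UK", "Uk" and "uk" all match this entry.
     static TokenStream wrap(TokenStream input) throws IOException {
       SynonymMap.Builder builder = new SynonymMap.Builder(true);
       builder.add(new CharsRef("UK".toLowerCase(Locale.ENGLISH)),
                   new CharsRef("britain"), true); // keepOrig = true
       return new SynonymFilter(input, builder.build(), true); // ignoreCase = true
     }
   }
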
Method Detail

incrementToken

public boolean incrementToken()
                       throws IOException
Specified by:
incrementToken in class org.apache.lucene.analysis.TokenStream
Throws:
IOException

reset

public void reset()
           throws IOException
Overrides:
reset in class org.apache.lucene.analysis.TokenFilter
Throws:
IOException


Copyright © 2000-2011 Apache Software Foundation. All Rights Reserved.