Deprecated API


Contents
Deprecated Interfaces
org.apache.lucene.search.Searchable
          In 4.0 this interface is removed/absorbed into IndexSearcher 
org.apache.lucene.analysis.tokenattributes.TermAttribute
          Use CharTermAttribute instead. 
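The migration is mechanical: request CharTermAttribute instead of TermAttribute and read the term text through its CharSequence API. A minimal sketch, assuming lucene-core 3.x on the classpath (the analyzer and field name are illustrative):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class CharTermAttributeExample {
  // Collects token texts using CharTermAttribute (the TermAttribute replacement).
  static List<String> tokens(String text) throws Exception {
    TokenStream ts = new WhitespaceAnalyzer(Version.LUCENE_35)
        .tokenStream("body", new StringReader(text));
    // Old code: TermAttribute term = ts.addAttribute(TermAttribute.class);
    //           ... term.term() ...
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    List<String> out = new ArrayList<String>();
    ts.reset();
    while (ts.incrementToken()) {
      out.add(term.toString());  // CharTermAttribute implements CharSequence
    }
    ts.end();
    ts.close();
    return out;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(tokens("hello world"));
  }
}
```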
 

Deprecated Classes
org.apache.lucene.analysis.CharArraySet.CharArraySetIterator
          Use the standard iterator, which returns char[] instances. 
org.apache.lucene.document.DateField
          If you build a new index, use DateTools or NumericField instead. This class is included for use with existing indices and will be removed in a future release (possibly Lucene 4.0). 
org.apache.lucene.search.FilterManager
          used by remote package which is deprecated as well. You should use CachingWrapperFilter if you wish to cache Filters. 
org.apache.lucene.index.IndexWriter.MaxFieldLength
          use LimitTokenCountAnalyzer instead. 
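Instead of configuring a field-length limit on the writer, the limit is now expressed by wrapping the analyzer. A hedged sketch, assuming lucene-core 3.1+ on the classpath (the wrapped analyzer and the 10000-token limit are illustrative):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LimitTokenCountAnalyzer;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.util.Version;

public class LimitTokenCountExample {
  // Wraps any Analyzer so each token stream yields at most maxTokens tokens;
  // this replaces IndexWriter.MaxFieldLength / setMaxFieldLength(int).
  static Analyzer limited(Analyzer delegate, int maxTokens) {
    return new LimitTokenCountAnalyzer(delegate, maxTokens);
  }

  public static void main(String[] args) {
    // Old: new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.LIMITED)
    Analyzer analyzer = limited(new WhitespaceAnalyzer(Version.LUCENE_35), 10000);
    System.out.println(analyzer.getClass().getSimpleName());
  }
}
```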
org.apache.lucene.analysis.ISOLatin1AccentFilter
          If you build a new index, use ASCIIFoldingFilter which covers a superset of Latin 1. This class is included for use with existing indexes and will be removed in a future release (possibly Lucene 4.0). 
org.apache.lucene.search.MultiSearcher
          If you are using MultiSearcher over IndexSearchers, please use MultiReader instead; this class does not properly handle certain kinds of queries (see LUCENE-2756). 
org.apache.lucene.document.NumberTools
          For new indexes use NumericUtils instead, which provides a sortable binary representation (prefix encoded) of numeric values. To index and efficiently query numeric values use NumericField and NumericRangeQuery. This class is included for use with existing indices and will be removed in a future release (possibly Lucene 4.0). 
org.apache.lucene.search.ParallelMultiSearcher
          Please pass an ExecutorService to IndexSearcher, instead. 
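Both MultiSearcher and ParallelMultiSearcher are subsumed by a single IndexSearcher over a MultiReader; passing an ExecutorService makes the per-segment search concurrent. A sketch assuming lucene-core 3.1+ on the classpath (the RAMDirectory indexes and field names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class MultiReaderSearchExample {
  static Directory makeIndex(String text) throws Exception {
    Directory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(
        Version.LUCENE_35, new WhitespaceAnalyzer(Version.LUCENE_35)));
    Document doc = new Document();
    doc.add(new Field("body", text, Field.Store.NO, Field.Index.ANALYZED));
    w.addDocument(doc);
    w.close();
    return dir;
  }

  // Searches two sub-indexes through one MultiReader-backed IndexSearcher.
  static int countHits() throws Exception {
    IndexReader r1 = IndexReader.open(makeIndex("apple"));
    IndexReader r2 = IndexReader.open(makeIndex("apple banana"));
    ExecutorService pool = Executors.newFixedThreadPool(2);
    // Old: new ParallelMultiSearcher(new IndexSearcher(r1), new IndexSearcher(r2))
    IndexSearcher searcher = new IndexSearcher(new MultiReader(r1, r2), pool);
    int hits = searcher.search(new TermQuery(new Term("body", "apple")), 10).totalHits;
    searcher.close();
    pool.shutdown();
    return hits;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(countHits()); // one match per sub-index
  }
}
```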
org.apache.lucene.util.Parameter
          Use a Java 5 enum instead; this class will be removed in a later Lucene 3.x release. 
org.apache.lucene.search.Searcher
          In 4.0 this abstract class is removed/absorbed into IndexSearcher 
org.apache.lucene.search.SimilarityDelegator
          this class will be removed in 4.0. Please subclass Similarity or DefaultSimilarity instead. 
org.apache.lucene.analysis.standard.std31.StandardTokenizerImpl31
          This class is only for exact backwards compatibility 
org.apache.lucene.analysis.tokenattributes.TermAttributeImpl
          This class is not used anymore. The backwards layer in AttributeFactory uses the replacement implementation. 
org.apache.lucene.analysis.standard.std31.UAX29URLEmailTokenizerImpl31
          This class is only for exact backwards compatibility 
 

Deprecated Fields
org.apache.lucene.analysis.standard.StandardTokenizer.ACRONYM
            
org.apache.lucene.analysis.standard.ClassicTokenizer.ACRONYM_DEP
          this solves a bug where HOSTs that end with '.' are identified as ACRONYMs. 
org.apache.lucene.analysis.standard.StandardTokenizer.ACRONYM_DEP
          this solves a bug where HOSTs that end with '.' are identified as ACRONYMs. 
org.apache.lucene.analysis.standard.StandardTokenizer.APOSTROPHE
            
org.apache.lucene.analysis.standard.StandardTokenizer.CJ
            
org.apache.lucene.analysis.standard.StandardTokenizer.COMPANY
            
org.apache.lucene.index.IndexWriter.DEFAULT_MAX_BUFFERED_DELETE_TERMS
          use IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS instead 
org.apache.lucene.index.IndexWriter.DEFAULT_MAX_BUFFERED_DOCS
          use IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS instead. 
org.apache.lucene.index.IndexWriter.DEFAULT_MAX_FIELD_LENGTH
          see IndexWriterConfig 
org.apache.lucene.index.IndexWriter.DEFAULT_RAM_BUFFER_SIZE_MB
          use IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB instead. 
org.apache.lucene.index.IndexWriter.DEFAULT_TERM_INDEX_INTERVAL
          use IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL instead. 
org.apache.lucene.index.IndexWriter.DISABLE_AUTO_FLUSH
          use IndexWriterConfig.DISABLE_AUTO_FLUSH instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.EMAIL_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.util.AttributeImpl.enableBackwards
          this will be removed in Lucene 4.0. 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.HANGUL_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.HIRAGANA_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.analysis.standard.StandardTokenizer.HOST
            
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.IDEOGRAPHIC_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.util.Constants.JAVA_1_1
          This constant is useless since Lucene is on Java 5 
org.apache.lucene.util.Constants.JAVA_1_2
          This constant is useless since Lucene is on Java 5 
org.apache.lucene.util.Constants.JAVA_1_3
          This constant is useless since Lucene is on Java 5 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.KATAKANA_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.NUMERIC_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.SOUTH_EAST_ASIAN_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.URL_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.WORD_TYPE
          use UAX29URLEmailTokenizer.TOKEN_TYPES instead 
org.apache.lucene.index.IndexWriter.WRITE_LOCK_TIMEOUT
          use IndexWriterConfig.WRITE_LOCK_TIMEOUT instead 
 

Deprecated Methods
org.apache.lucene.analysis.PerFieldAnalyzerWrapper.addAnalyzer(String, Analyzer)
          Changing the Analyzer for a field after instantiation prevents reusability. Analyzers for fields should be set during construction. 
org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(Directory...)
          use IndexWriter.addIndexes(Directory...) instead 
org.apache.lucene.search.MultiTermQuery.clearTotalNumberOfTerms()
          Don't use this method, as it is not thread-safe and useless. 
org.apache.lucene.search.MultiTermQueryWrapperFilter.clearTotalNumberOfTerms()
          Don't use this method, as it is not thread-safe and useless. 
org.apache.lucene.store.Directory.copy(Directory, Directory, boolean)
          should be replaced with calls to Directory.copy(Directory, String, String) for every file that needs copying. You can use the following code:
 IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
 for (String file : src.listAll()) {
   if (filter.accept(null, file)) {
     src.copy(dest, file, file);
   }
 }
 
 
org.apache.lucene.analysis.CharArraySet.copy(Set)
          use CharArraySet.copy(Version, Set) instead. 
org.apache.lucene.search.Searcher.createWeight(Query)
          never ever use this method in Weight implementations. Subclasses of Searcher should use Searcher.createNormalizedWeight(org.apache.lucene.search.Query), instead. 
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer)
          Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int) instead. This method will be removed in Lucene 4.0 
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer, ByteBuffer)
          Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int) instead. This method will be removed in Lucene 4.0 
org.apache.lucene.search.Similarity.decodeNorm(byte)
          Use Similarity.decodeNormValue(byte) instead. 
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer)
          Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int) instead. This method will be removed in Lucene 4.0 
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer, CharBuffer)
          Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int) instead. This method will be removed in Lucene 4.0 
org.apache.lucene.search.Similarity.encodeNorm(float)
          Use Similarity.encodeNormValue(float) instead. 
org.apache.lucene.index.IndexWriter.expungeDeletes()
            
org.apache.lucene.index.IndexWriter.expungeDeletes(boolean)
            
org.apache.lucene.queryParser.CharStream.getColumn()
            
org.apache.lucene.util.IndexableBinaryStringTools.getDecodedLength(CharBuffer)
          Use IndexableBinaryStringTools.getDecodedLength(char[], int, int) instead. This method will be removed in Lucene 4.0 
org.apache.lucene.index.IndexWriter.getDefaultWriteLockTimeout()
          use IndexWriterConfig.getDefaultWriteLockTimeout() instead 
org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsVersionDefault(Version)
          use StopFilter.StopFilter(Version, TokenStream, Set) instead 
org.apache.lucene.util.IndexableBinaryStringTools.getEncodedLength(ByteBuffer)
          Use IndexableBinaryStringTools.getEncodedLength(byte[], int, int) instead. This method will be removed in Lucene 4.0 
org.apache.lucene.document.Document.getField(String)
          use Document.getFieldable(java.lang.String) instead and cast depending on data type. 
org.apache.lucene.queryParser.QueryParser.getFieldQuery(String, String)
          Use QueryParser.getFieldQuery(String,String,boolean) instead. 
org.apache.lucene.document.Document.getFields(String)
          use Document.getFieldable(java.lang.String) instead and cast depending on data type. 
org.apache.lucene.store.FSDirectory.getFile()
          Use FSDirectory.getDirectory() instead. 
org.apache.lucene.queryParser.CharStream.getLine()
            
org.apache.lucene.index.IndexWriter.getMaxBufferedDeleteTerms()
          use IndexWriterConfig.getMaxBufferedDeleteTerms() instead 
org.apache.lucene.index.IndexWriter.getMaxBufferedDocs()
          use IndexWriterConfig.getMaxBufferedDocs() instead. 
org.apache.lucene.index.IndexWriter.getMaxFieldLength()
          use LimitTokenCountAnalyzer to limit number of tokens. 
org.apache.lucene.index.IndexWriter.getMaxMergeDocs()
          use LogMergePolicy.getMaxMergeDocs() directly. 
org.apache.lucene.index.LogByteSizeMergePolicy.getMaxMergeMBForOptimize()
          Renamed to LogByteSizeMergePolicy.getMaxMergeMBForForcedMerge() 
org.apache.lucene.index.IndexWriter.getMergedSegmentWarmer()
          use IndexWriterConfig.getMergedSegmentWarmer() instead. 
org.apache.lucene.index.IndexWriter.getMergeFactor()
          use LogMergePolicy.getMergeFactor() directly. 
org.apache.lucene.index.IndexWriter.getMergePolicy()
          use IndexWriterConfig.getMergePolicy() instead 
org.apache.lucene.index.IndexWriter.getMergeScheduler()
          use IndexWriterConfig.getMergeScheduler() instead 
org.apache.lucene.search.Similarity.getNormDecoder()
          Use instance methods for encoding/decoding norm values to enable customization. 
org.apache.lucene.document.AbstractField.getOmitTermFreqAndPositions()
          use AbstractField.getIndexOptions() instead. 
org.apache.lucene.index.IndexWriter.getRAMBufferSizeMB()
          use IndexWriterConfig.getRAMBufferSizeMB() instead. 
org.apache.lucene.index.IndexWriter.getReader()
          Please use IndexReader.open(IndexWriter,boolean) instead. 
org.apache.lucene.index.IndexWriter.getReader(int)
          Please use IndexReader.open(IndexWriter,boolean) instead. Furthermore, this method cannot guarantee the reader (and its sub-readers) will be opened with the termInfosIndexDivisor setting because some of them may have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int). You should set the requested termInfosIndexDivisor through IndexWriterConfig.setReaderTermsIndexDivisor(int) and use IndexWriter.getReader(). 
org.apache.lucene.index.IndexWriter.getReaderTermsIndexDivisor()
          use IndexWriterConfig.getReaderTermsIndexDivisor() instead. 
org.apache.lucene.index.IndexWriter.getSimilarity()
          use IndexWriterConfig.getSimilarity() instead 
org.apache.lucene.search.Scorer.getSimilarity()
          Store any Similarity you might need privately in your implementation instead. 
org.apache.lucene.search.Query.getSimilarity(Searcher)
          Instead of using "runtime" subclassing/delegation, subclass the Weight instead. 
org.apache.lucene.index.IndexWriter.getTermIndexInterval()
          use IndexWriterConfig.getTermIndexInterval() 
org.apache.lucene.search.MultiTermQuery.getTotalNumberOfTerms()
          Don't use this method, as it is not thread-safe and useless. 
org.apache.lucene.search.MultiTermQueryWrapperFilter.getTotalNumberOfTerms()
          Don't use this method, as it is not thread-safe and useless. 
org.apache.lucene.index.IndexWriter.getUseCompoundFile()
          use LogMergePolicy.getUseCompoundFile() 
org.apache.lucene.index.IndexWriter.getWriteLockTimeout()
          use IndexWriterConfig.getWriteLockTimeout() 
org.apache.lucene.search.MultiTermQuery.incTotalNumberOfTerms(int)
          Don't use this method, as it is not thread-safe and useless. 
org.apache.lucene.index.FilterIndexReader.isOptimized()
           
org.apache.lucene.index.MultiReader.isOptimized()
           
org.apache.lucene.index.IndexReader.isOptimized()
          Check segment count using IndexReader.getSequentialSubReaders() instead. 
org.apache.lucene.index.ParallelReader.isOptimized()
           
org.apache.lucene.analysis.standard.ClassicTokenizer.isReplaceInvalidAcronym()
          Remove in 3.X and make true the only valid value 
org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym()
          Remove in 3.X and make true the only valid value 
org.apache.lucene.analysis.CharTokenizer.isTokenChar(char)
          use CharTokenizer.isTokenChar(int) instead. This method will be removed in Lucene 4.0. 
org.apache.lucene.search.Similarity.lengthNorm(String, int)
          Please override computeNorm instead 
org.apache.lucene.analysis.StopFilter.makeStopSet(List)
          use StopFilter.makeStopSet(Version, List) instead 
org.apache.lucene.analysis.StopFilter.makeStopSet(List, boolean)
          use StopFilter.makeStopSet(Version, List, boolean) instead 
org.apache.lucene.analysis.StopFilter.makeStopSet(String...)
          use StopFilter.makeStopSet(Version, String...) instead 
org.apache.lucene.analysis.StopFilter.makeStopSet(String[], boolean)
          use StopFilter.makeStopSet(Version, String[], boolean) instead 
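All four makeStopSet overloads gain a leading Version parameter; nothing else changes. A sketch, assuming lucene-core 3.x on the classpath (the stop words are illustrative):

```java
import java.util.Set;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.util.Version;

public class MakeStopSetExample {
  public static void main(String[] args) {
    // Old: StopFilter.makeStopSet("a", "the")
    Set<?> stopWords = StopFilter.makeStopSet(Version.LUCENE_35, "a", "the");
    System.out.println(stopWords.size());
  }
}
```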
org.apache.lucene.analysis.CharTokenizer.normalize(char)
          use CharTokenizer.normalize(int) instead. This method will be removed in Lucene 4.0. 
org.apache.lucene.index.IndexWriter.optimize()
            
org.apache.lucene.index.IndexWriter.optimize(boolean)
            
org.apache.lucene.index.IndexWriter.optimize(int)
            
org.apache.lucene.index.SegmentInfos.range(int, int)
          use asList().subList(first, last) instead. 
org.apache.lucene.store.DataInput.readChars(char[], int, int)
          please use readString or readBytes instead, and construct the String from those UTF-8 bytes 
org.apache.lucene.index.IndexReader.reopen()
          Use IndexReader#openIfChanged(IndexReader) instead 
org.apache.lucene.index.IndexReader.reopen(boolean)
          Use IndexReader#openIfChanged(IndexReader,boolean) instead 
org.apache.lucene.index.IndexReader.reopen(IndexCommit)
          Use IndexReader#openIfChanged(IndexReader,IndexCommit) instead 
org.apache.lucene.index.IndexReader.reopen(IndexWriter, boolean)
          Use IndexReader#openIfChanged(IndexReader,IndexWriter,boolean) instead 
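The key behavioral difference: reopen() returned the same reader when nothing had changed, while openIfChanged() returns null in that case, so callers must check. A sketch assuming lucene-core 3.5+ on the classpath (the RAMDirectory setup is illustrative):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class OpenIfChangedExample {
  // Refreshes a reader after a commit and returns the refreshed numDocs().
  static int refreshAndCount() throws Exception {
    Directory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(
        Version.LUCENE_35, new WhitespaceAnalyzer(Version.LUCENE_35)));
    writer.commit();                       // create the (empty) index
    IndexReader reader = IndexReader.open(dir);

    writer.addDocument(new Document());
    writer.commit();

    // Old: IndexReader newReader = reader.reopen();
    IndexReader newReader = IndexReader.openIfChanged(reader);
    if (newReader != null) {               // null means the index did not change
      reader.close();
      reader = newReader;
    }
    int n = reader.numDocs();
    reader.close();
    writer.close();
    return n;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(refreshAndCount());
  }
}
```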
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.resizeTermBuffer(int)
           
org.apache.lucene.index.IndexWriter.setDefaultWriteLockTimeout(long)
          use IndexWriterConfig.setDefaultWriteLockTimeout(long) instead 
org.apache.lucene.index.IndexWriter.setMaxBufferedDeleteTerms(int)
          use IndexWriterConfig.setMaxBufferedDeleteTerms(int) instead. 
org.apache.lucene.index.IndexWriter.setMaxBufferedDocs(int)
          use IndexWriterConfig.setMaxBufferedDocs(int) instead. 
org.apache.lucene.index.IndexWriter.setMaxFieldLength(int)
          use LimitTokenCountAnalyzer instead. Note that the behavior changed slightly: the analyzer limits the number of tokens per token stream created, while this setting limits the total number of tokens to index. This only matters if you index many multi-valued fields, though. 
org.apache.lucene.index.IndexWriter.setMaxMergeDocs(int)
          use LogMergePolicy.setMaxMergeDocs(int) directly. 
org.apache.lucene.index.LogByteSizeMergePolicy.setMaxMergeMBForOptimize(double)
          Renamed to LogByteSizeMergePolicy.setMaxMergeMBForForcedMerge(double) 
org.apache.lucene.index.IndexWriter.setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer)
          use IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer) instead. 
org.apache.lucene.index.IndexWriter.setMergeFactor(int)
          use LogMergePolicy.setMergeFactor(int) directly. 
org.apache.lucene.index.IndexWriter.setMergePolicy(MergePolicy)
          use IndexWriterConfig.setMergePolicy(MergePolicy) instead. 
org.apache.lucene.index.IndexWriter.setMergeScheduler(MergeScheduler)
          use IndexWriterConfig.setMergeScheduler(MergeScheduler) instead 
org.apache.lucene.index.IndexReader.setNorm(int, String, float)
          Use IndexReader.setNorm(int, String, byte) instead, encoding the float to byte with your Similarity's Similarity.encodeNormValue(float). This method will be removed in Lucene 4.0 
org.apache.lucene.document.AbstractField.setOmitTermFreqAndPositions(boolean)
          use AbstractField.setIndexOptions(FieldInfo.IndexOptions) instead. 
org.apache.lucene.index.IndexWriter.setRAMBufferSizeMB(double)
          use IndexWriterConfig.setRAMBufferSizeMB(double) instead. 
org.apache.lucene.index.IndexWriter.setReaderTermsIndexDivisor(int)
          use IndexWriterConfig.setReaderTermsIndexDivisor(int) instead. 
org.apache.lucene.analysis.standard.ClassicTokenizer.setReplaceInvalidAcronym(boolean)
          Remove in 3.X and make true the only valid value. See https://issues.apache.org/jira/browse/LUCENE-1068 
org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean)
          Remove in 3.X and make true the only valid value. See https://issues.apache.org/jira/browse/LUCENE-1068 
org.apache.lucene.index.IndexWriter.setSimilarity(Similarity)
          use IndexWriterConfig.setSimilarity(Similarity) instead 
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(char[], int, int)
           
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String)
           
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String, int, int)
           
org.apache.lucene.index.IndexWriter.setTermIndexInterval(int)
          use IndexWriterConfig.setTermIndexInterval(int) 
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermLength(int)
           
org.apache.lucene.index.ConcurrentMergeScheduler.setTestMode()
          this test mode code will be removed in a future release 
org.apache.lucene.index.IndexWriter.setUseCompoundFile(boolean)
          use LogMergePolicy.setUseCompoundFile(boolean). 
org.apache.lucene.index.IndexWriter.setWriteLockTimeout(long)
          use IndexWriterConfig.setWriteLockTimeout(long) instead 
org.apache.lucene.store.IndexInput.skipChars(int)
          this method operates on old "modified utf8" encoded strings 
org.apache.lucene.analysis.CharArraySet.stringIterator()
          Use CharArraySet.iterator(), which returns char[] instances. 
org.apache.lucene.store.Directory.sync(String)
          use Directory.sync(Collection) instead. For easy migration you can change your code to call sync(Collections.singleton(name)) 
org.apache.lucene.store.FileSwitchDirectory.sync(String)
           
org.apache.lucene.store.FSDirectory.sync(String)
           
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.term()
           
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termBuffer()
           
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termLength()
           
org.apache.lucene.store.Directory.touchFile(String)
          Lucene never uses this API; it will be removed in 4.0. 
org.apache.lucene.store.FileSwitchDirectory.touchFile(String)
           
org.apache.lucene.store.RAMDirectory.touchFile(String)
          Lucene never uses this API; it will be removed in 4.0. 
org.apache.lucene.store.FSDirectory.touchFile(String)
          Lucene never uses this API; it will be removed in 4.0. 
org.apache.lucene.store.NRTCachingDirectory.touchFile(String)
           
org.apache.lucene.search.Query.weight(Searcher)
          never ever use this method in Weight implementations. Subclasses of Query should use Query.createWeight(org.apache.lucene.search.Searcher), instead. 
org.apache.lucene.store.DataOutput.writeChars(char[], int, int)
          please pre-convert to UTF-8 bytes instead, or use DataOutput.writeString(java.lang.String) 
org.apache.lucene.store.DataOutput.writeChars(String, int, int)
          please pre-convert to UTF-8 bytes instead, or use DataOutput.writeString(java.lang.String) 
 

Deprecated Constructors
org.apache.lucene.util.ArrayUtil()
          This constructor was not intended to be public and should not be used. This class contains solely static utility methods. It will be made private in Lucene 4.0 
org.apache.lucene.store.BufferedIndexInput()
          please pass resourceDesc 
org.apache.lucene.store.BufferedIndexInput(int)
          please pass resourceDesc 
org.apache.lucene.analysis.CharArraySet(Collection, boolean)
          use CharArraySet.CharArraySet(Version, Collection, boolean) instead 
org.apache.lucene.analysis.CharArraySet(int, boolean)
          use CharArraySet.CharArraySet(Version, int, boolean) instead 
org.apache.lucene.analysis.CharTokenizer(AttributeSource.AttributeFactory, Reader)
          use CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.CharTokenizer(AttributeSource, Reader)
          use CharTokenizer.CharTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.CharTokenizer(Reader)
          use CharTokenizer.CharTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.document.Field(String, byte[], Field.Store)
          Use instead 
org.apache.lucene.document.Field(String, byte[], int, int, Field.Store)
          Use instead 
org.apache.lucene.store.IndexInput()
          please pass resourceDescription 
org.apache.lucene.search.IndexSearcher(Directory)
          use IndexSearcher.IndexSearcher(IndexReader) instead. 
org.apache.lucene.search.IndexSearcher(Directory, boolean)
          Use IndexSearcher.IndexSearcher(IndexReader) instead. 
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexDeletionPolicy, IndexWriter.MaxFieldLength)
          use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead 
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexWriter.MaxFieldLength)
          use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead 
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength)
          use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead 
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength, IndexCommit)
          use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead 
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength)
          use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead 
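All five constructor variants above collapse into IndexWriter(Directory, IndexWriterConfig); the analyzer, deletion policy, commit point, and the settings previously changed via IndexWriter setters move onto the config object. A sketch assuming lucene-core 3.1+ on the classpath (the directory, analyzer, and buffer size are illustrative):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class IndexWriterConfigExample {
  static int emptyIndexDocCount() throws Exception {
    Directory dir = new RAMDirectory();
    // Old: new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED)
    IndexWriterConfig conf = new IndexWriterConfig(
        Version.LUCENE_35, new WhitespaceAnalyzer(Version.LUCENE_35));
    conf.setRAMBufferSizeMB(32.0);   // replaces IndexWriter.setRAMBufferSizeMB(double)
    IndexWriter writer = new IndexWriter(dir, conf);
    int n = writer.numDocs();
    writer.close();
    return n;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(emptyIndexDocCount());
  }
}
```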
org.apache.lucene.analysis.LengthFilter(TokenStream, int, int)
          Use LengthFilter.LengthFilter(boolean, TokenStream, int, int) instead. 
org.apache.lucene.analysis.LetterTokenizer(AttributeSource.AttributeFactory, Reader)
          use LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.LetterTokenizer(AttributeSource, Reader)
          use LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.LetterTokenizer(Reader)
          use LetterTokenizer.LetterTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.LowerCaseFilter(TokenStream)
          Use LowerCaseFilter.LowerCaseFilter(Version, TokenStream) instead. 
org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource.AttributeFactory, Reader)
          use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource, Reader)
          use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.LowerCaseTokenizer(Reader)
          use LowerCaseTokenizer.LowerCaseTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.store.NoLockFactory()
          This constructor was not intended to be public and should not be used. It will be made private in Lucene 4.0 
org.apache.lucene.store.RAMInputStream(RAMFile)
           
org.apache.lucene.search.Scorer(Similarity)
          Use Scorer.Scorer(Weight) instead. 
org.apache.lucene.search.Scorer(Similarity, Weight)
          Use Scorer.Scorer(Weight) instead. 
org.apache.lucene.analysis.SimpleAnalyzer()
          use SimpleAnalyzer.SimpleAnalyzer(Version) instead 
org.apache.lucene.store.SimpleFSDirectory.SimpleFSIndexInput(File, int, int)
          please pass resourceDesc 
org.apache.lucene.analysis.standard.StandardFilter(TokenStream)
          Use StandardFilter.StandardFilter(Version, TokenStream) instead. 
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set)
          use StopFilter.StopFilter(Version, TokenStream, Set) instead 
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set, boolean)
          use StopFilter.StopFilter(Version, TokenStream, Set, boolean) instead 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(AttributeSource.AttributeFactory, Reader)
          use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(AttributeSource, Reader)
          use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, AttributeSource, Reader) instead. 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(InputStream)
          use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, Reader) instead. 
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(Reader)
          use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, Reader) instead. 
org.apache.lucene.analysis.WhitespaceAnalyzer()
          use WhitespaceAnalyzer.WhitespaceAnalyzer(Version) instead 
org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource.AttributeFactory, Reader)
          use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource, Reader)
          use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. 
org.apache.lucene.analysis.WhitespaceTokenizer(Reader)
          use WhitespaceTokenizer.WhitespaceTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. 
 

Deprecated Enum Constants
org.apache.lucene.util.Version.LUCENE_20
          (3.1) Use latest 
org.apache.lucene.util.Version.LUCENE_21
          (3.1) Use latest 
org.apache.lucene.util.Version.LUCENE_22
          (3.1) Use latest 
org.apache.lucene.util.Version.LUCENE_23
          (3.1) Use latest 
org.apache.lucene.util.Version.LUCENE_24
          (3.1) Use latest 
org.apache.lucene.util.Version.LUCENE_29
          (3.1) Use latest 
org.apache.lucene.util.Version.LUCENE_CURRENT
          Use an actual version instead. 
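The intent behind deprecating LUCENE_CURRENT is stability: a concrete constant pins analyzer behavior across upgrades, while LUCENE_CURRENT silently changes meaning with each release. A sketch, assuming lucene-core 3.x on the classpath (the analyzer choice is illustrative):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.util.Version;

public class VersionExample {
  public static void main(String[] args) {
    // Old: new WhitespaceAnalyzer(Version.LUCENE_CURRENT) -- behavior drifts on upgrade
    WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer(Version.LUCENE_35);
    System.out.println(Version.LUCENE_35.onOrAfter(Version.LUCENE_31));
  }
}
```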
 



Copyright © 2000-2011 Apache Software Foundation. All Rights Reserved.