| Deprecated Methods | 
| org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(Directory...) use IndexWriter.addIndexes(Directory...) instead | 
| org.apache.lucene.store.Directory.copy(Directory, Directory, boolean) should be replaced with calls to Directory.copy(Directory, String, String) for every file that needs copying. You can use the following code:
 IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
 for (String file : src.listAll()) {
   if (filter.accept(null, file)) {
     src.copy(dest, file, file);
   }
 }
 | 
| org.apache.lucene.analysis.CharArraySet.copy(Set<?>) use CharArraySet.copy(Version, Set) instead. | 
| org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer) Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer, ByteBuffer) Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.search.Similarity.decodeNorm(byte) Use Similarity.decodeNormValue(byte) instead. | 
| org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer) Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer, CharBuffer) Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.search.Similarity.encodeNorm(float) Use Similarity.encodeNormValue(float) instead. | 
| org.apache.lucene.queryParser.CharStream.getColumn() | 
| org.apache.lucene.util.IndexableBinaryStringTools.getDecodedLength(CharBuffer) Use IndexableBinaryStringTools.getDecodedLength(char[], int, int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.index.IndexWriter.getDefaultWriteLockTimeout() use IndexWriterConfig.getDefaultWriteLockTimeout() instead | 
| org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsVersionDefault(Version) use StopFilter.StopFilter(Version, TokenStream, Set) instead | 
| org.apache.lucene.util.IndexableBinaryStringTools.getEncodedLength(ByteBuffer) Use IndexableBinaryStringTools.getEncodedLength(byte[], int, int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.queryParser.QueryParser.getFieldQuery(String, String) Use QueryParser.getFieldQuery(String, String, boolean) instead. | 
| org.apache.lucene.store.FSDirectory.getFile() Use FSDirectory.getDirectory() instead. | 
| org.apache.lucene.queryParser.CharStream.getLine() | 
| org.apache.lucene.index.IndexWriter.getMaxBufferedDeleteTerms() use IndexWriterConfig.getMaxBufferedDeleteTerms() instead | 
| org.apache.lucene.index.IndexWriter.getMaxBufferedDocs() use IndexWriterConfig.getMaxBufferedDocs() instead. | 
| org.apache.lucene.index.IndexWriter.getMaxFieldLength() use LimitTokenCountAnalyzer to limit the number of tokens. | 
| org.apache.lucene.index.IndexWriter.getMaxMergeDocs() use LogMergePolicy.getMaxMergeDocs() directly. | 
| org.apache.lucene.index.IndexWriter.getMergedSegmentWarmer() use IndexWriterConfig.getMergedSegmentWarmer() instead. | 
| org.apache.lucene.index.IndexWriter.getMergeFactor() use LogMergePolicy.getMergeFactor() directly. | 
| org.apache.lucene.index.IndexWriter.getMergePolicy() use IndexWriterConfig.getMergePolicy() instead | 
| org.apache.lucene.index.IndexWriter.getMergeScheduler() use IndexWriterConfig.getMergeScheduler() instead | 
| org.apache.lucene.search.Similarity.getNormDecoder() Use instance methods for encoding/decoding norm values to enable customization. | 
| org.apache.lucene.index.IndexWriter.getRAMBufferSizeMB() use IndexWriterConfig.getRAMBufferSizeMB() instead. | 
| org.apache.lucene.index.IndexWriter.getReader() Please use IndexReader.open(IndexWriter, boolean) instead. | 
| org.apache.lucene.index.IndexWriter.getReader(int) Please use IndexReader.open(IndexWriter, boolean) instead. Furthermore, this method cannot guarantee the reader (and its sub-readers) will be opened with the termInfosIndexDivisor setting, because some of them may have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int). You should set the requested termInfosIndexDivisor through IndexWriterConfig.setReaderTermsIndexDivisor(int) and use IndexWriter.getReader(). | 
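The near-real-time reader migration in the two getReader entries above can be sketched as follows (a sketch assuming an already-open IndexWriter named writer; the second argument is applyAllDeletes):

```
// Old (deprecated):
//   IndexReader reader = writer.getReader();
// New: open the NRT reader through IndexReader directly.
IndexReader reader = IndexReader.open(writer, true); // true = apply all deletes
```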
| org.apache.lucene.index.IndexWriter.getReaderTermsIndexDivisor() use IndexWriterConfig.getReaderTermsIndexDivisor() instead. | 
| org.apache.lucene.index.IndexWriter.getSimilarity() use IndexWriterConfig.getSimilarity() instead | 
| org.apache.lucene.search.Scorer.getSimilarity() Store any Similarity you might need privately in your implementation instead. | 
| org.apache.lucene.search.Query.getSimilarity(Searcher) Instead of using "runtime" subclassing/delegation, subclass the Weight instead. | 
| org.apache.lucene.index.IndexWriter.getTermIndexInterval() use IndexWriterConfig.getTermIndexInterval() | 
| org.apache.lucene.index.IndexWriter.getUseCompoundFile() use LogMergePolicy.getUseCompoundFile() | 
| org.apache.lucene.index.IndexWriter.getWriteLockTimeout() use IndexWriterConfig.getWriteLockTimeout() | 
| org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym() Remove in 3.X and make true the only valid value | 
| org.apache.lucene.analysis.standard.ClassicTokenizer.isReplaceInvalidAcronym() Remove in 3.X and make true the only valid value | 
| org.apache.lucene.analysis.CharTokenizer.isTokenChar(char) use CharTokenizer.isTokenChar(int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.search.Similarity.lengthNorm(String, int) Please override computeNorm instead | 
| org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>) use StopFilter.makeStopSet(Version, List) instead | 
| org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>, boolean) use StopFilter.makeStopSet(Version, List, boolean) instead | 
| org.apache.lucene.analysis.StopFilter.makeStopSet(String...) use StopFilter.makeStopSet(Version, String...) instead | 
| org.apache.lucene.analysis.StopFilter.makeStopSet(String[], boolean) use StopFilter.makeStopSet(Version, String[], boolean) instead | 
| org.apache.lucene.analysis.CharTokenizer.normalize(char) use CharTokenizer.normalize(int) instead. This method will be removed in Lucene 4.0. | 
| org.apache.lucene.store.IndexInput.readChars(char[], int, int) please use readString or readBytes instead, and construct the string from those UTF-8 bytes | 
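The replacement pattern the readChars entry describes — read the raw bytes, then build the String in one step — can be sketched in plain Java (the byte array stands in for what readBytes would return, and the helper name decodeUtf8 is made up for illustration):

```java
import java.nio.charset.StandardCharsets;

public class ReadCharsMigration {
    // Instead of pulling characters out one at a time with the deprecated
    // readChars(char[], int, int), read the UTF-8 bytes (e.g. via readBytes)
    // and decode them in a single step, as readString does internally.
    static String decodeUtf8(byte[] utf8, int offset, int length) {
        return new String(utf8, offset, length, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = "søme tëxt".getBytes(StandardCharsets.UTF_8);
        System.out.println(decodeUtf8(bytes, 0, bytes.length));
    }
}
```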
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.resizeTermBuffer(int) | 
| org.apache.lucene.index.IndexWriter.setDefaultWriteLockTimeout(long) use IndexWriterConfig.setDefaultWriteLockTimeout(long) instead | 
| org.apache.lucene.index.IndexWriter.setMaxBufferedDeleteTerms(int) use IndexWriterConfig.setMaxBufferedDeleteTerms(int) instead. | 
| org.apache.lucene.index.IndexWriter.setMaxBufferedDocs(int) use IndexWriterConfig.setMaxBufferedDocs(int) instead. | 
| org.apache.lucene.index.IndexWriter.setMaxFieldLength(int) use LimitTokenCountAnalyzer instead. Note that the behavior changed slightly: the analyzer limits the number of tokens per token stream created, while this setting limits the total number of tokens to index. This only matters if you index many multi-valued fields. | 
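A minimal sketch of that replacement, assuming a WhitespaceAnalyzer underneath, a 10,000-token limit, and an existing Directory named directory (all three are illustrative choices, not prescribed by the API):

```
// Old (deprecated): writer.setMaxFieldLength(10000);
// New: wrap whatever analyzer you index with in a LimitTokenCountAnalyzer.
Analyzer limited = new LimitTokenCountAnalyzer(
    new WhitespaceAnalyzer(Version.LUCENE_31), // the wrapped analyzer is your choice
    10000);                                    // max tokens per token stream
IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_31, limited);
IndexWriter writer = new IndexWriter(directory, conf);
```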
| org.apache.lucene.index.IndexWriter.setMaxMergeDocs(int) use LogMergePolicy.setMaxMergeDocs(int) directly. | 
| org.apache.lucene.index.IndexWriter.setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer) use IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer) instead. | 
| org.apache.lucene.index.IndexWriter.setMergeFactor(int) use LogMergePolicy.setMergeFactor(int) directly. | 
| org.apache.lucene.index.IndexWriter.setMergePolicy(MergePolicy) use IndexWriterConfig.setMergePolicy(MergePolicy) instead. | 
| org.apache.lucene.index.IndexWriter.setMergeScheduler(MergeScheduler) use IndexWriterConfig.setMergeScheduler(MergeScheduler) instead | 
| org.apache.lucene.index.IndexReader.setNorm(int, String, byte) instead, encoding the float to byte with your Similarity's Similarity.encodeNormValue(float). This method will be removed in Lucene 4.0. | 
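The encoding step can be sketched as follows (docId, the "title" field name, and the 2.0f value are placeholders; the sketch assumes the default Similarity is the one in effect):

```
// Old (deprecated): reader.setNorm(docId, "title", 2.0f);
// New: run the float through your Similarity's encoder first.
Similarity sim = Similarity.getDefault();
reader.setNorm(docId, "title", sim.encodeNormValue(2.0f));
```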
| org.apache.lucene.index.IndexWriter.setRAMBufferSizeMB(double) use IndexWriterConfig.setRAMBufferSizeMB(double) instead. | 
| org.apache.lucene.index.IndexWriter.setReaderTermsIndexDivisor(int) use IndexWriterConfig.setReaderTermsIndexDivisor(int) instead. | 
| org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean) Remove in 3.X and make true the only valid value. See https://issues.apache.org/jira/browse/LUCENE-1068 | 
| org.apache.lucene.analysis.standard.ClassicTokenizer.setReplaceInvalidAcronym(boolean) Remove in 3.X and make true the only valid value. See https://issues.apache.org/jira/browse/LUCENE-1068 | 
| org.apache.lucene.index.IndexWriter.setSimilarity(Similarity) use IndexWriterConfig.setSimilarity(Similarity) instead | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(char[], int, int) | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String) | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String, int, int) | 
| org.apache.lucene.index.IndexWriter.setTermIndexInterval(int) use IndexWriterConfig.setTermIndexInterval(int) | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermLength(int) | 
| org.apache.lucene.index.ConcurrentMergeScheduler.setTestMode() remove all this test mode code in Lucene 3.2! | 
| org.apache.lucene.index.IndexWriter.setUseCompoundFile(boolean) use LogMergePolicy.setUseCompoundFile(boolean). | 
| org.apache.lucene.index.IndexWriter.setWriteLockTimeout(long) use IndexWriterConfig.setWriteLockTimeout(long) instead | 
| org.apache.lucene.store.IndexInput.skipChars(int) this method operates on old "modified UTF-8" encoded strings | 
| org.apache.lucene.analysis.CharArraySet.stringIterator() Use CharArraySet.iterator(), which returns char[] instances. | 
| org.apache.lucene.store.FSDirectory.sync(String) | 
| org.apache.lucene.store.FileSwitchDirectory.sync(String) | 
| org.apache.lucene.store.Directory.sync(String) use Directory.sync(Collection) instead. For easy migration you can change your code to call sync(Collections.singleton(name)) | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.term() | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termBuffer() | 
| org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termLength() | 
| org.apache.lucene.store.IndexOutput.writeChars(char[], int, int) please pre-convert to UTF-8 bytes instead, or use IndexOutput.writeString(java.lang.String) | 
| org.apache.lucene.store.IndexOutput.writeChars(String, int, int) please pre-convert to UTF-8 bytes instead, or use IndexOutput.writeString(java.lang.String) | 
 
| Deprecated Constructors | 
| org.apache.lucene.util.ArrayUtil() This constructor was not intended to be public and should not be used. This class contains solely static utility methods. It will be made private in Lucene 4.0. | 
| org.apache.lucene.analysis.CharArraySet(Collection<?>, boolean) use CharArraySet.CharArraySet(Version, Collection, boolean) instead | 
| org.apache.lucene.analysis.CharArraySet(int, boolean) use CharArraySet.CharArraySet(Version, int, boolean) instead | 
| org.apache.lucene.analysis.CharTokenizer(AttributeSource.AttributeFactory, Reader) use CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.CharTokenizer(AttributeSource, Reader) use CharTokenizer.CharTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.CharTokenizer(Reader) use CharTokenizer.CharTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.document.Field(String, byte[], Field.Store) Use Field.Field(String, byte[]) instead | 
| org.apache.lucene.document.Field(String, byte[], int, int, Field.Store) Use Field.Field(String, byte[], int, int) instead | 
| org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexDeletionPolicy, IndexWriter.MaxFieldLength) use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead | 
| org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexWriter.MaxFieldLength) use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead | 
| org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength) use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead | 
| org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength, IndexCommit) use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead | 
| org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength) use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead | 
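All five IndexWriter constructor migrations above follow the same pattern; a sketch (dir is an existing Directory, and the StandardAnalyzer and Version constant are illustrative choices):

```
// Old (deprecated):
//   IndexWriter writer = new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);
// New: every setting moves into IndexWriterConfig.
IndexWriterConfig conf = new IndexWriterConfig(
    Version.LUCENE_31, new StandardAnalyzer(Version.LUCENE_31));
conf.setOpenMode(IndexWriterConfig.OpenMode.CREATE); // replaces the old boolean "create" flag
IndexWriter writer = new IndexWriter(dir, conf);
```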
| org.apache.lucene.analysis.LengthFilter(TokenStream, int, int) Use LengthFilter.LengthFilter(boolean, TokenStream, int, int) instead. | 
| org.apache.lucene.analysis.LetterTokenizer(AttributeSource.AttributeFactory, Reader) use LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.LetterTokenizer(AttributeSource, Reader) use LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.LetterTokenizer(Reader) use LetterTokenizer.LetterTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.LowerCaseFilter(TokenStream) Use LowerCaseFilter.LowerCaseFilter(Version, TokenStream) instead. | 
| org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource.AttributeFactory, Reader) use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource, Reader) use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.LowerCaseTokenizer(Reader) use LowerCaseTokenizer.LowerCaseTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.store.NoLockFactory() This constructor was not intended to be public and should not be used. It will be made private in Lucene 4.0. | 
| org.apache.lucene.search.Scorer(Similarity) Use Scorer.Scorer(Weight) instead. | 
| org.apache.lucene.search.Scorer(Similarity, Weight) Use Scorer.Scorer(Weight) instead. | 
| org.apache.lucene.analysis.SimpleAnalyzer() use SimpleAnalyzer.SimpleAnalyzer(Version) instead | 
| org.apache.lucene.analysis.standard.StandardFilter(TokenStream) Use StandardFilter.StandardFilter(Version, TokenStream) instead. | 
| org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set<?>) use StopFilter.StopFilter(Version, TokenStream, Set) instead | 
| org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set<?>, boolean) use StopFilter.StopFilter(Version, TokenStream, Set, boolean) instead | 
| org.apache.lucene.analysis.WhitespaceAnalyzer() use WhitespaceAnalyzer.WhitespaceAnalyzer(Version) instead | 
| org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource.AttributeFactory, Reader) use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource, Reader) use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0. | 
| org.apache.lucene.analysis.WhitespaceTokenizer(Reader) use WhitespaceTokenizer.WhitespaceTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0. |