org.apache.lucene.index.IndexReader.acquireWriteLock()
Write support will be removed in Lucene 4.0.
|
org.apache.lucene.analysis.PerFieldAnalyzerWrapper.addAnalyzer(String, Analyzer)
Changing the Analyzer for a field after instantiation prevents
reusability. Analyzers for fields should be set during construction.
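For example, a minimal sketch of setting per-field analyzers at construction time
(the field name and analyzer choices are illustrative):

  Map<String, Analyzer> fieldAnalyzers = new HashMap<String, Analyzer>();
  fieldAnalyzers.put("title", new KeywordAnalyzer());  // illustrative per-field override
  Analyzer wrapper = new PerFieldAnalyzerWrapper(
      new StandardAnalyzer(Version.LUCENE_35), fieldAnalyzers);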
|
org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(Directory...)
|
org.apache.lucene.queryParser.core.processors.QueryNodeProcessorPipeline.addProcessor(QueryNodeProcessor)
|
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader)
|
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, float)
|
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, int)
|
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, String, float)
|
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, String, int)
|
org.apache.lucene.util._TestUtil.arrayToString(int[])
In 3.0 we can use Arrays.toString instead.
|
org.apache.lucene.util._TestUtil.arrayToString(Object[])
In 3.0 we can use Arrays.toString instead.
|
org.apache.lucene.util.LuceneTestCase.assertEquals(double, double) |
org.apache.lucene.util.LuceneTestCase.assertEquals(float, float) |
org.apache.lucene.util.LuceneTestCase.assertEquals(String, double, double) |
org.apache.lucene.util.LuceneTestCase.assertEquals(String, float, float) |
org.apache.lucene.search.MultiTermQueryWrapperFilter.clearTotalNumberOfTerms()
Don't use this method; it is not thread-safe and useless.
|
org.apache.lucene.search.MultiTermQuery.clearTotalNumberOfTerms()
Don't use this method; it is not thread-safe and useless.
|
org.apache.lucene.index.SegmentReader.clone(boolean) |
org.apache.lucene.index.ParallelReader.clone(boolean)
|
org.apache.lucene.index.MultiReader.clone(boolean)
|
org.apache.lucene.index.IndexReader.clone(boolean)
|
org.apache.lucene.index.SegmentReader.cloneDeletedDocs(BitVector) |
org.apache.lucene.index.SegmentReader.cloneNormBytes(byte[]) |
org.apache.lucene.index.IndexReader.commit()
Write support will be removed in Lucene 4.0.
|
org.apache.lucene.index.IndexReader.commit(Map)
Write support will be removed in Lucene 4.0.
|
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.containsTag(CharSequence)
|
org.apache.lucene.queryParser.core.nodes.QueryNode.containsTag(CharSequence)
|
org.apache.lucene.store.Directory.copy(Directory, Directory, boolean)
should be replaced with calls to
Directory.copy(Directory, String, String) for every file that
needs copying. You can use the following code:

  IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
  for (String file : src.listAll()) {
    if (filter.accept(null, file)) {
      src.copy(dest, file, file);
    }
  }
|
org.apache.lucene.analysis.CharArraySet.copy(Set<?>)
|
org.apache.lucene.search.Searcher.createWeight(Query)
|
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer)
|
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer, ByteBuffer)
|
org.apache.lucene.search.Similarity.decodeNorm(byte)
|
org.apache.lucene.index.IndexReader.deleteDocument(int)
|
org.apache.lucene.index.IndexReader.deleteDocuments(Term)
|
org.apache.lucene.index.SegmentReader.doCommit(Map) |
org.apache.lucene.index.ParallelReader.doCommit(Map) |
org.apache.lucene.index.MultiReader.doCommit(Map) |
org.apache.lucene.index.IndexReader.doCommit(Map)
Write support will be removed in Lucene 4.0.
|
org.apache.lucene.index.FilterIndexReader.doCommit(Map) |
org.apache.lucene.index.SegmentReader.doDelete(int) |
org.apache.lucene.index.ParallelReader.doDelete(int) |
org.apache.lucene.index.MultiReader.doDelete(int) |
org.apache.lucene.index.IndexReader.doDelete(int)
|
org.apache.lucene.index.FilterIndexReader.doDelete(int) |
org.apache.lucene.index.SegmentReader.doOpenIfChanged(boolean) |
org.apache.lucene.index.ParallelReader.doOpenIfChanged(boolean)
|
org.apache.lucene.index.MultiReader.doOpenIfChanged(boolean)
|
org.apache.lucene.index.IndexReader.doOpenIfChanged(boolean)
|
org.apache.lucene.index.SegmentReader.doSetNorm(int, String, byte) |
org.apache.lucene.index.ParallelReader.doSetNorm(int, String, byte) |
org.apache.lucene.index.MultiReader.doSetNorm(int, String, byte) |
org.apache.lucene.index.IndexReader.doSetNorm(int, String, byte)
Write support will be removed in Lucene 4.0.
There will be no replacement for this method.
|
org.apache.lucene.index.FilterIndexReader.doSetNorm(int, String, byte) |
org.apache.lucene.index.SegmentReader.doUndeleteAll() |
org.apache.lucene.index.ParallelReader.doUndeleteAll() |
org.apache.lucene.index.MultiReader.doUndeleteAll() |
org.apache.lucene.index.IndexReader.doUndeleteAll()
Write support will be removed in Lucene 4.0.
There will be no replacement for this method.
|
org.apache.lucene.index.FilterIndexReader.doUndeleteAll() |
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer)
|
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer, CharBuffer)
|
org.apache.lucene.search.Similarity.encodeNorm(float)
|
org.tartarus.snowball.SnowballProgram.eq_s_b(int, String)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.tartarus.snowball.SnowballProgram.eq_s(int, String)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.tartarus.snowball.SnowballProgram.eq_v_b(StringBuilder)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.tartarus.snowball.SnowballProgram.eq_v(StringBuilder)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.apache.lucene.util.RamUsageEstimator.estimateRamUsage(Object)
|
org.apache.lucene.index.IndexWriter.expungeDeletes() |
org.apache.lucene.index.IndexWriter.expungeDeletes(boolean) |
org.apache.lucene.store.Directory.fileModified(String) |
org.apache.lucene.search.ChainedFilter.finalResult(OpenBitSetDISI, int)
Either use CachingWrapperFilter, or
switch to a different DocIdSet implementation yourself.
This method will be removed in Lucene 4.0.
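A minimal sketch of the first alternative (the wrapped filter is illustrative):

  Filter slow = new QueryWrapperFilter(new TermQuery(new Term("field", "value")));
  Filter cached = new CachingWrapperFilter(slow);  // caches the DocIdSet per reader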
|
org.apache.lucene.index.IndexReader.flush()
Write support will be removed in Lucene 4.0.
|
org.apache.lucene.index.IndexReader.flush(Map)
Write support will be removed in Lucene 4.0.
|
org.apache.lucene.queryParser.CharStream.getColumn() |
org.apache.lucene.benchmark.byTask.feeds.demohtml.SimpleCharStream.getColumn() |
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getColumn() |
org.apache.lucene.queryParser.surround.parser.CharStream.getColumn() |
org.apache.lucene.index.IndexReader.getCommitUserData()
|
org.apache.lucene.index.IndexReader.getCommitUserData(Directory)
|
org.apache.lucene.index.IndexReader.getCurrentVersion(Directory)
|
org.apache.lucene.util.IndexableBinaryStringTools.getDecodedLength(CharBuffer)
|
org.apache.lucene.index.IndexWriter.getDefaultWriteLockTimeout()
|
org.apache.lucene.index.PayloadProcessorProvider.getDirProcessor(Directory)
|
org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsVersionDefault(Version)
|
org.apache.lucene.util.IndexableBinaryStringTools.getEncodedLength(ByteBuffer)
|
org.apache.lucene.document.Document.getField(String)
|
org.apache.lucene.queryParser.core.config.QueryConfigHandler.getFieldConfig(CharSequence)
|
org.apache.lucene.queryParser.core.config.FieldConfig.getFieldName()
|
org.apache.lucene.queryParser.QueryParser.getFieldQuery(String, String)
|
org.apache.lucene.queryParser.standard.QueryParserWrapper.getFieldQuery(String, String)
|
org.apache.lucene.document.Document.getFields(String)
|
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFieldValues(IndexReader, int, String) |
org.apache.lucene.store.FSDirectory.getFile()
|
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFragmentSource(StringBuilder, int[], String[], int, int) |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter.getHyphenationTree(Reader)
|
org.apache.lucene.queryParser.CharStream.getLine() |
org.apache.lucene.benchmark.byTask.feeds.demohtml.SimpleCharStream.getLine() |
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getLine() |
org.apache.lucene.queryParser.surround.parser.CharStream.getLine() |
org.apache.lucene.index.IndexWriter.getMaxBufferedDeleteTerms()
|
org.apache.lucene.index.IndexWriter.getMaxBufferedDocs()
|
org.apache.lucene.index.IndexWriter.getMaxFieldLength()
|
org.apache.lucene.index.IndexWriter.getMaxMergeDocs()
|
org.apache.lucene.index.LogByteSizeMergePolicy.getMaxMergeMBForOptimize()
|
org.apache.lucene.index.IndexWriter.getMergedSegmentWarmer()
|
org.apache.lucene.index.IndexWriter.getMergeFactor()
|
org.apache.lucene.index.IndexWriter.getMergePolicy()
|
org.apache.lucene.index.IndexWriter.getMergeScheduler()
|
org.apache.lucene.search.Similarity.getNormDecoder()
Use instance methods for encoding/decoding norm values to enable customization.
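For example, assuming the encodeNormValue/decodeNormValue instance methods
that accompany this deprecation:

  Similarity sim = Similarity.getDefault();
  byte encoded = sim.encodeNormValue(0.5f);
  float decoded = sim.decodeNormValue(encoded);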
|
org.apache.lucene.document.AbstractField.getOmitTermFreqAndPositions()
|
org.apache.lucene.index.IndexWriter.getRAMBufferSizeMB()
|
org.apache.lucene.index.IndexWriter.getReader()
|
org.apache.lucene.index.IndexWriter.getReader(int)
|
org.apache.lucene.index.IndexWriter.getReaderTermsIndexDivisor()
|
org.apache.lucene.index.IndexWriter.getSimilarity()
|
org.apache.lucene.search.Scorer.getSimilarity()
Store any Similarity you might need privately in your implementation instead.
|
org.apache.lucene.search.Query.getSimilarity(Searcher)
Instead of using "runtime" subclassing/delegation, subclass the Weight instead.
|
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.getTag(CharSequence)
|
org.apache.lucene.queryParser.core.nodes.QueryNode.getTag(CharSequence)
|
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.getTags()
|
org.apache.lucene.queryParser.core.nodes.QueryNode.getTags()
|
org.apache.lucene.index.IndexWriter.getTermIndexInterval()
|
org.apache.lucene.index.IndexCommit.getTimestamp()
|
org.apache.lucene.search.MultiTermQueryWrapperFilter.getTotalNumberOfTerms()
Don't use this method; it is not thread-safe and useless.
|
org.apache.lucene.search.MultiTermQuery.getTotalNumberOfTerms()
Don't use this method; it is not thread-safe and useless.
|
org.apache.lucene.index.IndexWriter.getUseCompoundFile()
|
org.apache.lucene.index.IndexCommit.getVersion()
|
org.apache.lucene.index.IndexWriter.getWriteLockTimeout()
|
org.apache.lucene.search.MultiTermQuery.incTotalNumberOfTerms(int)
Don't use this method; it is not thread-safe and useless.
|
org.tartarus.snowball.SnowballProgram.insert(int, int, String)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.tartarus.snowball.SnowballProgram.insert(int, int, StringBuilder)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.apache.lucene.index.ParallelReader.isOptimized() |
org.apache.lucene.index.MultiReader.isOptimized() |
org.apache.lucene.index.IndexReader.isOptimized()
|
org.apache.lucene.index.FilterIndexReader.isOptimized() |
org.apache.lucene.store.instantiated.InstantiatedIndexReader.isOptimized() |
org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value
|
org.apache.lucene.analysis.standard.ClassicTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value
|
org.apache.lucene.analysis.CharTokenizer.isTokenChar(char)
|
org.apache.lucene.index.IndexReader.lastModified(Directory)
|
org.apache.lucene.search.Similarity.lengthNorm(String, int)
Please override computeNorm instead
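A minimal sketch of overriding computeNorm (the norm policy shown is illustrative):

  Similarity sim = new DefaultSimilarity() {
    @Override
    public float computeNorm(String field, FieldInvertState state) {
      return state.getBoost();  // e.g. ignore field length entirely
    }
  };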
|
org.apache.lucene.search.similar.MoreLikeThis.like(File)
|
org.apache.lucene.search.similar.MoreLikeThis.like(InputStream)
|
org.apache.lucene.search.similar.MoreLikeThis.like(Reader)
|
org.apache.lucene.search.similar.MoreLikeThis.like(URL)
|
org.apache.lucene.analysis.cz.CzechAnalyzer.loadStopWords(InputStream, String)
|
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase.makeDictionary(Version, String[])
Only available for backwards compatibility.
|
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(StringBuilder, int[], String[], FieldFragList.WeightedFragInfo) |
org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>)
|
org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>, boolean)
|
org.apache.lucene.analysis.StopFilter.makeStopSet(String...)
|
org.apache.lucene.analysis.StopFilter.makeStopSet(String[], boolean)
|
org.apache.lucene.search.SearcherManager.maybeReopen()
|
org.apache.lucene.analysis.CharTokenizer.normalize(char)
|
org.apache.lucene.index.IndexReader.open(Directory, boolean)
|
org.apache.lucene.index.IndexReader.open(Directory, IndexDeletionPolicy, boolean)
|
org.apache.lucene.index.IndexReader.open(Directory, IndexDeletionPolicy, boolean, int)
|
org.apache.lucene.index.IndexReader.open(IndexCommit, boolean)
|
org.apache.lucene.index.IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean)
|
org.apache.lucene.index.IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean, int)
|
org.apache.lucene.index.IndexReader.openIfChanged(IndexReader, boolean)
|
org.apache.lucene.index.IndexWriter.optimize() |
org.apache.lucene.index.IndexWriter.optimize(boolean) |
org.apache.lucene.index.IndexWriter.optimize(int) |
org.apache.lucene.index.SegmentInfos.range(int, int)
use asList().subList(first, last) instead.
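For example (variable names are illustrative):

  List<SegmentInfo> slice = infos.asList().subList(first, last);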
|
org.apache.lucene.store.DataInput.readChars(char[], int, int)
Please use readString or readBytes instead, and construct
the String from those UTF-8 bytes.
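A minimal sketch of the readBytes route (length handling is illustrative):

  byte[] utf8 = new byte[numBytes];
  in.readBytes(utf8, 0, numBytes);
  String s = new String(utf8, 0, numBytes, "UTF-8");  // may throw UnsupportedEncodingException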
|
org.apache.lucene.index.SegmentInfos.readCurrentVersion(Directory)
|
org.apache.lucene.index.IndexReader.reopen()
|
org.apache.lucene.index.IndexReader.reopen(boolean)
|
org.apache.lucene.index.IndexReader.reopen(IndexCommit)
|
org.apache.lucene.index.IndexReader.reopen(IndexWriter, boolean)
|
org.tartarus.snowball.SnowballProgram.replace_s(int, int, String)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.resizeTermBuffer(int) |
org.apache.lucene.search.similar.MoreLikeThis.retrieveInterestingTerms(Reader)
|
org.apache.lucene.search.similar.MoreLikeThis.retrieveTerms(Reader)
|
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[])
|
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int)
|
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int, int)
|
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(String)
|
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Set<?>)
|
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Version, Set<?>)
|
org.apache.lucene.queryParser.core.builders.QueryTreeBuilder.setBuilder(CharSequence, QueryBuilder)
|
org.apache.lucene.queryParser.standard.StandardQueryParser.setDateResolution(Map)
|
org.apache.lucene.queryParser.standard.StandardQueryParser.setDefaultOperator(DefaultOperatorAttribute.Operator) |
org.apache.lucene.queryParser.standard.StandardQueryParser.setDefaultPhraseSlop(int)
|
org.apache.lucene.index.IndexWriter.setDefaultWriteLockTimeout(long)
|
org.apache.lucene.analysis.de.GermanStemFilter.setExclusionSet(Set<?>)
|
org.apache.lucene.analysis.nl.DutchStemFilter.setExclusionTable(HashSet<?>)
|
org.apache.lucene.analysis.fr.FrenchStemFilter.setExclusionTable(Map<?, ?>)
|
org.apache.lucene.index.IndexWriter.setMaxBufferedDeleteTerms(int)
|
org.apache.lucene.index.IndexWriter.setMaxBufferedDocs(int)
|
org.apache.lucene.index.IndexWriter.setMaxFieldLength(int)
use LimitTokenCountAnalyzer instead. Note that the
behavior changed slightly: the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields.
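For example (the token limit shown is illustrative):

  Analyzer limited = new LimitTokenCountAnalyzer(
      new StandardAnalyzer(Version.LUCENE_35), 10000);
  IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_35, limited);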
|
org.apache.lucene.index.IndexWriter.setMaxMergeDocs(int)
|
org.apache.lucene.index.LogByteSizeMergePolicy.setMaxMergeMBForOptimize(double)
|
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setMaxShingleSize(int)
Setting maxShingleSize after Analyzer instantiation prevents reuse.
Configure maxShingleSize during construction.
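For example, assuming the (Analyzer, minShingleSize, maxShingleSize) constructor:

  Analyzer shingles = new ShingleAnalyzerWrapper(
      new StandardAnalyzer(Version.LUCENE_35), 2, 3);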
|
org.apache.lucene.index.IndexWriter.setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer)
|
org.apache.lucene.index.IndexWriter.setMergeFactor(int)
|
org.apache.lucene.index.IndexWriter.setMergePolicy(MergePolicy)
|
org.apache.lucene.index.IndexWriter.setMergeScheduler(MergeScheduler)
|
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setMinShingleSize(int)
Setting minShingleSize after Analyzer instantiation prevents reuse.
Configure minShingleSize during construction.
|
org.apache.lucene.index.IndexReader.setNorm(int, String, byte)
Write support will be removed in Lucene 4.0.
There will be no replacement for this method.
|
org.apache.lucene.index.IndexReader.setNorm(int, String, float)
Write support will be removed in Lucene 4.0.
There will be no replacement for this method.
|
org.apache.lucene.document.AbstractField.setOmitTermFreqAndPositions(boolean)
|
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setOutputUnigrams(boolean)
Setting outputUnigrams after Analyzer instantiation prevents reuse.
Configure outputUnigrams during construction.
|
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setOutputUnigramsIfNoShingles(boolean)
Setting outputUnigramsIfNoShingles after Analyzer instantiation prevents reuse.
Configure outputUnigramsIfNoShingles during construction.
|
org.apache.lucene.index.IndexWriter.setRAMBufferSizeMB(double)
|
org.apache.lucene.index.IndexWriter.setReaderTermsIndexDivisor(int)
|
org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value
See https://issues.apache.org/jira/browse/LUCENE-1068
|
org.apache.lucene.analysis.standard.ClassicTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value
See https://issues.apache.org/jira/browse/LUCENE-1068
|
org.apache.lucene.index.IndexWriter.setSimilarity(Similarity)
|
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemDictionary(File)
This prevents reuse of TokenStreams. If you wish to use a custom
stem dictionary, create your own Analyzer with StemmerOverrideFilter.
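A minimal sketch, assuming StemmerOverrideFilter's 3.x
(Version, TokenStream, Map) constructor; the dictionary entry and stemmer are illustrative:

  final Map<String, String> stemDict = new HashMap<String, String>();
  stemDict.put("fietsen", "fiets");  // illustrative override: plural -> stem
  Analyzer custom = new Analyzer() {
    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
      TokenStream ts = new StandardTokenizer(Version.LUCENE_35, reader);
      ts = new StemmerOverrideFilter(Version.LUCENE_35, ts, stemDict);
      return new SnowballFilter(ts, "Dutch");
    }
  };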
|
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(File)
|
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(File)
|
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(File)
|
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(File)
|
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(HashSet<?>)
|
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(Map<?, ?>)
|
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(Map<?, ?>)
|
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(Map<?, ?>)
|
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(String...)
|
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(String...)
|
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(String...)
|
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(String[])
|
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.setTag(CharSequence, Object)
|
org.apache.lucene.queryParser.core.nodes.QueryNode.setTag(CharSequence, Object)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(char[], int, int) |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String) |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String, int, int) |
org.apache.lucene.index.IndexWriter.setTermIndexInterval(int)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermLength(int) |
org.apache.lucene.index.ConcurrentMergeScheduler.setTestMode()
this test mode code will be removed in a future release
|
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setTokenSeparator(String)
Setting tokenSeparator after Analyzer instantiation prevents reuse.
Configure tokenSeparator during construction.
|
org.apache.lucene.index.IndexWriter.setUseCompoundFile(boolean)
|
org.apache.lucene.index.IndexWriter.setWriteLockTimeout(long)
|
org.apache.lucene.store.IndexInput.skipChars(int)
This method operates on old "modified UTF-8" encoded strings.
|
org.tartarus.snowball.SnowballProgram.slice_from(String)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.tartarus.snowball.SnowballProgram.slice_from(StringBuilder)
Kept for binary backwards compatibility; will be removed in Lucene 4.0.
|
org.apache.lucene.index.MultiPassIndexSplitter.split(IndexReader, Directory[], boolean)
|
org.apache.lucene.analysis.CharArraySet.stringIterator()
|
org.apache.lucene.search.spell.SpellChecker.suggestSimilar(String, int, IndexReader, String, boolean)
use suggestSimilar(String, int, IndexReader, String, SuggestMode)
- SuggestMode.SUGGEST_WHEN_NOT_IN_INDEX instead of morePopular=false
- SuggestMode.SUGGEST_MORE_POPULAR instead of morePopular=true
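For example (the checker, reader, and field name are illustrative):

  String[] hits = spellChecker.suggestSimilar("lucene", 5, reader, "contents",
      SuggestMode.SUGGEST_MORE_POPULAR);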
|
org.apache.lucene.search.spell.SpellChecker.suggestSimilar(String, int, IndexReader, String, boolean, float)
use suggestSimilar(String, int, IndexReader, String, SuggestMode, float)
- SuggestMode.SUGGEST_WHEN_NOT_IN_INDEX instead of morePopular=false
- SuggestMode.SUGGEST_MORE_POPULAR instead of morePopular=true
|
org.apache.lucene.store.MockDirectoryWrapper.sync(String) |
org.apache.lucene.store.FSDirectory.sync(String) |
org.apache.lucene.store.FileSwitchDirectory.sync(String) |
org.apache.lucene.store.Directory.sync(String)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.term() |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termBuffer() |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termLength() |
org.apache.lucene.store.MockDirectoryWrapper.touchFile(String) |
org.apache.lucene.store.RAMDirectory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0.
|
org.apache.lucene.store.NRTCachingDirectory.touchFile(String) |
org.apache.lucene.store.FSDirectory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0.
|
org.apache.lucene.store.FileSwitchDirectory.touchFile(String) |
org.apache.lucene.store.Directory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0.
|
org.apache.lucene.index.IndexReader.undeleteAll()
Write support will be removed in Lucene 4.0.
There will be no replacement for this method.
|
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.unsetTag(CharSequence)
|
org.apache.lucene.queryParser.core.nodes.QueryNode.unsetTag(CharSequence)
|
org.apache.lucene.search.Query.weight(Searcher)
|
org.apache.lucene.store.DataOutput.writeChars(char[], int, int)
|
org.apache.lucene.store.DataOutput.writeChars(String, int, int)
|