Deprecated Methods |
org.apache.lucene.index.IndexWriter.abort()
Please use IndexWriter.rollback() instead. |
org.apache.lucene.queryParser.QueryParser.addClause(Vector, int, int, Query)
use QueryParser.addClause(List, int, int, Query) instead. |
org.apache.lucene.queryParser.precedence.PrecedenceQueryParser.addClause(Vector, int, int, Query)
use PrecedenceQueryParser.addClause(List, int, int, Query) instead. |
org.apache.lucene.index.IndexWriter.addIndexes(Directory[])
Use IndexWriter.addIndexesNoOptimize(org.apache.lucene.store.Directory[]) instead,
then separately call IndexWriter.optimize() afterwards if
you need to. |
org.apache.lucene.util.PriorityQueue.adjustTop()
use PriorityQueue.updateTop(), which returns the new top element and
saves an additional call to PriorityQueue.top(). |
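The saving that updateTop() provides can be sketched with a toy heap. This is plain Java for illustration, not Lucene's PriorityQueue, and the updateTop(int) signature here is a simplification (Lucene's version takes no argument; you mutate the top in place first):

```java
// Toy fixed-size int min-heap with 1-based storage. Not Lucene's class;
// updateTop takes the replacement value directly for brevity.
public class IntHeap {
    private final int[] heap;
    private int size;

    public IntHeap(int capacity) { heap = new int[capacity + 1]; }

    public void add(int v) {
        heap[++size] = v;
        for (int i = size; i > 1 && heap[i] < heap[i / 2]; i /= 2) swap(i, i / 2);
    }

    public int top() { return heap[1]; }

    /** Replace the least element, restore heap order, and return the new top
     *  in one call -- the step that adjustTop() followed by top() needed two for. */
    public int updateTop(int newValue) {
        heap[1] = newValue;
        int i = 1;
        while (2 * i <= size) {
            int child = 2 * i;
            if (child + 1 <= size && heap[child + 1] < heap[child]) child++;
            if (heap[i] <= heap[child]) break;
            swap(i, child);
            i = child;
        }
        return heap[1];
    }

    private void swap(int a, int b) { int t = heap[a]; heap[a] = heap[b]; heap[b] = t; }
}
```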
org.apache.lucene.document.Field.binaryValue()
This method must allocate a new byte[] if
the AbstractField.getBinaryOffset() is non-zero
or AbstractField.getBinaryLength() is not the
full length of the byte[]. Please use AbstractField.getBinaryValue() instead, which simply
returns the byte[]. |
org.apache.lucene.search.RemoteCachingWrapperFilter.bits(IndexReader)
Use RemoteCachingWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.QueryWrapperFilter.bits(IndexReader)
Use QueryWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.MultiTermQueryWrapperFilter.bits(IndexReader)
Use MultiTermQueryWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.CachingWrapperFilter.bits(IndexReader)
Use CachingWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.CachingSpanFilter.bits(IndexReader)
Use CachingSpanFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.Filter.bits(IndexReader)
Use Filter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.index.CheckIndex.check(Directory, boolean)
Please instantiate a CheckIndex and then use CheckIndex.checkIndex() instead |
org.apache.lucene.index.CheckIndex.check(Directory, boolean, List)
Please instantiate a CheckIndex and then use CheckIndex.checkIndex(List) instead |
org.apache.lucene.search.function.CustomScoreQuery.customExplain(int, Explanation, Explanation)
Will be removed in Lucene 3.1.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader) and return a subclass
of CustomScoreProvider for the given IndexReader. |
org.apache.lucene.search.function.CustomScoreQuery.customExplain(int, Explanation, Explanation[])
Will be removed in Lucene 3.1.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader) and return a subclass
of CustomScoreProvider for the given IndexReader. |
org.apache.lucene.search.function.CustomScoreQuery.customScore(int, float, float)
Will be removed in Lucene 3.1.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader) and return a subclass
of CustomScoreProvider for the given IndexReader. |
org.apache.lucene.search.function.CustomScoreQuery.customScore(int, float, float[])
Will be removed in Lucene 3.1.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader) and return a subclass
of CustomScoreProvider for the given IndexReader. |
org.apache.lucene.search.ScoreCachingWrappingScorer.doc()
use ScoreCachingWrappingScorer.docID() instead. |
org.apache.lucene.search.FilteredDocIdSetIterator.doc()
use FilteredDocIdSetIterator.docID() instead. |
org.apache.lucene.search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.doc()
use FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.docID() instead. |
org.apache.lucene.search.ConstantScoreQuery.ConstantScorer.doc()
use ConstantScoreQuery.ConstantScorer.docID() instead. |
org.apache.lucene.search.DocIdSetIterator.doc()
use DocIdSetIterator.docID() instead. |
org.apache.lucene.search.spans.SpanScorer.doc()
use SpanScorer.docID() instead. |
org.apache.lucene.util.OpenBitSetIterator.doc()
use OpenBitSetIterator.docID() instead. |
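The renamed methods above all follow the iterator contract introduced with the new DocIdSetIterator API: docID() reports the current document, nextDoc() advances, advance(target) skips forward, and each returns a sentinel once the set is exhausted. A minimal sketch of that contract over a sorted int[] (plain Java modeled on the naming; this is not the Lucene class):

```java
// Illustrative doc-id iterator following the post-2.9 naming convention.
public class IntArrayDocIdIterator {
    public static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    private final int[] docs;   // sorted, distinct doc ids
    private int idx = -1;       // -1 = not yet positioned

    public IntArrayDocIdIterator(int[] sortedDocs) { this.docs = sortedDocs; }

    /** Current doc id; -1 before the first nextDoc(), NO_MORE_DOCS when done. */
    public int docID() {
        if (idx < 0) return -1;
        return idx < docs.length ? docs[idx] : NO_MORE_DOCS;
    }

    /** Advance to the next doc and return it. */
    public int nextDoc() {
        if (idx < docs.length) idx++;
        return docID();
    }

    /** Advance to the first doc >= target and return it. */
    public int advance(int target) {
        int doc;
        // NO_MORE_DOCS is Integer.MAX_VALUE, so the loop also stops at the end.
        while ((doc = nextDoc()) < target) { }
        return doc;
    }
}
```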
org.apache.lucene.index.IndexWriter.docCount()
Please use IndexWriter.maxDoc() (same as this
method) or IndexWriter.numDocs() (also takes deletions
into account), instead. |
org.apache.lucene.index.ParallelReader.doCommit()
|
org.apache.lucene.index.MultiReader.doCommit()
|
org.apache.lucene.index.FilterIndexReader.doCommit()
|
org.apache.lucene.index.IndexReader.doCommit()
Please implement instead. |
org.apache.lucene.index.SegmentReader.doCommit()
|
org.apache.lucene.search.Scorer.explain(int)
Please use IndexSearcher.explain(org.apache.lucene.search.Weight, int)
or Weight.explain(org.apache.lucene.index.IndexReader, int) instead. |
org.apache.lucene.document.Document.fields()
use Document.getFields() instead |
org.apache.lucene.search.BooleanFilter.finalResult(OpenBitSetDISI, int)
Either use CachingWrapperFilter, or
switch to a different DocIdSet implementation yourself. |
org.apache.lucene.misc.ChainedFilter.finalResult(OpenBitSetDISI, int)
Either use CachingWrapperFilter, or
switch to a different DocIdSet implementation yourself. |
org.apache.lucene.index.IndexWriter.flush()
please call IndexWriter.commit() instead |
org.apache.lucene.index.SegmentReader.get(SegmentInfo)
|
org.apache.lucene.search.BooleanQuery.getAllowDocsOutOfOrder()
this is not needed anymore, as
Weight.scoresDocsOutOfOrder() is used. |
org.apache.lucene.search.FieldCache.getAuto(IndexReader, String)
Please specify the exact type instead.
In particular, guessing does not work with the new
NumericField type. |
org.apache.lucene.search.highlight.Highlighter.getBestFragments(Analyzer, String, int)
This method incorrectly hardcodes the choice of fieldname. Use the
method of the same name that takes a fieldname. |
org.apache.lucene.search.SpanFilterResult.getBits()
Use SpanFilterResult.getDocIdSet() |
org.apache.lucene.queryParser.QueryParser.getBooleanQuery(Vector)
use QueryParser.getBooleanQuery(List) instead |
org.apache.lucene.queryParser.precedence.PrecedenceQueryParser.getBooleanQuery(Vector)
use PrecedenceQueryParser.getBooleanQuery(List) instead |
org.apache.lucene.queryParser.QueryParser.getBooleanQuery(Vector, boolean)
use QueryParser.getBooleanQuery(List, boolean) instead |
org.apache.lucene.queryParser.precedence.PrecedenceQueryParser.getBooleanQuery(Vector, boolean)
use PrecedenceQueryParser.getBooleanQuery(List, boolean) instead |
org.apache.lucene.queryParser.CharStream.getColumn()
|
org.apache.lucene.demo.html.SimpleCharStream.getColumn()
|
org.apache.lucene.queryParser.precedence.CharStream.getColumn()
|
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getColumn()
|
org.apache.lucene.index.IndexReader.getCurrentVersion(File)
Use IndexReader.getCurrentVersion(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.getCurrentVersion(String)
Use IndexReader.getCurrentVersion(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.search.FieldCache.getCustom(IndexReader, String, SortComparator)
Please implement FieldComparatorSource directly, instead. |
org.apache.lucene.analysis.standard.StandardAnalyzer.getDefaultReplaceInvalidAcronym()
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.store.FSDirectory.getDirectory(File)
Use FSDirectory.open(File) |
org.apache.lucene.store.FSDirectory.getDirectory(File, boolean)
Use IndexWriter's create flag, instead, to
create a new index. |
org.apache.lucene.store.FSDirectory.getDirectory(File, LockFactory)
Use FSDirectory.open(File, LockFactory) |
org.apache.lucene.store.FSDirectory.getDirectory(String)
Use FSDirectory.open(File) |
org.apache.lucene.store.FSDirectory.getDirectory(String, boolean)
Use IndexWriter's create flag, instead, to
create a new index. |
org.apache.lucene.store.FSDirectory.getDirectory(String, LockFactory)
Use FSDirectory.open(File, LockFactory) |
org.apache.lucene.index.IndexReader.getDisableFakeNorms()
This currently defaults to false (to remain
back-compatible), but in 3.0 it will be hardwired to
true, meaning the norms() methods will return null for
fields that had disabled norms. |
org.apache.lucene.store.FSDirectory.getDisableLocks()
Use a constructor that takes a LockFactory and
supply NoLockFactory.getNoLockFactory(). |
org.apache.lucene.search.ExtendedFieldCache.getDoubles(IndexReader, String, ExtendedFieldCache.DoubleParser)
Will be removed in 3.0, this is for binary compatibility only |
org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsDefault()
Please specify this when you create the StopFilter |
org.apache.lucene.search.SortField.getFactory()
use SortField.getComparatorSource() |
org.apache.lucene.index.IndexReader.getFieldCacheKey()
|
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFieldValues(IndexReader, int, String)
|
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFragmentSource(StringBuilder, int[], String[], int, int)
|
org.apache.lucene.queryParser.CharStream.getLine()
|
org.apache.lucene.demo.html.SimpleCharStream.getLine()
|
org.apache.lucene.queryParser.precedence.CharStream.getLine()
|
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getLine()
|
org.apache.lucene.search.ExtendedFieldCache.getLongs(IndexReader, String, ExtendedFieldCache.LongParser)
Will be removed in 3.0, this is for binary compatibility only |
org.apache.lucene.search.highlight.Highlighter.getMaxDocBytesToAnalyze()
See Highlighter.getMaxDocCharsToAnalyze(), since this value has always counted chars. They both set the same internal value, however. |
org.apache.lucene.index.IndexWriter.getMaxSyncPauseSeconds()
This will be removed in 3.0, when
autoCommit=true is removed from IndexWriter. |
org.apache.lucene.document.AbstractField.getOmitTf()
Renamed to AbstractField.getOmitTermFreqAndPositions() |
org.apache.lucene.document.Fieldable.getOmitTf()
Renamed to AbstractField.getOmitTermFreqAndPositions() |
org.apache.lucene.analysis.TokenStream.getOnlyUseNewAPI()
This setting will no longer be needed in Lucene 3.0 as
the old API will be removed. |
org.apache.lucene.analysis.nl.WordlistLoader.getStemDict(File)
use org.apache.lucene.analysis.WordlistLoader.getStemDict(File) instead |
org.apache.lucene.search.MultiTermQuery.getTerm()
Check the subclass for possible term access; getTerm does not
make sense for all MultiTermQuerys and will be removed. |
org.apache.lucene.search.spans.SpanOrQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanNotQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanFirstQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.FieldMaskingSpanQuery.getTerms()
use FieldMaskingSpanQuery.extractTerms(Set) instead. |
org.apache.lucene.search.spans.SpanNearQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanTermQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.SortField.getUseLegacySearch()
will be removed in Lucene 3.0. |
org.apache.lucene.queryParser.QueryParser.getUseOldRangeQuery()
Please use QueryParser.getMultiTermRewriteMethod() instead. |
org.apache.lucene.search.BooleanQuery.getUseScorer14()
Use BooleanQuery.getAllowDocsOutOfOrder() instead. |
org.apache.lucene.analysis.nl.WordlistLoader.getWordtable(File)
use WordlistLoader.getWordSet(File) instead |
org.apache.lucene.analysis.nl.WordlistLoader.getWordtable(String)
use WordlistLoader.getWordSet(File) instead |
org.apache.lucene.analysis.nl.WordlistLoader.getWordtable(String, String)
use WordlistLoader.getWordSet(File) instead |
org.apache.lucene.search.Similarity.idf(Collection, Searcher)
see Similarity.idfExplain(Collection, Searcher) |
org.apache.lucene.search.Similarity.idf(Term, Searcher)
see Similarity.idfExplain(Term, Searcher) |
org.apache.lucene.index.IndexReader.indexExists(File)
Use IndexReader.indexExists(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.indexExists(String)
Use IndexReader.indexExists(Directory) instead
This method will be removed in the 3.0 release. |
org.apache.lucene.util.PriorityQueue.insert(Object)
use PriorityQueue.insertWithOverflow(Object) instead, which
encourages object reuse. |
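The reuse idea behind insertWithOverflow can be sketched in plain Java (a toy bounded queue, not Lucene's PriorityQueue; the int[1] "holder" stands in for a reusable entry object): when the queue is full, the displaced least element is handed back to the caller, who can refill it instead of allocating a new object per insert.

```java
import java.util.ArrayList;
import java.util.List;

// Toy capacity-bounded queue keeping the N largest entries; eviction uses a
// linear scan for brevity rather than a heap.
public class BoundedQueue {
    private final int capacity;
    private final List<int[]> items = new ArrayList<int[]>();

    public BoundedQueue(int capacity) { this.capacity = capacity; }

    /** Returns null while there is room; otherwise returns the rejected or
     *  evicted holder so the caller can reuse it. */
    public int[] insertWithOverflow(int[] holder) {
        if (items.size() < capacity) { items.add(holder); return null; }
        int least = 0;
        for (int i = 1; i < items.size(); i++)
            if (items.get(i)[0] < items.get(least)[0]) least = i;
        if (holder[0] <= items.get(least)[0]) return holder; // not competitive
        int[] evicted = items.get(least);                    // hand back for reuse
        items.set(least, holder);
        return evicted;
    }
}
```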
org.apache.lucene.index.IndexReader.isLocked(Directory)
Please use IndexWriter.isLocked(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexWriter.isLocked(String)
Use IndexWriter.isLocked(Directory) |
org.apache.lucene.index.IndexReader.isLocked(String)
Use IndexReader.isLocked(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer.isReplaceInvalidAcronym()
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.index.IndexReader.lastModified(File)
Use IndexReader.lastModified(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.lastModified(String)
Use IndexReader.lastModified(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.store.Directory.list()
For some Directory implementations (FSDirectory , and its subclasses), this method
silently filters its results to include only index
files. Please use Directory.listAll() instead, which
does no filtering. |
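The hazard described above can be illustrated in plain Java (the file names and the extension whitelist here are made up for the example; Lucene's actual filtering lived in its own index-file-name logic): the deprecated behavior silently dropped non-index files, while listAll() returns everything.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Contrast of filtered vs. unfiltered directory listings (illustration only).
public class ListingDemo {
    // Hypothetical index-file extensions for the sake of the example.
    static final List<String> INDEX_EXTENSIONS = Arrays.asList("cfs", "fnm", "gen");

    /** What the deprecated list() effectively did: silently drop non-index files. */
    public static List<String> listFiltered(String[] files) {
        List<String> out = new ArrayList<String>();
        for (String f : files) {
            int dot = f.lastIndexOf('.');
            if (dot >= 0 && INDEX_EXTENSIONS.contains(f.substring(dot + 1))) out.add(f);
        }
        return out;
    }

    /** What listAll() does: return every file, no filtering. */
    public static List<String> listAll(String[] files) { return Arrays.asList(files); }
}
```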
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(StringBuilder, int[], String[], FieldFragList.WeightedFragInfo)
|
org.apache.lucene.analysis.KeywordTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ISOLatin1AccentFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.CharTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.CachingTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.TokenStream.next()
The returned Token is a "full private copy" (not re-used across
calls to TokenStream.next()) but will be slower than calling
TokenStream.next(Token) or using the new TokenStream.incrementToken()
method with the new AttributeSource API. |
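The difference between the two styles can be sketched with a toy stream (plain Java, not Lucene's TokenStream or its attributes): the old next() returned a fresh token per call, while incrementToken() mutates one shared "attribute" and only reports whether another token was produced.

```java
// Toy whitespace tokenizer contrasting the two token-consumption styles.
public class WhitespaceStream {
    private final String[] words;
    private int pos;

    /** Shared, reused buffer standing in for a term attribute. */
    public final StringBuilder termAttribute = new StringBuilder();

    public WhitespaceStream(String text) { words = text.split("\\s+"); }

    /** New-style API: refill the shared attribute, return false at end. */
    public boolean incrementToken() {
        if (pos >= words.length) return false;
        termAttribute.setLength(0);          // reuse the same buffer
        termAttribute.append(words[pos++]);
        return true;
    }

    /** Old-style API: hand out a fresh object per call, null at end.
     *  (Shares the cursor with incrementToken; use one style per stream.) */
    public String next() {
        return pos < words.length ? words[pos++] : null;
    }
}
```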
org.apache.lucene.analysis.standard.StandardTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.search.ScoreCachingWrappingScorer.next()
use ScoreCachingWrappingScorer.nextDoc() instead. |
org.apache.lucene.search.FilteredDocIdSetIterator.next()
use FilteredDocIdSetIterator.nextDoc() instead. |
org.apache.lucene.search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.next()
use FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.nextDoc() instead. |
org.apache.lucene.search.ConstantScoreQuery.ConstantScorer.next()
use ConstantScoreQuery.ConstantScorer.nextDoc() instead. |
org.apache.lucene.search.DocIdSetIterator.next()
use DocIdSetIterator.nextDoc() instead. This will be removed in 3.0 |
org.apache.lucene.search.spans.SpanScorer.next()
use SpanScorer.nextDoc() instead. |
org.apache.lucene.util.OpenBitSetIterator.next()
use OpenBitSetIterator.nextDoc() instead. |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.fr.ElisionFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.PrefixAndSuffixAwareTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.EmptyTokenStream.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.PrefixAwareTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.SingleTokenTokenStream.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.NGramTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.EdgeNGramTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.NGramTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.payloads.TokenOffsetPayloadTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.payloads.TypeAsPayloadTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.payloads.NumericPayloadTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.position.PositionFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.shingle.ShingleMatrixFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.shingle.ShingleFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.th.ThaiWordFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.snowball.SnowballFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.wikipedia.analysis.WikipediaTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.KeywordTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ISOLatin1AccentFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.CharTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.CachingTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.TokenStream.next(Token)
The new TokenStream.incrementToken() and AttributeSource
APIs should be used instead. |
org.apache.lucene.analysis.standard.StandardTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.fr.ElisionFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.PrefixAndSuffixAwareTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.EmptyTokenStream.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.PrefixAwareTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.miscellaneous.SingleTokenTokenStream.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.NGramTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.EdgeNGramTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ngram.NGramTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.payloads.TokenOffsetPayloadTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.payloads.TypeAsPayloadTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.payloads.NumericPayloadTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.position.PositionFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.shingle.ShingleMatrixFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.shingle.ShingleFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.th.ThaiWordFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.snowball.SnowballFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.wikipedia.analysis.WikipediaTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.index.IndexReader.open(Directory)
Use IndexReader.open(Directory, boolean) instead
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(Directory, IndexDeletionPolicy)
Use IndexReader.open(Directory, IndexDeletionPolicy, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(File)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(File, boolean)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(IndexCommit)
Use IndexReader.open(IndexCommit, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(IndexCommit, IndexDeletionPolicy)
Use IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(String)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(String, boolean)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.queryParser.MultiFieldQueryParser.parse(String[], String[], Analyzer)
Use MultiFieldQueryParser.parse(Version,String[],String[],Analyzer) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser.parse(String[], String[], BooleanClause.Occur[], Analyzer)
Use MultiFieldQueryParser.parse(Version, String[], String[], BooleanClause.Occur[], Analyzer) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser.parse(String, String[], BooleanClause.Occur[], Analyzer)
Use MultiFieldQueryParser.parse(Version, String, String[], BooleanClause.Occur[], Analyzer) instead |
org.apache.lucene.util.PriorityQueue.put(Object)
use PriorityQueue.add(Object), which returns the new top object,
saving an additional call to PriorityQueue.top(). |
org.apache.lucene.store.IndexInput.readChars(char[], int, int)
please use readString or readBytes
instead, and construct the String
from those UTF-8 bytes |
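The suggested replacement path boils down to reading the raw bytes and decoding them as UTF-8 yourself. The byte-to-String step is plain standard-library Java (IndexInput and its read methods are Lucene's; only the decoding is shown here):

```java
import java.nio.charset.StandardCharsets;

// Decode a UTF-8 byte range into a String -- the step the caller performs
// after readBytes, replacing the deprecated readChars.
public class Utf8Decode {
    public static String fromUtf8(byte[] bytes, int offset, int length) {
        return new String(bytes, offset, length, StandardCharsets.UTF_8);
    }
}
```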
org.apache.lucene.store.FSDirectory.renameFile(String, String)
|
org.apache.lucene.store.RAMDirectory.renameFile(String, String)
|
org.apache.lucene.store.Directory.renameFile(String, String)
|
org.apache.lucene.analysis.standard.StandardAnalyzer.reusableTokenStream(String, Reader)
Use StandardAnalyzer.tokenStream(java.lang.String, java.io.Reader) instead |
org.apache.lucene.search.Scorer.score(HitCollector)
use Scorer.score(Collector) instead. |
org.apache.lucene.search.Scorer.score(HitCollector, int)
use Scorer.score(Collector, int, int) instead. |
org.apache.lucene.search.Similarity.scorePayload(String, byte[], int, int)
See Similarity.scorePayload(int, String, int, int, byte[], int, int) |
org.apache.lucene.search.Searcher.search(Query)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int) instead. |
org.apache.lucene.search.Searcher.search(Query, Filter)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int) instead. |
org.apache.lucene.search.Searcher.search(Query, Filter, HitCollector)
use Searcher.search(Query, Filter, Collector) instead. |
org.apache.lucene.search.Searcher.search(Query, Filter, Sort)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int, Sort) instead. |
org.apache.lucene.search.Searcher.search(Query, HitCollector)
use Searcher.search(Query, Collector) instead. |
org.apache.lucene.search.Searcher.search(Query, Sort)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int, Sort) instead. |
org.apache.lucene.search.RemoteSearchable.search(Weight, Filter, HitCollector)
use RemoteSearchable.search(Weight, Filter, Collector) instead. |
org.apache.lucene.search.Searchable.search(Weight, Filter, HitCollector)
use Searchable.search(Weight, Filter, Collector) instead. |
org.apache.lucene.search.Searcher.search(Weight, Filter, HitCollector)
use Searcher.search(Weight, Filter, Collector) instead. |
org.apache.lucene.search.BooleanQuery.setAllowDocsOutOfOrder(boolean)
this is not needed anymore, as
Weight.scoresDocsOutOfOrder() is used. |
org.apache.lucene.analysis.standard.StandardAnalyzer.setDefaultReplaceInvalidAcronym(boolean)
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.index.IndexReader.setDisableFakeNorms(boolean)
This currently defaults to false (to remain
back-compatible), but in 3.0 it will be hardwired to
true, meaning the norms() methods will return null for
fields that had disabled norms. |
org.apache.lucene.store.FSDirectory.setDisableLocks(boolean)
Use FSDirectory.open(File, LockFactory), or a constructor
that takes a LockFactory, and supply
NoLockFactory.getNoLockFactory(). This setting does not work
with FSDirectory.open(File); only the deprecated getDirectory
methods respect this setting. |
org.apache.lucene.analysis.StopFilter.setEnablePositionIncrementsDefault(boolean)
Please specify this when you create the StopFilter |
org.apache.lucene.search.highlight.Highlighter.setMaxDocBytesToAnalyze(int)
See Highlighter.setMaxDocCharsToAnalyze(int) , since this value has always counted chars |
org.apache.lucene.index.IndexWriter.setMaxSyncPauseSeconds(double)
This will be removed in 3.0, when
autoCommit=true is removed from IndexWriter. |
org.apache.lucene.document.AbstractField.setOmitTf(boolean)
Renamed to AbstractField.setOmitTermFreqAndPositions(boolean) |
org.apache.lucene.document.Fieldable.setOmitTf(boolean)
Renamed to AbstractField.setOmitTermFreqAndPositions(boolean) |
org.apache.lucene.analysis.TokenStream.setOnlyUseNewAPI(boolean)
This setting will no longer be needed in Lucene 3.0 as the old
API will be removed. |
org.apache.lucene.analysis.Analyzer.setOverridesTokenStreamMethod(Class)
This is only present to preserve
back-compat of classes that subclass a core analyzer
and override tokenStream but not reusableTokenStream |
org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value
See https://issues.apache.org/jira/browse/LUCENE-1068 |
org.apache.lucene.analysis.standard.StandardAnalyzer.setReplaceInvalidAcronym(boolean)
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.search.Sort.setSort(String)
Please specify the type explicitly by
first creating a SortField and then using Sort.setSort(SortField) |
org.apache.lucene.search.Sort.setSort(String[])
Please specify the type explicitly by
first creating SortFields and then using Sort.setSort(SortField[]) |
org.apache.lucene.search.Sort.setSort(String, boolean)
Please specify the type explicitly by
first creating a SortField and then using Sort.setSort(SortField) |
org.apache.lucene.index.IndexReader.setTermInfosIndexDivisor(int)
Please use IndexReader.open(Directory, IndexDeletionPolicy, boolean, int) to specify the required TermInfos index divisor instead. |
org.apache.lucene.analysis.Token.setTermText(String)
Use Token.setTermBuffer(char[], int, int),
Token.setTermBuffer(String), or
Token.setTermBuffer(String, int, int) instead. |
org.apache.lucene.search.SortField.setUseLegacySearch(boolean)
Will be removed in Lucene 3.0. |
org.apache.lucene.queryParser.QueryParser.setUseOldRangeQuery(boolean)
Please use QueryParser.setMultiTermRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod) instead. |
org.apache.lucene.search.BooleanQuery.setUseScorer14(boolean)
Use BooleanQuery.setAllowDocsOutOfOrder(boolean) instead. |
org.apache.lucene.document.Field.setValue(TokenStream)
Use Field.setTokenStream(org.apache.lucene.analysis.TokenStream) instead |
org.apache.lucene.store.IndexInput.skipChars(int)
This method operates on the old "modified UTF-8" encoded
strings |
org.apache.lucene.search.ScoreCachingWrappingScorer.skipTo(int)
use ScoreCachingWrappingScorer.advance(int) instead. |
org.apache.lucene.search.FilteredDocIdSetIterator.skipTo(int)
use FilteredDocIdSetIterator.advance(int) instead. |
org.apache.lucene.search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.skipTo(int)
use FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.advance(int) instead. |
org.apache.lucene.search.ConstantScoreQuery.ConstantScorer.skipTo(int)
use ConstantScoreQuery.ConstantScorer.advance(int) instead. |
org.apache.lucene.search.DocIdSetIterator.skipTo(int)
use DocIdSetIterator.advance(int) instead. This will be removed in 3.0 |
org.apache.lucene.search.spans.SpanScorer.skipTo(int)
use SpanScorer.advance(int) instead. |
org.apache.lucene.util.OpenBitSetIterator.skipTo(int)
use OpenBitSetIterator.advance(int) instead. |
org.apache.lucene.index.TermEnum.skipTo(Term)
This method is not performant and will be removed in Lucene 3.0.
Use IndexReader.terms(Term) to create a new TermEnum positioned at a
given term. |
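All of the skipTo deprecations above follow one pattern: advance(int) replaces skipTo(int), returning the matched doc id (or NO_MORE_DOCS) instead of a boolean. A minimal migration sketch, assuming Lucene 2.9 on the classpath (the helper name is ours, not Lucene's):

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

class AdvanceMigration {
    // Old style (removed in 3.0):
    //   if (it.skipTo(target)) { int doc = it.doc(); ... }
    // New style: advance() positions the iterator and returns the doc id directly.
    static int firstDocAtOrAfter(DocIdSetIterator it, int target) throws IOException {
        int doc = it.advance(target);
        // doc == DocIdSetIterator.NO_MORE_DOCS when the iterator is exhausted
        return doc;
    }
}
```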
org.apache.lucene.analysis.Token.termText()
This method now has a performance penalty
because the text is stored internally in a char[]. If
possible, use Token.termBuffer() and Token.termLength() directly instead. If you really need a
String, use Token.term() |
org.apache.lucene.index.IndexReader.unlock(Directory)
Please use IndexWriter.unlock(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.store.IndexOutput.writeChars(char[], int, int)
Please pre-convert to UTF-8 bytes instead, or use IndexOutput.writeString(java.lang.String) |
org.apache.lucene.store.IndexOutput.writeChars(String, int, int)
Please pre-convert to UTF-8 bytes instead, or use IndexOutput.writeString(java.lang.String) |
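The writeChars replacements above amount to letting IndexOutput do the encoding itself; a sketch, assuming Lucene 2.9 on the classpath (the helper name is ours):

```java
import java.io.IOException;

import org.apache.lucene.store.IndexOutput;

class WriteStringMigration {
    // Old (deprecated): out.writeChars(text, 0, text.length());
    //   -- wrote the legacy "modified UTF-8" encoding.
    // New: writeString writes a length-prefixed standard UTF-8 string.
    static void writeText(IndexOutput out, String text) throws IOException {
        out.writeString(text);
    }
}
```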
Deprecated Constructors |
org.apache.lucene.queryParser.analyzing.AnalyzingQueryParser(String, Analyzer)
Use AnalyzingQueryParser.AnalyzingQueryParser(Version,
String, Analyzer) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer()
Use ArabicAnalyzer.ArabicAnalyzer(Version) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(File)
Use ArabicAnalyzer.ArabicAnalyzer(Version, File) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Hashtable)
Use ArabicAnalyzer.ArabicAnalyzer(Version, Hashtable) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(String[])
Use ArabicAnalyzer.ArabicAnalyzer(Version, String[]) instead |
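Most constructor deprecations below share a single migration: add a leading Version argument so the analyzer can keep its behavior compatible with the index you built. A sketch with ArabicAnalyzer, assuming Lucene 2.9 on the classpath (any of the listed analyzers works the same way):

```java
import org.apache.lucene.analysis.ar.ArabicAnalyzer;
import org.apache.lucene.util.Version;

class VersionMigration {
    static ArabicAnalyzer build() {
        // Old (deprecated): new ArabicAnalyzer();
        // New: state the version you are compatible with, e.g. LUCENE_29.
        return new ArabicAnalyzer(Version.LUCENE_29);
    }
}
```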
org.apache.lucene.analysis.br.BrazilianAnalyzer()
Use BrazilianAnalyzer.BrazilianAnalyzer(Version) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(File)
Use BrazilianAnalyzer.BrazilianAnalyzer(Version, File) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Map)
Use BrazilianAnalyzer.BrazilianAnalyzer(Version, Map) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(String[])
Use BrazilianAnalyzer.BrazilianAnalyzer(Version, String[]) instead |
org.apache.lucene.analysis.cjk.CJKAnalyzer()
Use CJKAnalyzer.CJKAnalyzer(Version) instead |
org.apache.lucene.analysis.cjk.CJKAnalyzer(String[])
Use CJKAnalyzer.CJKAnalyzer(Version, String[]) instead |
org.apache.lucene.queryParser.complexPhrase.ComplexPhraseQueryParser(String, Analyzer)
Use ComplexPhraseQueryParser.ComplexPhraseQueryParser(Version, String, Analyzer)
instead. |
org.apache.lucene.analysis.cz.CzechAnalyzer()
Use CzechAnalyzer.CzechAnalyzer(Version) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(File)
Use CzechAnalyzer.CzechAnalyzer(Version, File) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(HashSet)
Use CzechAnalyzer.CzechAnalyzer(Version, HashSet) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(String[])
Use CzechAnalyzer.CzechAnalyzer(Version, String[]) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer()
Use DutchAnalyzer.DutchAnalyzer(Version) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(File)
Use DutchAnalyzer.DutchAnalyzer(Version, File) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(HashSet)
Use DutchAnalyzer.DutchAnalyzer(Version, HashSet) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(String[])
Use DutchAnalyzer.DutchAnalyzer(Version, String[]) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer()
Use FrenchAnalyzer.FrenchAnalyzer(Version) instead. |
org.apache.lucene.analysis.fr.FrenchAnalyzer(File)
Use FrenchAnalyzer.FrenchAnalyzer(Version, File) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(String[])
Use FrenchAnalyzer.FrenchAnalyzer(Version,
String[]) instead. |
org.apache.lucene.store.FSDirectory.FSIndexInput.Descriptor(File, String)
|
org.apache.lucene.store.FSDirectory.FSIndexInput(File)
|
org.apache.lucene.store.FSDirectory.FSIndexInput(File, int)
|
org.apache.lucene.store.FSDirectory.FSIndexOutput(File)
|
org.apache.lucene.store.FSDirectory()
|
org.apache.lucene.analysis.de.GermanAnalyzer()
Use GermanAnalyzer.GermanAnalyzer(Version) instead |
org.apache.lucene.analysis.de.GermanAnalyzer(File)
Use GermanAnalyzer.GermanAnalyzer(Version, File) instead |
org.apache.lucene.analysis.de.GermanAnalyzer(Map)
Use GermanAnalyzer.GermanAnalyzer(Version, Map) instead |
org.apache.lucene.analysis.de.GermanAnalyzer(String[])
Use GermanAnalyzer.GermanAnalyzer(Version, String[]) instead |
org.apache.lucene.analysis.el.GreekAnalyzer()
Use GreekAnalyzer.GreekAnalyzer(Version) instead |
org.apache.lucene.analysis.el.GreekAnalyzer(char[])
Use GreekAnalyzer.GreekAnalyzer(Version) instead. |
org.apache.lucene.analysis.el.GreekAnalyzer(char[], Map)
Use GreekAnalyzer.GreekAnalyzer(Version, Map) instead. |
org.apache.lucene.analysis.el.GreekAnalyzer(char[], String[])
Use GreekAnalyzer.GreekAnalyzer(Version, String[]) instead. |
org.apache.lucene.analysis.el.GreekAnalyzer(Map)
Use GreekAnalyzer.GreekAnalyzer(Version, Map) instead |
org.apache.lucene.analysis.el.GreekAnalyzer(String[])
Use GreekAnalyzer.GreekAnalyzer(Version, String[]) instead |
org.apache.lucene.analysis.el.GreekLowerCaseFilter(TokenStream, char[])
Use GreekLowerCaseFilter.GreekLowerCaseFilter(TokenStream) instead. |
org.apache.lucene.demo.html.HTMLParser(File)
Use HTMLParser(FileInputStream) instead |
org.apache.lucene.index.IndexReader(Directory)
Use IndexReader() instead |
org.apache.lucene.search.IndexSearcher(Directory)
Use IndexSearcher.IndexSearcher(Directory, boolean) instead |
org.apache.lucene.search.IndexSearcher(String)
Use IndexSearcher.IndexSearcher(Directory, boolean) instead |
org.apache.lucene.search.IndexSearcher(String, boolean)
Use IndexSearcher.IndexSearcher(Directory, boolean) instead |
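The IndexSearcher replacements take a Directory plus an explicit readOnly flag; a sketch, assuming Lucene 2.9 on the classpath (the index path is hypothetical):

```java
import java.io.File;
import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

class SearcherMigration {
    static IndexSearcher open() throws IOException {
        // Old (deprecated): new IndexSearcher("/path/to/index");
        Directory dir = FSDirectory.open(new File("/path/to/index"));
        // readOnly=true gives better concurrency when the reader never deletes.
        return new IndexSearcher(dir, true);
    }
}
```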
org.apache.lucene.index.IndexWriter(Directory, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer, boolean, IndexDeletionPolicy)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,IndexDeletionPolicy,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer, IndexDeletionPolicy)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,IndexDeletionPolicy,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(File, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(File, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(File, Analyzer, boolean, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory,
Analyzer, boolean, MaxFieldLength) |
org.apache.lucene.index.IndexWriter(File, Analyzer, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory,
Analyzer, MaxFieldLength) |
org.apache.lucene.index.IndexWriter(String, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(String, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(String, Analyzer, boolean, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory, Analyzer,
boolean, MaxFieldLength) |
org.apache.lucene.index.IndexWriter(String, Analyzer, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory, Analyzer, MaxFieldLength) |
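The IndexWriter constructor migrations above combine three changes: pass a Directory rather than a path, pass an explicit MaxFieldLength, and call commit() yourself now that autoCommit is gone. A sketch, assuming Lucene 2.9 on the classpath (the path and analyzer choice are hypothetical):

```java
import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

class WriterMigration {
    static void rebuild() throws IOException {
        // Old (deprecated): new IndexWriter("/path/to/index", analyzer, true);
        Directory dir = FSDirectory.open(new File("/path/to/index"));
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_29),
                true, // create a new index
                IndexWriter.MaxFieldLength.UNLIMITED);
        try {
            // ... addDocument calls ...
            writer.commit(); // autoCommit is gone; commit explicitly when needed
        } finally {
            writer.close();
        }
    }
}
```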
org.apache.lucene.index.MergePolicy.MergeException(String)
Use MergePolicy.MergeException(String, Directory) instead |
org.apache.lucene.index.MergePolicy.MergeException(Throwable)
Use MergePolicy.MergeException(Throwable, Directory) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser(String[], Analyzer)
Please use MultiFieldQueryParser.MultiFieldQueryParser(Version, String[], Analyzer) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser(String[], Analyzer, Map)
Please use MultiFieldQueryParser.MultiFieldQueryParser(Version, String[], Analyzer, Map) instead |
org.apache.lucene.search.MultiTermQuery(Term)
Check the subclass for possible term access; the Term does not
make sense for all MultiTermQuerys and will be removed. |
org.apache.lucene.store.NIOFSDirectory.NIOFSIndexInput(File, int)
Please use the constructor taking chunkSize |
org.apache.lucene.index.memory.PatternAnalyzer(Pattern, boolean, Set)
Use PatternAnalyzer.PatternAnalyzer(Version, Pattern, boolean, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer()
Use PersianAnalyzer.PersianAnalyzer(Version) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(File)
Use PersianAnalyzer.PersianAnalyzer(Version, File) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Hashtable)
Use PersianAnalyzer.PersianAnalyzer(Version, Hashtable) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(String[])
Use PersianAnalyzer.PersianAnalyzer(Version, String[]) instead |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer(Analyzer)
Use QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer) instead |
org.apache.lucene.queryParser.QueryParser(String, Analyzer)
Use QueryParser.QueryParser(Version, String, Analyzer) instead |
org.apache.lucene.store.RAMDirectory(File)
Use RAMDirectory.RAMDirectory(Directory) instead |
org.apache.lucene.store.RAMDirectory(String)
Use RAMDirectory.RAMDirectory(Directory) instead |
org.apache.lucene.analysis.ru.RussianAnalyzer()
Use RussianAnalyzer.RussianAnalyzer(Version) instead |
org.apache.lucene.analysis.ru.RussianAnalyzer(char[])
Use RussianAnalyzer.RussianAnalyzer(Version) instead. |
org.apache.lucene.analysis.ru.RussianAnalyzer(char[], Map)
Use RussianAnalyzer.RussianAnalyzer(Version, Map) instead. |
org.apache.lucene.analysis.ru.RussianAnalyzer(char[], String[])
Use RussianAnalyzer.RussianAnalyzer(Version, String[]) instead. |
org.apache.lucene.analysis.ru.RussianAnalyzer(Map)
Use RussianAnalyzer.RussianAnalyzer(Version, Map) instead. |
org.apache.lucene.analysis.ru.RussianAnalyzer(String[])
Use RussianAnalyzer.RussianAnalyzer(Version, String[]) instead. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(Reader, char[])
Use RussianLetterTokenizer.RussianLetterTokenizer(Reader) instead. |
org.apache.lucene.analysis.ru.RussianLowerCaseFilter(TokenStream, char[])
Use RussianLowerCaseFilter.RussianLowerCaseFilter(TokenStream) instead. |
org.apache.lucene.analysis.ru.RussianStemFilter(TokenStream, char[])
Use RussianStemFilter.RussianStemFilter(TokenStream) instead. |
org.apache.lucene.store.SimpleFSDirectory.SimpleFSIndexInput(File)
Please use the constructor taking chunkSize |
org.apache.lucene.store.SimpleFSDirectory.SimpleFSIndexInput(File, int)
Please use the constructor taking chunkSize |
org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer()
Use SmartChineseAnalyzer.SmartChineseAnalyzer(Version) instead |
org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer(boolean)
Use SmartChineseAnalyzer.SmartChineseAnalyzer(Version, boolean) instead |
org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer(Set)
Use SmartChineseAnalyzer.SmartChineseAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.snowball.SnowballAnalyzer(String)
Use SnowballAnalyzer.SnowballAnalyzer(Version, String) instead |
org.apache.lucene.analysis.snowball.SnowballAnalyzer(String, String[])
Use SnowballAnalyzer.SnowballAnalyzer(Version, String, String[]) instead |
org.apache.lucene.search.Sort(String)
Please specify the type explicitly by
first creating a SortField and then using Sort.Sort(SortField) |
org.apache.lucene.search.Sort(String[])
Please specify the types explicitly by
first creating SortFields and then using Sort.Sort(SortField[]) |
org.apache.lucene.search.Sort(String, boolean)
Please specify the type explicitly by
first creating a SortField and then using Sort.Sort(SortField) |
org.apache.lucene.search.SortField(String)
Please specify the exact type instead. |
org.apache.lucene.search.SortField(String, boolean)
Please specify the exact type instead. |
org.apache.lucene.search.SortField(String, SortComparatorSource)
Use SortField(String field, FieldComparatorSource comparator) instead |
org.apache.lucene.search.SortField(String, SortComparatorSource, boolean)
Use SortField(String field, FieldComparatorSource comparator, boolean reverse) instead |
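The Sort and SortField migrations above all say the same thing: name the field's type yourself instead of letting Lucene auto-detect it from the first term. A sketch, assuming Lucene 2.9 on the classpath (the field name and type are hypothetical):

```java
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

class SortMigration {
    static Sort byPriceDescending() {
        // Old (deprecated): new Sort("price", true) -- type was auto-detected.
        // New: state the type (here FLOAT) and the reverse flag explicitly.
        return new Sort(new SortField("price", SortField.FLOAT, true));
    }
}
```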
org.apache.lucene.search.SpanFilterResult(BitSet, List)
Use SpanFilterResult.SpanFilterResult(DocIdSet, List) instead |
org.apache.lucene.analysis.standard.StandardAnalyzer()
Use StandardAnalyzer.StandardAnalyzer(Version) instead. |
org.apache.lucene.analysis.standard.StandardAnalyzer(boolean)
Will be removed in 3.X, when true becomes the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(File)
Use StandardAnalyzer.StandardAnalyzer(Version, File)
instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(File, boolean)
Will be removed in 3.X, when true becomes the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(Reader)
Use StandardAnalyzer.StandardAnalyzer(Version, Reader)
instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(Reader, boolean)
Will be removed in 3.X, when true becomes the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(Set)
Use StandardAnalyzer.StandardAnalyzer(Version, Set)
instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(Set, boolean)
Will be removed in 3.X, when true becomes the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(String[])
Use StandardAnalyzer.StandardAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(String[], boolean)
Will be removed in 3.X, when true becomes the only valid value |
org.apache.lucene.analysis.standard.StandardTokenizer(AttributeSource.AttributeFactory, Reader, boolean)
Use StandardTokenizer.StandardTokenizer(Version, org.apache.lucene.util.AttributeSource.AttributeFactory, Reader) instead |
org.apache.lucene.analysis.standard.StandardTokenizer(AttributeSource, Reader, boolean)
Use StandardTokenizer.StandardTokenizer(Version, AttributeSource, Reader) instead |
org.apache.lucene.analysis.standard.StandardTokenizer(Reader)
Use StandardTokenizer.StandardTokenizer(Version,
Reader) instead |
org.apache.lucene.analysis.standard.StandardTokenizer(Reader, boolean)
Use StandardTokenizer.StandardTokenizer(Version, Reader) instead |
org.apache.lucene.analysis.StopAnalyzer()
Use StopAnalyzer.StopAnalyzer(Version) instead |
org.apache.lucene.analysis.StopAnalyzer(boolean)
Use StopAnalyzer.StopAnalyzer(Version) instead |
org.apache.lucene.analysis.StopAnalyzer(File)
Use StopAnalyzer.StopAnalyzer(Version, File) instead |
org.apache.lucene.analysis.StopAnalyzer(File, boolean)
Use StopAnalyzer.StopAnalyzer(Version, File) instead |
org.apache.lucene.analysis.StopAnalyzer(Reader)
Use StopAnalyzer.StopAnalyzer(Version, Reader) instead |
org.apache.lucene.analysis.StopAnalyzer(Reader, boolean)
Use StopAnalyzer.StopAnalyzer(Version, Reader) instead |
org.apache.lucene.analysis.StopAnalyzer(Set)
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopAnalyzer(Set, boolean)
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopAnalyzer(String[])
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopAnalyzer(String[], boolean)
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, String[])
Use StopFilter.StopFilter(boolean, TokenStream, Set) instead. |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, String[], boolean)
Use StopFilter.StopFilter(boolean, TokenStream, Set, boolean) instead. |
org.apache.lucene.analysis.StopFilter(TokenStream, Set)
Use StopFilter.StopFilter(boolean, TokenStream, Set) instead |
org.apache.lucene.analysis.StopFilter(TokenStream, Set, boolean)
Use StopFilter.StopFilter(boolean, TokenStream, Set, boolean) instead |
org.apache.lucene.analysis.StopFilter(TokenStream, String[])
Use StopFilter.StopFilter(boolean, TokenStream, String[]) instead |
org.apache.lucene.analysis.StopFilter(TokenStream, String[], boolean)
Use StopFilter.StopFilter(boolean, TokenStream, String[], boolean) instead |
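The StopFilter migrations above add two things: an up-front enablePositionIncrements flag and a Set of stop words instead of a String[]. A sketch, assuming Lucene 2.9 on the classpath (the stop words and input text are hypothetical):

```java
import java.io.StringReader;
import java.util.Set;

import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

class StopFilterMigration {
    static TokenStream build(String text) {
        // Old (deprecated): new StopFilter(tokens, new String[] {"the", "a"});
        Set stopWords = StopFilter.makeStopSet(new String[] {"the", "a"});
        TokenStream tokens = new WhitespaceTokenizer(new StringReader(text));
        // true = keep position increments across removed stop words.
        return new StopFilter(true, tokens, stopWords);
    }
}
```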
org.apache.lucene.analysis.th.ThaiAnalyzer()
Use ThaiAnalyzer.ThaiAnalyzer(Version) instead |