|
Deprecated Classes |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer
Deprecated as of 3.1; use StandardTokenizer instead. |
org.apache.lucene.analysis.CharArraySet.CharArraySetIterator
Use the standard iterator, which returns char[] instances. |
org.apache.lucene.analysis.cn.ChineseAnalyzer
Use StandardAnalyzer instead, which has the same functionality.
This analyzer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cn.ChineseFilter
Use StopFilter instead, which has the same functionality.
This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cn.ChineseTokenizer
Use StandardTokenizer instead, which has the same functionality.
This tokenizer will be removed in Lucene 5.0 |
org.apache.lucene.document.DateField
If you build a new index, use DateTools or
NumericField instead.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0). |
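For example, a new index might store a date either as a NumericField (efficiently searchable with NumericRangeQuery) or as a sortable DateTools string. A minimal sketch, assuming an existing Document doc and java.util.Date date:
    // numeric representation, queryable with NumericRangeQuery
    doc.add(new NumericField("created").setLongValue(date.getTime()));
    // or a sortable string representation, truncated to day resolution
    doc.add(new Field("createdDay",
        DateTools.dateToString(date, DateTools.Resolution.DAY),
        Field.Store.YES, Field.Index.NOT_ANALYZED));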
org.apache.lucene.spatial.geometry.shape.DistanceApproximation
This has been replaced with more accurate
math in LLRect. This class will be removed in a future release. |
org.apache.lucene.analysis.nl.DutchStemFilter
Use SnowballFilter with
DutchStemmer instead, which has the
same functionality. This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.nl.DutchStemmer
Use org.tartarus.snowball.ext.DutchStemmer instead,
which has the same functionality. This class will be removed in Lucene 5.0 |
org.apache.lucene.search.FilterManager
Used by the remote package, which is deprecated as well. Use
CachingWrapperFilter if you wish to cache
Filters. |
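A minimal caching sketch, assuming an existing Filter filter and IndexSearcher searcher (the field and query names below are illustrative):
    Filter cached = new CachingWrapperFilter(filter);
    // reuse the same cached instance across searches to benefit from the cache
    TopDocs hits = searcher.search(new TermQuery(new Term("color", "red")), cached, 10);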
org.apache.lucene.analysis.fr.FrenchStemFilter
Use SnowballFilter with
FrenchStemmer instead, which has the
same functionality. This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.fr.FrenchStemmer
Use org.tartarus.snowball.ext.FrenchStemmer instead,
which has the same functionality. This class will be removed in Lucene 5.0 |
org.apache.lucene.index.IndexWriter.MaxFieldLength
use LimitTokenCountAnalyzer instead. |
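For instance, to approximate the old MaxFieldLength.LIMITED behavior (10,000 tokens), a sketch assuming a Directory dir; Version.LUCENE_33 stands in for whichever version constant matches your setup:
    Analyzer limited = new LimitTokenCountAnalyzer(
        new StandardAnalyzer(Version.LUCENE_33), 10000);
    IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_33, limited));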
org.apache.lucene.analysis.ISOLatin1AccentFilter
If you build a new index, use ASCIIFoldingFilter
which covers a superset of Latin 1.
This class is included for use with existing
indexes and will be removed in a future release (possibly Lucene 4.0). |
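In a new analysis chain the replacement is a drop-in wrap; a sketch assuming a Reader reader over the input text:
    // ASCIIFoldingFilter folds accented characters to their ASCII equivalents
    TokenStream ts = new ASCIIFoldingFilter(
        new StandardTokenizer(Version.LUCENE_33, reader));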
org.apache.lucene.queryParser.standard.MultiFieldQueryParserWrapper
this class will be removed soon; it is a temporary class to be
used during the transition from the old query parser to the new
one |
org.apache.lucene.search.MultiSearcher
If you are using MultiSearcher over
IndexSearchers, please use MultiReader instead; this class
does not properly handle certain kinds of queries (see LUCENE-2756). |
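A migration sketch, assuming two Directory instances dir1 and dir2 that were previously searched through one MultiSearcher:
    // MultiReader presents the two indexes as one logical index
    IndexReader reader = new MultiReader(
        IndexReader.open(dir1), IndexReader.open(dir2));
    IndexSearcher searcher = new IndexSearcher(reader);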
org.apache.lucene.document.NumberTools
For new indexes use NumericUtils instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use NumericField
and NumericRangeQuery.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0). |
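For example, indexing and range-querying a numeric value (the field name and bounds are illustrative):
    doc.add(new NumericField("price").setLongValue(1299L));
    // inclusive range query over the prefix-encoded values
    Query q = NumericRangeQuery.newLongRange("price", 1000L, 2000L, true, true);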
org.apache.lucene.search.ParallelMultiSearcher
Please pass an ExecutorService to IndexSearcher, instead. |
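A sketch, assuming an open IndexReader reader; note that IndexSearcher does not shut the pool down for you:
    ExecutorService pool = Executors.newFixedThreadPool(4);
    // segments are searched concurrently on the given pool
    IndexSearcher searcher = new IndexSearcher(reader, pool);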
org.apache.lucene.util.Parameter
Use a Java 5 enum instead; this class will be removed in a later Lucene 3.x release. |
org.apache.lucene.queryParser.standard.QueryParserWrapper
this class will be removed soon; it is a temporary class to be
used during the transition from the old query parser to the new
one |
org.apache.lucene.search.RemoteCachingWrapperFilter
This package (all of contrib/remote) will be
removed in 4.0. |
org.apache.lucene.search.RemoteSearchable
This package (all of contrib/remote) will be
removed in 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer
Use StandardTokenizer instead, which has the same functionality.
This tokenizer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.ru.RussianLowerCaseFilter
Use LowerCaseFilter instead, which has the same
functionality. This filter will be removed in Lucene 4.0 |
org.apache.lucene.analysis.ru.RussianStemFilter
Use SnowballFilter with
RussianStemmer instead, which has the
same functionality. This filter will be removed in Lucene 4.0 |
org.apache.lucene.search.Searcher
In 4.0 this abstract class will be removed/absorbed
into IndexSearcher |
org.apache.lucene.analysis.shingle.ShingleMatrixFilter
Will be removed in Lucene 4.0. This filter is unmaintained and might not behave
correctly if used with custom Attributes, i.e. Attributes other than
the ones located in org.apache.lucene.analysis.tokenattributes. It also uses
hardcoded payload encoders, which make it hard to adapt to other use cases. |
org.apache.lucene.search.SimilarityDelegator
this class will be removed in 4.0. Please
subclass Similarity or DefaultSimilarity instead. |
org.apache.lucene.spatial.tier.projections.SinusoidalProjector
Deprecated until proper tests and a proper fix can be put in place. |
org.apache.lucene.analysis.snowball.SnowballAnalyzer
Use the language-specific analyzer in contrib/analyzers instead.
This analyzer will be removed in Lucene 5.0 |
org.apache.lucene.search.regex.SpanRegexQuery
Use new SpanMultiTermQueryWrapper<RegexQuery>(new RegexQuery()) instead.
This query will be removed in Lucene 4.0 |
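For example (the field name and regex pattern are illustrative; RegexQuery lives in the contrib queries module):
    SpanQuery q = new SpanMultiTermQueryWrapper<RegexQuery>(
        new RegexQuery(new Term("body", "luc.*")));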
org.apache.lucene.analysis.tokenattributes.TermAttributeImpl
This class is not used anymore. The backwards layer in
AttributeFactory uses the replacement implementation. |
|
Deprecated Methods |
org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(Directory...)
use IndexWriter.addIndexes(Directory...) instead |
org.apache.lucene.queryParser.core.processors.QueryNodeProcessorPipeline.addProcessor(QueryNodeProcessor)
this class now conforms to the List interface, so use
QueryNodeProcessorPipeline.add(QueryNodeProcessor) instead |
org.apache.lucene.queryParser.core.nodes.QueryNode.containsTag(CharSequence)
use QueryNode.containsTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.containsTag(CharSequence)
use QueryNodeImpl.containsTag(String) instead |
org.apache.lucene.store.Directory.copy(Directory, Directory, boolean)
should be replaced with calls to
Directory.copy(Directory, String, String) for every file that
needs copying. You can use the following code:
    IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
    for (String file : src.listAll()) {
      if (filter.accept(null, file)) {
        src.copy(dest, file, file);
      }
    }
|
org.apache.lucene.analysis.CharArraySet.copy(Set<?>)
use CharArraySet.copy(Version, Set) instead. |
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer)
Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer, ByteBuffer)
Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.search.Similarity.decodeNorm(byte)
Use Similarity.decodeNormValue(byte) instead. |
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer)
Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer, CharBuffer)
Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.search.Similarity.encodeNorm(float)
Use Similarity.encodeNormValue(float) instead. |
org.tartarus.snowball.SnowballProgram.eq_s_b(int, String)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_s(int, String)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_v_b(StringBuilder)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_v(StringBuilder)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.search.ChainedFilter.finalResult(OpenBitSetDISI, int)
Either use CachingWrapperFilter, or
switch to a different DocIdSet implementation yourself.
This method will be removed in Lucene 4.0 |
org.apache.lucene.search.BooleanFilter.finalResult(OpenBitSetDISI, int)
Either use CachingWrapperFilter, or
switch to a different DocIdSet implementation yourself.
This method will be removed in Lucene 4.0 |
org.apache.lucene.queryParser.CharStream.getColumn()
|
org.apache.lucene.benchmark.byTask.feeds.demohtml.SimpleCharStream.getColumn()
|
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getColumn()
|
org.apache.lucene.queryParser.surround.parser.CharStream.getColumn()
|
org.apache.lucene.util.IndexableBinaryStringTools.getDecodedLength(CharBuffer)
Use IndexableBinaryStringTools.getDecodedLength(char[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.index.IndexWriter.getDefaultWriteLockTimeout()
use IndexWriterConfig.getDefaultWriteLockTimeout() instead |
org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsVersionDefault(Version)
use StopFilter.StopFilter(Version, TokenStream, Set) instead |
org.apache.lucene.util.IndexableBinaryStringTools.getEncodedLength(ByteBuffer)
Use IndexableBinaryStringTools.getEncodedLength(byte[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.document.Document.getField(String)
use Document.getFieldable(java.lang.String) instead and cast depending on
data type. |
org.apache.lucene.queryParser.core.config.QueryConfigHandler.getFieldConfig(CharSequence)
use QueryConfigHandler.getFieldConfig(String) instead |
org.apache.lucene.queryParser.core.config.FieldConfig.getFieldName()
use FieldConfig.getField() instead |
org.apache.lucene.queryParser.QueryParser.getFieldQuery(String, String)
Use QueryParser.getFieldQuery(String,String,boolean) instead. |
org.apache.lucene.queryParser.standard.QueryParserWrapper.getFieldQuery(String, String)
Use QueryParserWrapper.getFieldQuery(String, String, boolean) instead |
org.apache.lucene.document.Document.getFields(String)
use Document.getFieldable(java.lang.String) instead and cast depending on
data type. |
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFieldValues(IndexReader, int, String)
|
org.apache.lucene.store.FSDirectory.getFile()
Use FSDirectory.getDirectory() instead. |
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFragmentSource(StringBuilder, int[], String[], int, int)
|
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter.getHyphenationTree(Reader)
Don't use Readers with a fixed charset to load XML files, unless programmatically created.
Use HyphenationCompoundWordTokenFilter.getHyphenationTree(InputSource) instead, where you can supply a default charset and input
stream, if you like. |
org.apache.lucene.queryParser.CharStream.getLine()
|
org.apache.lucene.benchmark.byTask.feeds.demohtml.SimpleCharStream.getLine()
|
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getLine()
|
org.apache.lucene.queryParser.surround.parser.CharStream.getLine()
|
org.apache.lucene.index.IndexWriter.getMaxBufferedDeleteTerms()
use IndexWriterConfig.getMaxBufferedDeleteTerms() instead |
org.apache.lucene.index.IndexWriter.getMaxBufferedDocs()
use IndexWriterConfig.getMaxBufferedDocs() instead. |
org.apache.lucene.index.IndexWriter.getMaxFieldLength()
use LimitTokenCountAnalyzer to limit number of tokens. |
org.apache.lucene.index.IndexWriter.getMaxMergeDocs()
use LogMergePolicy.getMaxMergeDocs() directly. |
org.apache.lucene.index.IndexWriter.getMergedSegmentWarmer()
use IndexWriterConfig.getMergedSegmentWarmer() instead. |
org.apache.lucene.index.IndexWriter.getMergeFactor()
use LogMergePolicy.getMergeFactor() directly. |
org.apache.lucene.index.IndexWriter.getMergePolicy()
use IndexWriterConfig.getMergePolicy() instead |
org.apache.lucene.index.IndexWriter.getMergeScheduler()
use IndexWriterConfig.getMergeScheduler() instead |
org.apache.lucene.search.Similarity.getNormDecoder()
Use instance methods for encoding/decoding norm values to enable customization. |
org.apache.lucene.index.IndexWriter.getRAMBufferSizeMB()
use IndexWriterConfig.getRAMBufferSizeMB() instead. |
org.apache.lucene.index.IndexWriter.getReader()
Please use IndexReader.open(IndexWriter,boolean) instead. |
org.apache.lucene.index.IndexWriter.getReader(int)
Please use IndexReader.open(IndexWriter,boolean) instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int). You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int) and use
IndexWriter.getReader(). |
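A sketch of the suggested pattern, assuming a Directory dir and Analyzer analyzer (the divisor value is illustrative):
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_33, analyzer);
    conf.setReaderTermsIndexDivisor(2); // set the divisor before opening the writer
    IndexWriter writer = new IndexWriter(dir, conf);
    IndexReader nrtReader = IndexReader.open(writer, true); // true = apply deletes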
org.apache.lucene.index.IndexWriter.getReaderTermsIndexDivisor()
use IndexWriterConfig.getReaderTermsIndexDivisor() instead. |
org.apache.lucene.index.IndexWriter.getSimilarity()
use IndexWriterConfig.getSimilarity() instead |
org.apache.lucene.search.Scorer.getSimilarity()
Store any Similarity you might need privately in your implementation instead. |
org.apache.lucene.search.Query.getSimilarity(Searcher)
Instead of "runtime" subclassing/delegation, subclass the Weight. |
org.apache.lucene.queryParser.core.nodes.QueryNode.getTag(CharSequence)
use QueryNode.getTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.getTag(CharSequence)
use QueryNodeImpl.getTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNode.getTags()
use QueryNode.getTagMap() |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.getTags()
use QueryNodeImpl.getTagMap() instead |
org.apache.lucene.index.IndexWriter.getTermIndexInterval()
use IndexWriterConfig.getTermIndexInterval() |
org.apache.lucene.index.IndexWriter.getUseCompoundFile()
use LogMergePolicy.getUseCompoundFile() |
org.apache.lucene.index.IndexWriter.getWriteLockTimeout()
use IndexWriterConfig.getWriteLockTimeout() |
org.tartarus.snowball.SnowballProgram.insert(int, int, String)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.insert(int, int, StringBuilder)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.standard.ClassicTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value |
org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value |
org.apache.lucene.analysis.CharTokenizer.isTokenChar(char)
use CharTokenizer.isTokenChar(int) instead. This method will be
removed in Lucene 4.0. |
org.apache.lucene.search.Similarity.lengthNorm(String, int)
Please override computeNorm instead |
org.apache.lucene.analysis.cz.CzechAnalyzer.loadStopWords(InputStream, String)
use WordlistLoader.getWordSet(Reader, String)
and CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
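A migration sketch, assuming an InputStream stream of stop words and "#" as the comment marker (both illustrative), inside a method that declares IOException:
    Set<String> stopWords = WordlistLoader.getWordSet(
        new InputStreamReader(stream, "UTF-8"), "#");
    CzechAnalyzer analyzer = new CzechAnalyzer(Version.LUCENE_33, stopWords);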
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(StringBuilder, int[], String[], FieldFragList.WeightedFragInfo)
|
org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>)
use StopFilter.makeStopSet(Version, List) instead |
org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>, boolean)
use StopFilter.makeStopSet(Version, List, boolean) instead |
org.apache.lucene.analysis.StopFilter.makeStopSet(String...)
use StopFilter.makeStopSet(Version, String...) instead |
org.apache.lucene.analysis.StopFilter.makeStopSet(String[], boolean)
use StopFilter.makeStopSet(Version, String[], boolean) instead |
org.apache.lucene.analysis.CharTokenizer.normalize(char)
use CharTokenizer.normalize(int) instead. This method will be
removed in Lucene 4.0. |
org.apache.lucene.index.SegmentInfos.range(int, int)
use asList().subList(first, last)
instead. |
org.apache.lucene.store.IndexInput.readChars(char[], int, int)
please use readString or readBytes
instead, and construct the string
from those UTF-8 bytes |
org.tartarus.snowball.SnowballProgram.replace_s(int, int, String)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.resizeTermBuffer(int)
|
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[])
use ReverseStringFilter.reverse(Version, char[]) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int)
use ReverseStringFilter.reverse(Version, char[], int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int, int)
use ReverseStringFilter.reverse(Version, char[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(String)
use ReverseStringFilter.reverse(Version, String) instead. This method
will be removed in Lucene 4.0 |
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Set<?>)
use ElisionFilter.setArticles(Version, Set) instead |
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Version, Set<?>)
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.queryParser.core.builders.QueryTreeBuilder.setBuilder(CharSequence, QueryBuilder)
use QueryTreeBuilder.setBuilder(String, QueryBuilder) instead |
org.apache.lucene.index.IndexWriter.setDefaultWriteLockTimeout(long)
use IndexWriterConfig.setDefaultWriteLockTimeout(long) instead |
org.apache.lucene.analysis.de.GermanStemFilter.setExclusionSet(Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.nl.DutchStemFilter.setExclusionTable(HashSet<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.fr.FrenchStemFilter.setExclusionTable(Map<?, ?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.index.IndexWriter.setMaxBufferedDeleteTerms(int)
use IndexWriterConfig.setMaxBufferedDeleteTerms(int) instead. |
org.apache.lucene.index.IndexWriter.setMaxBufferedDocs(int)
use IndexWriterConfig.setMaxBufferedDocs(int) instead. |
org.apache.lucene.index.IndexWriter.setMaxFieldLength(int)
use LimitTokenCountAnalyzer instead. Note that the
behavior changed slightly: the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields, though. |
org.apache.lucene.index.IndexWriter.setMaxMergeDocs(int)
use LogMergePolicy.setMaxMergeDocs(int) directly. |
org.apache.lucene.index.IndexWriter.setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer)
use
IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer)
instead. |
org.apache.lucene.index.IndexWriter.setMergeFactor(int)
use LogMergePolicy.setMergeFactor(int) directly. |
org.apache.lucene.index.IndexWriter.setMergePolicy(MergePolicy)
use IndexWriterConfig.setMergePolicy(MergePolicy) instead. |
org.apache.lucene.index.IndexWriter.setMergeScheduler(MergeScheduler)
use IndexWriterConfig.setMergeScheduler(MergeScheduler) instead |
org.apache.lucene.index.IndexReader.setNorm(int, String, float)
Use IndexReader.setNorm(int, String, byte) instead, encoding the
float to byte with your Similarity's Similarity.encodeNormValue(float).
This method will be removed in Lucene 4.0 |
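For example, encoding a float boost to the byte norm with your Similarity (docID and the field name are illustrative; the reader must be opened read/write):
    Similarity sim = Similarity.getDefault();
    reader.setNorm(docID, "title", sim.encodeNormValue(0.5f));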
org.apache.lucene.index.IndexWriter.setRAMBufferSizeMB(double)
use IndexWriterConfig.setRAMBufferSizeMB(double) instead. |
org.apache.lucene.index.IndexWriter.setReaderTermsIndexDivisor(int)
use IndexWriterConfig.setReaderTermsIndexDivisor(int) instead. |
org.apache.lucene.analysis.standard.ClassicTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value.
See https://issues.apache.org/jira/browse/LUCENE-1068 |
org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value.
See https://issues.apache.org/jira/browse/LUCENE-1068 |
org.apache.lucene.index.IndexWriter.setSimilarity(Similarity)
use IndexWriterConfig.setSimilarity(Similarity) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(File)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(File)
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(File)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(File)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(HashSet<?>)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(Map<?, ?>)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(Map<?, ?>)
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(Map<?, ?>)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(String...)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(String...)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(String...)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(String[])
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.queryParser.core.nodes.QueryNode.setTag(CharSequence, Object)
use QueryNode.setTag(String, Object) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.setTag(CharSequence, Object)
use QueryNodeImpl.setTag(String, Object) instead |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(char[], int, int)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String, int, int)
|
org.apache.lucene.index.IndexWriter.setTermIndexInterval(int)
use IndexWriterConfig.setTermIndexInterval(int) |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermLength(int)
|
org.apache.lucene.index.ConcurrentMergeScheduler.setTestMode()
remove all this test mode code in Lucene 3.2! |
org.apache.lucene.index.IndexWriter.setUseCompoundFile(boolean)
use LogMergePolicy.setUseCompoundFile(boolean). |
org.apache.lucene.index.IndexWriter.setWriteLockTimeout(long)
use IndexWriterConfig.setWriteLockTimeout(long) instead |
org.apache.lucene.store.IndexInput.skipChars(int)
this method operates on old "modified UTF-8" encoded
strings |
org.tartarus.snowball.SnowballProgram.slice_from(String)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.slice_from(StringBuilder)
Kept for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.CharArraySet.stringIterator()
Use CharArraySet.iterator(), which returns char[] instances. |
org.apache.lucene.store.Directory.sync(String)
use Directory.sync(Collection) instead.
For easy migration you can change your code to call
sync(Collections.singleton(name)) |
org.apache.lucene.store.FSDirectory.sync(String)
|
org.apache.lucene.store.FileSwitchDirectory.sync(String)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.term()
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termBuffer()
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termLength()
|
org.apache.lucene.store.NRTCachingDirectory.touchFile(String)
|
org.apache.lucene.store.Directory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0. |
org.apache.lucene.store.FSDirectory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0. |
org.apache.lucene.store.RAMDirectory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0. |
org.apache.lucene.store.FileSwitchDirectory.touchFile(String)
|
org.apache.lucene.queryParser.core.nodes.QueryNode.unsetTag(CharSequence)
use QueryNode.unsetTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.unsetTag(CharSequence)
use QueryNodeImpl.unsetTag(String) |
org.apache.lucene.store.IndexOutput.writeChars(char[], int, int)
please pre-convert to UTF-8 bytes instead or use IndexOutput.writeString(java.lang.String) |
org.apache.lucene.store.IndexOutput.writeChars(String, int, int)
please pre-convert to UTF-8 bytes
instead or use IndexOutput.writeString(java.lang.String) |
|
Deprecated Constructors |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, File)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, Hashtable<?, ?>)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, String...)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(AttributeSource.AttributeFactory, Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(AttributeSource, Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |
org.apache.lucene.util.ArrayUtil()
This constructor was not intended to be public and should not be used.
This class contains only static utility methods.
It will be made private in Lucene 4.0 |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, File)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, Map<?, ?>)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, String...)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.CharArraySet(Collection<?>, boolean)
use CharArraySet.CharArraySet(Version, Collection, boolean) instead |
org.apache.lucene.analysis.CharArraySet(int, boolean)
use CharArraySet.CharArraySet(Version, int, boolean) instead |
org.apache.lucene.analysis.CharTokenizer(AttributeSource.AttributeFactory, Reader)
use CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.analysis.CharTokenizer(AttributeSource, Reader)
use CharTokenizer.CharTokenizer(Version, AttributeSource, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.analysis.CharTokenizer(Reader)
use CharTokenizer.CharTokenizer(Version, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.analysis.cjk.CJKAnalyzer(Version, String...)
use CJKAnalyzer.CJKAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>, int, int, int, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[])
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[]) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[], boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[], int, int, int, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], int, int, int, boolean) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, File)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, HashSet<?>)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, String...)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, Set)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, Set, int, int, int, boolean)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set, int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, String[])
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[]) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, String[], int, int, int, boolean)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[], int, int, int, boolean) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, File)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, HashSet<?>)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, String...)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.nl.DutchStemFilter(TokenStream, Set<?>, Map<?, ?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream)
use ElisionFilter.ElisionFilter(Version, TokenStream) instead |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream, Set<?>)
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream, String[])
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.document.Field(String, byte[], Field.Store)
Use Field(String, byte[]) instead |
org.apache.lucene.document.Field(String, byte[], int, int, Field.Store)
Use Field(String, byte[], int, int) instead |
org.apache.lucene.queryParser.core.config.FieldConfig(CharSequence)
use FieldConfig.FieldConfig(String) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(Version, File)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(Version, String...)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fr.FrenchStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, File)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, Map<?, ?>)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, String...)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.el.GreekAnalyzer(Version, Map<?, ?>)
use GreekAnalyzer.GreekAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.el.GreekAnalyzer(Version, String...)
use GreekAnalyzer.GreekAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.el.GreekLowerCaseFilter(TokenStream)
Use GreekLowerCaseFilter.GreekLowerCaseFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, Set<?>)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, Set) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, Set<?>, int, int, int, boolean)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, Set, int, int, int, boolean) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, String[])
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[]) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, String[], int, int, int, boolean)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[], int, int, int, boolean) instead. |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexDeletionPolicy, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength, IndexCommit)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
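All of these map onto the same two-line replacement; a sketch assuming a Directory dir and Analyzer analyzer:
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_33, analyzer)
        .setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
    IndexWriter writer = new IndexWriter(dir, conf);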
org.apache.lucene.analysis.LengthFilter(TokenStream, int, int)
Use LengthFilter.LengthFilter(boolean, TokenStream, int, int) instead. |
org.apache.lucene.analysis.LetterTokenizer(AttributeSource.AttributeFactory, Reader)
use LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LetterTokenizer(AttributeSource, Reader)
use LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader) instead.
This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LetterTokenizer(Reader)
use LetterTokenizer.LetterTokenizer(Version, Reader) instead. This
will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LowerCaseFilter(TokenStream)
Use LowerCaseFilter.LowerCaseFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource.AttributeFactory, Reader)
use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource, Reader)
use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LowerCaseTokenizer(Reader)
use LowerCaseTokenizer.LowerCaseTokenizer(Version, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.store.NoLockFactory()
This constructor was not intended to be public and should not be used.
It will be made private in Lucene 4.0 |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, File)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, Hashtable<?, ?>)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, String...)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.reverse.ReverseStringFilter(TokenStream)
use ReverseStringFilter.ReverseStringFilter(Version, TokenStream)
instead. This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter(TokenStream, char)
use ReverseStringFilter.ReverseStringFilter(Version, TokenStream, char)
instead. This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.ru.RussianAnalyzer(Version, Map<?, ?>)
use RussianAnalyzer.RussianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ru.RussianAnalyzer(Version, String...)
use RussianAnalyzer.RussianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(AttributeSource.AttributeFactory, Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(AttributeSource, Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |
org.apache.lucene.search.Scorer(Similarity)
Use Scorer.Scorer(Weight) instead. |
org.apache.lucene.search.Scorer(Similarity, Weight)
Use Scorer.Scorer(Weight) instead. |
org.apache.lucene.analysis.SimpleAnalyzer()
use SimpleAnalyzer.SimpleAnalyzer(Version) instead |
org.apache.lucene.analysis.snowball.SnowballAnalyzer(Version, String, String[])
Use SnowballAnalyzer.SnowballAnalyzer(Version, String, Set) instead. |
org.apache.lucene.analysis.standard.StandardFilter(TokenStream)
Use StandardFilter.StandardFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set<?>)
use StopFilter.StopFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set<?>, boolean)
use StopFilter.StopFilter(Version, TokenStream, Set, boolean) instead |
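A sketch of the Version-based form, assuming a Reader reader (the stop words are illustrative):
    Set<?> stopWords = StopFilter.makeStopSet(Version.LUCENE_33, "a", "an", "the");
    TokenStream ts = new StopFilter(Version.LUCENE_33,
        new WhitespaceTokenizer(Version.LUCENE_33, reader), stopWords);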
org.apache.lucene.search.highlight.TextFragment(StringBuffer, int, int)
Use TextFragment.TextFragment(CharSequence, int, int) instead.
This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.th.ThaiWordFilter(TokenStream)
Use the constructor that takes a matchVersion (Version) argument instead. |
org.apache.lucene.analysis.WhitespaceAnalyzer()
use WhitespaceAnalyzer.WhitespaceAnalyzer(Version) instead |
org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource.AttributeFactory, Reader)
use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource, Reader)
use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.WhitespaceTokenizer(Reader)
use WhitespaceTokenizer.WhitespaceTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |