Deprecated Classes |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer
Deprecated since 3.1; use StandardTokenizer instead. |
org.apache.lucene.analysis.cn.ChineseAnalyzer
Use StandardAnalyzer instead, which has the same functionality.
This analyzer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cn.ChineseFilter
Use StopFilter instead, which has the same functionality.
This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cn.ChineseTokenizer
Use StandardTokenizer instead, which has the same functionality.
This tokenizer will be removed in Lucene 5.0 |
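As a hedged illustration of the migration suggested for the three Chinese classes above (the wrapper class name and the Version constant are illustrative, not part of this listing):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.util.Version;

    public class ChineseAnalyzerMigration {
        // StandardAnalyzer stands in for the deprecated ChineseAnalyzer; pick the
        // Version constant that matches the index you are working against.
        public static Analyzer newAnalyzer() {
            return new StandardAnalyzer(Version.LUCENE_31);
        }
    }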
org.apache.lucene.analysis.nl.DutchStemFilter
Use SnowballFilter with
DutchStemmer instead, which has the
same functionality. This filter will be removed in Lucene 5.0 |
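A minimal sketch of the SnowballFilter replacement named above; the helper class and method are illustrative only:

    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.snowball.SnowballFilter;
    import org.tartarus.snowball.ext.DutchStemmer;

    public class DutchStemMigration {
        // SnowballFilter wrapping the Snowball Dutch stemmer replaces the deprecated
        // DutchStemFilter; 'input' is whatever TokenStream precedes it in the chain.
        public static TokenStream stem(TokenStream input) {
            return new SnowballFilter(input, new DutchStemmer());
        }
    }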
org.apache.lucene.analysis.nl.DutchStemmer
Use org.tartarus.snowball.ext.DutchStemmer instead,
which has the same functionality. This class will be removed in Lucene 5.0 |
org.apache.lucene.analysis.fr.FrenchStemFilter
Use SnowballFilter with
FrenchStemmer instead, which has the
same functionality. This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.fr.FrenchStemmer
Use org.tartarus.snowball.ext.FrenchStemmer instead,
which has the same functionality. This class will be removed in Lucene 5.0 |
org.apache.lucene.analysis.ru.RussianLetterTokenizer
Use StandardTokenizer instead, which has the same functionality.
This tokenizer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.ru.RussianLowerCaseFilter
Use LowerCaseFilter instead, which has the same
functionality. This filter will be removed in Lucene 4.0 |
org.apache.lucene.analysis.ru.RussianStemFilter
Use SnowballFilter with
RussianStemmer instead, which has the
same functionality. This filter will be removed in Lucene 4.0 |
org.apache.lucene.analysis.shingle.ShingleMatrixFilter
Will be removed in Lucene 4.0. This filter is unmaintained and might not behave
correctly if used with custom Attributes, i.e. Attributes other than
the ones located in org.apache.lucene.analysis.tokenattributes. It also uses
hardcoded payload encoders which makes it not easily adaptable to other use-cases. |
org.apache.lucene.analysis.snowball.SnowballAnalyzer
Use the language-specific analyzer in contrib/analyzers instead.
This analyzer will be removed in Lucene 5.0 |
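A hedged sketch of switching from SnowballAnalyzer to a language-specific analyzer; EnglishAnalyzer is just one possible choice from contrib/analyzers, and the Version constant is illustrative:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.en.EnglishAnalyzer;
    import org.apache.lucene.util.Version;

    public class SnowballAnalyzerMigration {
        // One language-specific analyzer standing in for SnowballAnalyzer(Version,
        // "English", ...); other languages have their own analyzers (FrenchAnalyzer,
        // DutchAnalyzer, and so on).
        public static Analyzer english() {
            return new EnglishAnalyzer(Version.LUCENE_31);
        }
    }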
|
Deprecated Methods |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, float)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, float) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, int)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, int) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, String, float)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, Collection, float) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, String, int)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, Collection, int) |
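A hedged sketch of computing the stop words at construction time, as the notes above recommend; the WhitespaceAnalyzer delegate and the Version constant are arbitrary choices for illustration:

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.util.Version;

    public class AutoStopWordConstruction {
        // Stop words are derived from 'reader' when the analyzer is built, so no
        // addStopWords(...) calls are needed afterwards.
        public static QueryAutoStopWordAnalyzer build(IndexReader reader) throws IOException {
            Analyzer delegate = new WhitespaceAnalyzer(Version.LUCENE_31);
            return new QueryAutoStopWordAnalyzer(Version.LUCENE_31, delegate, reader);
        }
    }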
org.tartarus.snowball.SnowballProgram.eq_s_b(int, String)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_s(int, String)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_v_b(StringBuilder)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_v(StringBuilder)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter.getHyphenationTree(Reader)
Don't use Readers with a fixed charset to load XML files unless they were created programmatically.
Use HyphenationCompoundWordTokenFilter.getHyphenationTree(InputSource) instead, where you can supply a default charset and an input
stream, if you like. |
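A hedged sketch of loading a hyphenation grammar through an InputSource, as suggested above; the file-based system id is only one way to build the InputSource:

    import java.io.File;
    import org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter;
    import org.apache.lucene.analysis.compound.hyphenation.HyphenationTree;
    import org.xml.sax.InputSource;

    public class HyphenationGrammarLoading {
        // The InputSource carries a system id (a file URL), so the XML parser can
        // resolve the encoding itself instead of a Reader forcing one.
        public static HyphenationTree load(File grammarFile) throws Exception {
            InputSource source = new InputSource(grammarFile.toURI().toASCIIString());
            return HyphenationCompoundWordTokenFilter.getHyphenationTree(source);
        }
    }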
org.tartarus.snowball.SnowballProgram.insert(int, int, String)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.insert(int, int, StringBuilder)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.cz.CzechAnalyzer.loadStopWords(InputStream, String)
use WordlistLoader.getWordSet(Reader, String, Version)
and CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
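A hedged sketch of the CzechAnalyzer migration above, assuming the WordlistLoader.getWordSet(Reader, String, Version) overload referenced in the note; the "#" comment marker is an assumption about the word-list format:

    import java.io.Reader;
    import java.util.Set;
    import org.apache.lucene.analysis.WordlistLoader;
    import org.apache.lucene.analysis.cz.CzechAnalyzer;
    import org.apache.lucene.util.Version;

    public class CzechStopWordLoading {
        // The stop words are read once up front and handed to the constructor,
        // replacing the deprecated loadStopWords(InputStream, String) call.
        public static CzechAnalyzer build(Reader stopWordList) throws Exception {
            Set<?> stopWords = WordlistLoader.getWordSet(stopWordList, "#", Version.LUCENE_31);
            return new CzechAnalyzer(Version.LUCENE_31, stopWords);
        }
    }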
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase.makeDictionary(Version, String[])
Only available for backwards compatibility. |
org.tartarus.snowball.SnowballProgram.replace_s(int, int, String)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[])
use ReverseStringFilter.reverse(Version, char[]) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int)
use ReverseStringFilter.reverse(Version, char[], int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int, int)
use ReverseStringFilter.reverse(Version, char[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(String)
use ReverseStringFilter.reverse(Version, String) instead. This method
will be removed in Lucene 4.0 |
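A brief hedged example of the Version-taking reverse(...) overloads recommended above; the Version constant is illustrative:

    import org.apache.lucene.analysis.reverse.ReverseStringFilter;
    import org.apache.lucene.util.Version;

    public class ReverseWithVersion {
        // The Version argument affects how supplementary (surrogate-pair) characters
        // are handled; use the constant your index was built with.
        public static String reverse(String input) {
            return ReverseStringFilter.reverse(Version.LUCENE_31, input);
        }
    }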
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Set<?>)
use ElisionFilter.setArticles(Version, Set) instead |
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Version, Set<?>)
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.de.GermanStemFilter.setExclusionSet(Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.nl.DutchStemFilter.setExclusionTable(HashSet<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.fr.FrenchStemFilter.setExclusionTable(Map<?, ?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
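A hedged sketch of the KeywordMarkerFilter approach recommended above, using GermanStemFilter as an example stemmer; the helper method and the choice of stemmer are illustrative:

    import java.util.Set;
    import org.apache.lucene.analysis.KeywordMarkerFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.de.GermanStemFilter;

    public class StemExclusionMigration {
        // Tokens contained in 'protectedTerms' get their KeywordAttribute set by
        // KeywordMarkerFilter, and the stemmer then leaves them untouched.
        public static TokenStream stemWithExclusions(TokenStream input, Set<?> protectedTerms) {
            return new GermanStemFilter(new KeywordMarkerFilter(input, protectedTerms));
        }
    }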
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setMaxShingleSize(int)
Setting maxShingleSize after Analyzer instantiation prevents reuse.
Configure maxShingleSize during construction. |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setMinShingleSize(int)
Setting minShingleSize after Analyzer instantiation prevents reuse.
Configure minShingleSize during construction. |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setOutputUnigrams(boolean)
Setting outputUnigrams after Analyzer instantiation prevents reuse.
Configure outputUnigrams during construction. |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setOutputUnigramsIfNoShingles(boolean)
Setting outputUnigramsIfNoShingles after Analyzer instantiation prevents reuse.
Configure outputUnigramsIfNoShingles during construction. |
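A hedged sketch of configuring shingle sizes at construction instead of through the deprecated setters; the three-argument constructor (delegate analyzer, min and max shingle size) and the WhitespaceAnalyzer delegate are assumptions, not taken from this listing:

    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper;
    import org.apache.lucene.util.Version;

    public class ShingleConstruction {
        // Both shingle sizes are fixed at construction (assumed constructor), so the
        // wrapper's token streams stay reusable; no set*() calls are made afterwards.
        public static ShingleAnalyzerWrapper bigramsAndTrigrams() {
            return new ShingleAnalyzerWrapper(new WhitespaceAnalyzer(Version.LUCENE_31), 2, 3);
        }
    }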
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemDictionary(File)
This prevents reuse of TokenStreams. If you wish to use a custom
stem dictionary, create your own Analyzer with StemmerOverrideFilter |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(File)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(File)
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(File)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(File)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(HashSet<?>)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(Map<?, ?>)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(Map<?, ?>)
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(Map<?, ?>)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(String...)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(String...)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(String...)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(String[])
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setTokenSeparator(String)
Setting tokenSeparator after Analyzer instantiation prevents reuse.
Configure tokenSeparator during construction. |
org.tartarus.snowball.SnowballProgram.slice_from(String)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.slice_from(StringBuilder)
Retained only for binary backwards compatibility. Will be removed in Lucene 4.0 |
|
Deprecated Constructors |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, File)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, Hashtable<?, ?>)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, String...)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
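A hedged sketch of the Set-based constructor recommended for ArabicAnalyzer (the same pattern applies to the other analyzers below); the helper method is illustrative:

    import java.util.Set;
    import org.apache.lucene.analysis.ar.ArabicAnalyzer;
    import org.apache.lucene.util.Version;

    public class ArabicStopWordConstruction {
        // The stop words are supplied as a Set at construction; the File, Hashtable
        // and String... variants are no longer needed.
        public static ArabicAnalyzer build(Set<String> stopWords) {
            return new ArabicAnalyzer(Version.LUCENE_31, stopWords);
        }
    }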
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(AttributeSource.AttributeFactory, Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(AttributeSource, Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, File)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, Map<?, ?>)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, String...)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.cjk.CJKAnalyzer(Version, String...)
use CJKAnalyzer.CJKAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>, int, int, int, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[])
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[]) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[], boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[], int, int, int, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], int, int, int, boolean) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, File)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, HashSet<?>)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, String...)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, Set)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, Set, int, int, int, boolean)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set, int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, String[])
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[]) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, String[], int, int, int, boolean)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[], int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[])
Use the constructors taking Set |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[], int, int, int, boolean)
Use the constructors taking Set |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, File)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, HashSet<?>)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, String...)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.nl.DutchStemFilter(TokenStream, Set<?>, Map<?, ?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream)
use ElisionFilter.ElisionFilter(Version, TokenStream) instead |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream, Set<?>)
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream, String[])
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(Version, File)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(Version, String...)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fr.FrenchStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, File)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, Map<?, ?>)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, String...)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.el.GreekAnalyzer(Version, Map<?, ?>)
use GreekAnalyzer.GreekAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.el.GreekAnalyzer(Version, String...)
use GreekAnalyzer.GreekAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.el.GreekLowerCaseFilter(TokenStream)
Use GreekLowerCaseFilter.GreekLowerCaseFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, Set<?>)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, Set) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, Set<?>, int, int, int, boolean)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, Set, int, int, int, boolean) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, String[])
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[]) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, String[], int, int, int, boolean)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[], int, int, int, boolean) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[])
Use the constructors taking Set |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[], int, int, int, boolean)
Use the constructors taking Set |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, File)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, Hashtable<?, ?>)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, String...)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer(Version, Analyzer)
Stopwords should be calculated at instantiation using one of the other constructors |
org.apache.lucene.analysis.reverse.ReverseStringFilter(TokenStream)
use ReverseStringFilter.ReverseStringFilter(Version, TokenStream)
instead. This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter(TokenStream, char)
use ReverseStringFilter.ReverseStringFilter(Version, TokenStream, char)
instead. This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.ru.RussianAnalyzer(Version, Map<?, ?>)
use RussianAnalyzer.RussianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ru.RussianAnalyzer(Version, String...)
use RussianAnalyzer.RussianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(AttributeSource.AttributeFactory, Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(AttributeSource, Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |
org.apache.lucene.analysis.snowball.SnowballAnalyzer(Version, String, String[])
Use SnowballAnalyzer.SnowballAnalyzer(Version, String, Set) instead. |
org.apache.lucene.analysis.th.ThaiWordFilter(TokenStream)
Use the constructor that takes a matchVersion argument instead. |
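A hedged sketch of the matchVersion constructor mentioned above; the exact parameter order (Version first, then the TokenStream) is an assumption, not taken from this listing:

    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.th.ThaiWordFilter;
    import org.apache.lucene.util.Version;

    public class ThaiWordFilterConstruction {
        // Assumed signature: ThaiWordFilter(Version, TokenStream). The Version constant
        // is illustrative; use the one matching your index.
        public static TokenStream wrap(TokenStream input) {
            return new ThaiWordFilter(Version.LUCENE_31, input);
        }
    }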