Package | Description |
---|---|
org.apache.lucene.analysis | API and code to convert text into indexable/searchable tokens. |
org.apache.lucene.analysis.ar | Analyzer for Arabic. |
org.apache.lucene.analysis.bg | Analyzer for Bulgarian. |
org.apache.lucene.analysis.br | Analyzer for Brazilian Portuguese. |
org.apache.lucene.analysis.ca | Analyzer for Catalan. |
org.apache.lucene.analysis.cjk | Analyzer for Chinese, Japanese, and Korean, which indexes bigrams (overlapping groups of two adjacent Han characters). |
org.apache.lucene.analysis.cn | Analyzer for Chinese, which indexes unigrams (individual Chinese characters). |
org.apache.lucene.analysis.cn.smart | Analyzer for Simplified Chinese, which indexes words. |
org.apache.lucene.analysis.cz | Analyzer for Czech. |
org.apache.lucene.analysis.da | Analyzer for Danish. |
org.apache.lucene.analysis.de | Analyzer for German. |
org.apache.lucene.analysis.el | Analyzer for Greek. |
org.apache.lucene.analysis.en | Analyzer for English. |
org.apache.lucene.analysis.es | Analyzer for Spanish. |
org.apache.lucene.analysis.eu | Analyzer for Basque. |
org.apache.lucene.analysis.fa | Analyzer for Persian. |
org.apache.lucene.analysis.fi | Analyzer for Finnish. |
org.apache.lucene.analysis.fr | Analyzer for French. |
org.apache.lucene.analysis.ga | Analyzer for Irish. |
org.apache.lucene.analysis.gl | Analyzer for Galician. |
org.apache.lucene.analysis.hi | Analyzer for Hindi. |
org.apache.lucene.analysis.hu | Analyzer for Hungarian. |
org.apache.lucene.analysis.hy | Analyzer for Armenian. |
org.apache.lucene.analysis.id | Analyzer for Indonesian. |
org.apache.lucene.analysis.it | Analyzer for Italian. |
org.apache.lucene.analysis.ja | Analyzer for Japanese. |
org.apache.lucene.analysis.lv | Analyzer for Latvian. |
org.apache.lucene.analysis.miscellaneous | Miscellaneous TokenStreams. |
org.apache.lucene.analysis.nl | Analyzer for Dutch. |
org.apache.lucene.analysis.no | Analyzer for Norwegian. |
org.apache.lucene.analysis.pl | Analyzer for Polish. |
org.apache.lucene.analysis.pt | Analyzer for Portuguese. |
org.apache.lucene.analysis.query | Automatically filters high-frequency stopwords. |
org.apache.lucene.analysis.ro | Analyzer for Romanian. |
org.apache.lucene.analysis.ru | Analyzer for Russian. |
org.apache.lucene.analysis.shingle | Word n-gram filters. |
org.apache.lucene.analysis.snowball | TokenFilter and Analyzer implementations that use Snowball stemmers. |
org.apache.lucene.analysis.standard | Standards-based analyzers implemented with JFlex. |
org.apache.lucene.analysis.sv | Analyzer for Swedish. |
org.apache.lucene.analysis.synonym | Analysis components for synonyms. |
org.apache.lucene.analysis.th | Analyzer for Thai. |
org.apache.lucene.analysis.tr | Analyzer for Turkish. |
org.apache.lucene.benchmark.byTask | Benchmarking Lucene by tasks. |
org.apache.lucene.benchmark.byTask.tasks | Extendable benchmark tasks. |
org.apache.lucene.collation | CollationKeyFilter converts each token into its binary CollationKey using the provided Collator, then encodes the CollationKey as a String using IndexableBinaryStringTools so that it can be stored as an index term. |
org.apache.lucene.index | Code to maintain and access indices. |
org.apache.lucene.index.memory | High-performance single-document in-memory Apache Lucene full-text search index. |
org.apache.lucene.queryParser | A simple query parser implemented with JavaCC. |
org.apache.lucene.queryParser.analyzing | QueryParser that passes Fuzzy-, Prefix-, Range-, and WildcardQuerys through the given analyzer. |
org.apache.lucene.queryParser.complexPhrase | QueryParser that permits complex phrase query syntax, e.g. "(john jon jonathan~) peters*". |
org.apache.lucene.queryParser.ext | Extendable QueryParser that provides a simple and flexible extension mechanism by overloading query field names. |
org.apache.lucene.queryParser.precedence | Implementation of the Precedence Query Parser. |
org.apache.lucene.queryParser.standard | Implementation of the Lucene query parser built on the flexible query parser framework. |
org.apache.lucene.queryParser.standard.config | Standard Lucene query configuration. |
org.apache.lucene.search | Code to search indices. |
org.apache.lucene.search.highlight | Classes that provide "keyword in context" features, typically used to highlight search terms in the text of result pages. |
org.apache.lucene.search.similar | Document similarity query generators. |
org.apache.lucene.store.instantiated | InstantiatedIndex, an alternative RAM store for small corpora. |
org.apache.lucene.util | Some utility classes. |
org.apache.lucene.xmlparser | Parser that produces Lucene Query objects from XML streams. |
org.apache.lucene.xmlparser.builders | Builders to support various Lucene queries. |
Modifier and Type | Class and Description |
---|---|
class | KeywordAnalyzer: "tokenizes" the entire stream as a single token. |
class | LimitTokenCountAnalyzer: an Analyzer that limits the number of tokens while indexing. |
class | MockAnalyzer: an Analyzer for testing. |
class | PerFieldAnalyzerWrapper: an analyzer used to facilitate scenarios where different fields require different analysis techniques. |
class | ReusableAnalyzerBase: a convenience subclass of Analyzer that makes it easy to implement TokenStream reuse. |
class | SimpleAnalyzer |
class | StopAnalyzer |
class | StopwordAnalyzerBase: base class for Analyzers that need to make use of stopword sets. |
class | WhitespaceAnalyzer: an Analyzer that uses WhitespaceTokenizer. |
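Any of the analyzers above can be exercised directly by pulling a TokenStream from it and iterating the tokens. A minimal sketch, assuming Lucene 3.x on the classpath; the field name "body" and the sample text are illustrative only:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenDump {
    public static void main(String[] args) throws Exception {
        // WhitespaceAnalyzer splits on whitespace only; swapping in
        // KeywordAnalyzer would return the entire input as one token.
        Analyzer analyzer = new WhitespaceAnalyzer(Version.LUCENE_36);
        TokenStream ts = analyzer.tokenStream("body", new StringReader("Hello Lucene world"));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        while (ts.incrementToken()) {
            System.out.println(term.toString());
        }
        ts.close();
    }
}
```

The attribute-based loop (addAttribute, then incrementToken until exhaustion) is the standard consumer pattern for any TokenStream in this API generation.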
Modifier and Type | Method and Description |
---|---|
void | PerFieldAnalyzerWrapper.addAnalyzer(String fieldName, Analyzer analyzer): Deprecated. Changing the Analyzer for a field after instantiation prevents reusability; analyzers for fields should be set during construction. |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output) |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output, int[] posIncrements) |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets) |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets, int[] posIncrements) |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements) |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements, int[] posLengths) |
static void | BaseTokenStreamTestCase.assertAnalyzesTo(Analyzer a, String input, String[] output, String[] types) |
static void | BaseTokenStreamTestCase.assertAnalyzesToPositions(Analyzer a, String input, String[] output, int[] posIncrements, int[] posLengths) |
static void | BaseTokenStreamTestCase.assertAnalyzesToReuse(Analyzer a, String input, String[] output) |
static void | BaseTokenStreamTestCase.assertAnalyzesToReuse(Analyzer a, String input, String[] output, int[] posIncrements) |
static void | BaseTokenStreamTestCase.assertAnalyzesToReuse(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets) |
static void | BaseTokenStreamTestCase.assertAnalyzesToReuse(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets, int[] posIncrements) |
static void | BaseTokenStreamTestCase.assertAnalyzesToReuse(Analyzer a, String input, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements) |
static void | BaseTokenStreamTestCase.assertAnalyzesToReuse(Analyzer a, String input, String[] output, String[] types) |
void | CollationTestBase.assertThreadSafe(Analyzer analyzer) |
static void | VocabularyAssert.assertVocabulary(Analyzer a, File zipFile, String vocOut): runs a vocabulary test against a tab-separated data file inside a zip file. |
static void | VocabularyAssert.assertVocabulary(Analyzer a, File zipFile, String voc, String out): runs a vocabulary test against two data files inside a zip file. |
static void | VocabularyAssert.assertVocabulary(Analyzer a, InputStream vocOut): runs a vocabulary test against one tab-separated file. |
static void | VocabularyAssert.assertVocabulary(Analyzer a, InputStream voc, InputStream out): runs a vocabulary test against two data files. |
static void | BaseTokenStreamTestCase.checkAnalysisConsistency(Random random, Analyzer a, boolean useCharFilter, String text) |
static void | BaseTokenStreamTestCase.checkOneTerm(Analyzer a, String input, String expected) |
static void | BaseTokenStreamTestCase.checkOneTermReuse(Analyzer a, String input, String expected) |
static void | BaseTokenStreamTestCase.checkRandomData(Random random, Analyzer a, int iterations): utility method for blasting token streams with data to make sure they don't do anything crazy. |
static void | BaseTokenStreamTestCase.checkRandomData(Random random, Analyzer a, int iterations, boolean simple): utility method for blasting token streams with data to make sure they don't do anything crazy. |
static void | BaseTokenStreamTestCase.checkRandomData(Random random, Analyzer a, int iterations, int maxWordLength): utility method for blasting token streams with data to make sure they don't do anything crazy. |
static void | BaseTokenStreamTestCase.checkRandomData(Random random, Analyzer a, int iterations, int maxWordLength, boolean simple) |
void | CollationTestBase.testCollationKeySort(Analyzer usAnalyzer, Analyzer franceAnalyzer, Analyzer swedenAnalyzer, Analyzer denmarkAnalyzer, String usResult, String frResult, String svResult, String dkResult) |
void | CollationTestBase.testFarsiRangeFilterCollating(Analyzer analyzer, String firstBeg, String firstEnd, String secondBeg, String secondEnd) |
void | CollationTestBase.testFarsiRangeQueryCollating(Analyzer analyzer, String firstBeg, String firstEnd, String secondBeg, String secondEnd) |
void | CollationTestBase.testFarsiTermRangeQuery(Analyzer analyzer, String firstBeg, String firstEnd, String secondBeg, String secondEnd) |
protected String | BaseTokenStreamTestCase.toDot(Analyzer a, String inputText) |
protected void | BaseTokenStreamTestCase.toDotFile(Analyzer a, String inputText, String localFileName) |
Constructor and Description |
---|
LimitTokenCountAnalyzer(Analyzer delegate, int maxTokenCount): builds an analyzer that limits the maximum number of tokens per field. |
PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer): constructs with the default analyzer. |
PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer, Map<String,Analyzer> fieldAnalyzers): constructs with the default analyzer and a map of analyzers to use for specific fields. |

Constructor and Description |
---|
PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer, Map<String,Analyzer> fieldAnalyzers): constructs with the default analyzer and a map of analyzers to use for specific fields. |
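The PerFieldAnalyzerWrapper constructor taking a field map is the usual entry point. A sketch under the Lucene 3.x API; the field names "id" and the choice of analyzers here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class PerFieldExample {
    public static void main(String[] args) {
        // "id" fields should match exactly, so they get KeywordAnalyzer;
        // every other field falls through to the default StandardAnalyzer.
        Map<String, Analyzer> fieldAnalyzers = new HashMap<String, Analyzer>();
        fieldAnalyzers.put("id", new KeywordAnalyzer());
        Analyzer wrapper = new PerFieldAnalyzerWrapper(
                new StandardAnalyzer(Version.LUCENE_36), fieldAnalyzers);
        // Pass "wrapper" both to IndexWriterConfig and to the query parser,
        // so indexing and querying analyze each field the same way.
    }
}
```

Setting the whole map at construction time is what the deprecation note on addAnalyzer is pointing at: a fully-configured wrapper stays reusable.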
Modifier and Type | Class and Description |
---|---|
class | ArabicAnalyzer: Analyzer for Arabic. |

Modifier and Type | Class and Description |
---|---|
class | BulgarianAnalyzer: Analyzer for Bulgarian. |

Modifier and Type | Class and Description |
---|---|
class | BrazilianAnalyzer: Analyzer for the Brazilian Portuguese language. |

Modifier and Type | Class and Description |
---|---|
class | CatalanAnalyzer: Analyzer for Catalan. |

Modifier and Type | Class and Description |
---|---|
class | CJKAnalyzer: an Analyzer that tokenizes text with StandardTokenizer, normalizes content with CJKWidthFilter, folds case with LowerCaseFilter, forms bigrams of CJK with CJKBigramFilter, and filters stopwords with StopFilter. |

Modifier and Type | Class and Description |
---|---|
class | ChineseAnalyzer: Deprecated. Use StandardAnalyzer instead, which has the same functionality. This analyzer will be removed in Lucene 5.0. |

Modifier and Type | Class and Description |
---|---|
class | SmartChineseAnalyzer: an analyzer for Chinese or mixed Chinese-English text. |

Modifier and Type | Class and Description |
---|---|
class | CzechAnalyzer: Analyzer for the Czech language. |

Modifier and Type | Class and Description |
---|---|
class | DanishAnalyzer: Analyzer for Danish. |

Modifier and Type | Class and Description |
---|---|
class | GermanAnalyzer: Analyzer for the German language. |

Modifier and Type | Class and Description |
---|---|
class | GreekAnalyzer: Analyzer for the Greek language. |

Modifier and Type | Class and Description |
---|---|
class | EnglishAnalyzer: Analyzer for English. |

Modifier and Type | Class and Description |
---|---|
class | SpanishAnalyzer: Analyzer for Spanish. |

Modifier and Type | Class and Description |
---|---|
class | BasqueAnalyzer: Analyzer for Basque. |

Modifier and Type | Class and Description |
---|---|
class | PersianAnalyzer: Analyzer for Persian. |

Modifier and Type | Class and Description |
---|---|
class | FinnishAnalyzer: Analyzer for Finnish. |

Modifier and Type | Class and Description |
---|---|
class | FrenchAnalyzer: Analyzer for the French language. |

Modifier and Type | Class and Description |
---|---|
class | IrishAnalyzer: Analyzer for Irish. |

Modifier and Type | Class and Description |
---|---|
class | GalicianAnalyzer: Analyzer for Galician. |

Modifier and Type | Class and Description |
---|---|
class | HindiAnalyzer: Analyzer for Hindi. |

Modifier and Type | Class and Description |
---|---|
class | HungarianAnalyzer: Analyzer for Hungarian. |

Modifier and Type | Class and Description |
---|---|
class | ArmenianAnalyzer: Analyzer for Armenian. |

Modifier and Type | Class and Description |
---|---|
class | IndonesianAnalyzer: Analyzer for Indonesian (Bahasa). |

Modifier and Type | Class and Description |
---|---|
class | ItalianAnalyzer: Analyzer for Italian. |

Modifier and Type | Class and Description |
---|---|
class | JapaneseAnalyzer: Analyzer for Japanese that uses morphological analysis. |

Modifier and Type | Class and Description |
---|---|
class | LatvianAnalyzer: Analyzer for Latvian. |

Modifier and Type | Class and Description |
---|---|
class | PatternAnalyzer: efficient Lucene analyzer/tokenizer that preferably operates on a String rather than a Reader, that can flexibly separate text into terms via a regular expression Pattern (with behaviour identical to String.split(String)), and that combines the functionality of LetterTokenizer, LowerCaseTokenizer, WhitespaceTokenizer, and StopFilter into a single efficient multi-purpose class. |

Modifier and Type | Class and Description |
---|---|
class | DutchAnalyzer: Analyzer for the Dutch language. |

Modifier and Type | Class and Description |
---|---|
class | NorwegianAnalyzer: Analyzer for Norwegian. |

Modifier and Type | Class and Description |
---|---|
class | PolishAnalyzer: Analyzer for Polish. |

Modifier and Type | Class and Description |
---|---|
class | PortugueseAnalyzer: Analyzer for Portuguese. |

Modifier and Type | Class and Description |
---|---|
class | QueryAutoStopWordAnalyzer: an Analyzer used primarily at query time to wrap another analyzer and provide a layer of protection which prevents very common words from being passed into queries. |
Constructor and Description |
---|
QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate): Deprecated. Stopwords should be calculated at instantiation using one of the other constructors. |
QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate, IndexReader indexReader): creates a new QueryAutoStopWordAnalyzer with stopwords calculated for all indexed fields from terms with a document frequency percentage greater than QueryAutoStopWordAnalyzer.defaultMaxDocFreqPercent. |
QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate, IndexReader indexReader, Collection<String> fields, float maxPercentDocs): creates a new QueryAutoStopWordAnalyzer with stopwords calculated for the given selection of fields from terms with a document frequency percentage greater than the given maxPercentDocs. |
QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate, IndexReader indexReader, Collection<String> fields, int maxDocFreq): creates a new QueryAutoStopWordAnalyzer with stopwords calculated for the given selection of fields from terms with a document frequency greater than the given maxDocFreq. |
QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate, IndexReader indexReader, float maxPercentDocs): creates a new QueryAutoStopWordAnalyzer with stopwords calculated for all indexed fields from terms with a document frequency percentage greater than the given maxPercentDocs. |
QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate, IndexReader indexReader, int maxDocFreq): creates a new QueryAutoStopWordAnalyzer with stopwords calculated for all indexed fields from terms with a document frequency greater than the given maxDocFreq. |
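The percentage-based constructor above can be sketched as follows, assuming Lucene 3.x and an existing index; the index path and the 40% threshold are illustrative choices, not recommendations:

```java
import java.io.File;

import org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class AutoStopExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical index location; any existing index works.
        Directory dir = FSDirectory.open(new File("/tmp/index"));
        IndexReader reader = IndexReader.open(dir);
        // Any term appearing in more than 40% of documents, in any indexed
        // field, is treated as a stopword for queries built with this analyzer.
        QueryAutoStopWordAnalyzer analyzer = new QueryAutoStopWordAnalyzer(
                Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36),
                reader, 0.4f);
        // Use "analyzer" at query time only; keep indexing on the plain delegate.
        reader.close();
    }
}
```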
Modifier and Type | Class and Description |
---|---|
class | RomanianAnalyzer: Analyzer for Romanian. |

Modifier and Type | Class and Description |
---|---|
class | RussianAnalyzer: Analyzer for the Russian language. |

Modifier and Type | Class and Description |
---|---|
class | ShingleAnalyzerWrapper: wraps a ShingleFilter around another Analyzer. |

Constructor and Description |
---|
ShingleAnalyzerWrapper(Analyzer defaultAnalyzer) |
ShingleAnalyzerWrapper(Analyzer defaultAnalyzer, int maxShingleSize) |
ShingleAnalyzerWrapper(Analyzer defaultAnalyzer, int minShingleSize, int maxShingleSize) |
ShingleAnalyzerWrapper(Analyzer defaultAnalyzer, int minShingleSize, int maxShingleSize, String tokenSeparator, boolean outputUnigrams, boolean outputUnigramsIfNoShingles): creates a new ShingleAnalyzerWrapper. |
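The min/max-size constructor is the common case. A minimal sketch against the Lucene 3.x API; wrapping StandardAnalyzer here is an arbitrary example choice:

```java
import org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class ShingleExample {
    public static void main(String[] args) {
        // In addition to single terms, the wrapped token stream also emits
        // word bigrams and trigrams ("shingles"), which helps phrase-like
        // matching without full phrase queries.
        ShingleAnalyzerWrapper shingler = new ShingleAnalyzerWrapper(
                new StandardAnalyzer(Version.LUCENE_36), 2, 3);
    }
}
```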
Modifier and Type | Class and Description |
---|---|
class | SnowballAnalyzer: Deprecated. Use the language-specific analyzer in contrib/analyzers instead. This analyzer will be removed in Lucene 5.0. |

Modifier and Type | Class and Description |
---|---|
class | ClassicAnalyzer: filters ClassicTokenizer with ClassicFilter, LowerCaseFilter, and StopFilter, using a list of English stop words. |
class | StandardAnalyzer: filters StandardTokenizer with StandardFilter, LowerCaseFilter, and StopFilter, using a list of English stop words. |
class | UAX29URLEmailAnalyzer: filters UAX29URLEmailTokenizer with StandardFilter, LowerCaseFilter, and StopFilter, using a list of English stop words. |

Modifier and Type | Class and Description |
---|---|
class | SwedishAnalyzer: Analyzer for Swedish. |

Modifier and Type | Method and Description |
---|---|
static CharsRef | SynonymMap.Builder.analyze(Analyzer analyzer, String text, CharsRef reuse): sugar: analyzes the text with the analyzer and separates by SynonymMap.WORD_SEPARATOR. |

Constructor and Description |
---|
SolrSynonymParser(boolean dedup, boolean expand, Analyzer analyzer) |
WordnetSynonymParser(boolean dedup, boolean expand, Analyzer analyzer) |

Modifier and Type | Class and Description |
---|---|
class | ThaiAnalyzer: Analyzer for the Thai language. |

Modifier and Type | Class and Description |
---|---|
class | TurkishAnalyzer: Analyzer for Turkish. |
Modifier and Type | Method and Description |
---|---|
Analyzer | PerfRunData.getAnalyzer() |

Modifier and Type | Method and Description |
---|---|
void | PerfRunData.setAnalyzer(Analyzer analyzer) |

Modifier and Type | Method and Description |
---|---|
static Analyzer | NewAnalyzerTask.createAnalyzer(String className) |

Modifier and Type | Method and Description |
---|---|
abstract int | BenchmarkHighlighter.doHighlight(IndexReader reader, int doc, String field, Document document, Analyzer analyzer, String text) |
Modifier and Type | Class and Description |
---|---|
class | CollationKeyAnalyzer: filters KeywordTokenizer with CollationKeyFilter. |
class | ICUCollationKeyAnalyzer: filters KeywordTokenizer with ICUCollationKeyFilter. |
Modifier and Type | Method and Description |
---|---|
Analyzer | IndexWriterConfig.getAnalyzer(): returns the default analyzer to use for indexing documents. |
Analyzer | IndexWriter.getAnalyzer(): returns the analyzer used by this index. |

Modifier and Type | Method and Description |
---|---|
void | IndexWriter.addDocument(Document doc, Analyzer analyzer): adds a document to this index, using the provided analyzer instead of the value of IndexWriter.getAnalyzer(). |
void | IndexWriter.addDocuments(Collection<Document> docs, Analyzer analyzer): atomically adds a block of documents, analyzed using the provided analyzer, with sequentially assigned document IDs, such that an external reader will see all or none of the documents. |
void | IndexWriter.updateDocument(Term term, Document doc, Analyzer analyzer): updates a document by first deleting the document(s) containing term and then adding the new document. |
void | IndexWriter.updateDocuments(Term delTerm, Collection<Document> docs, Analyzer analyzer): atomically deletes documents matching the provided delTerm and adds a block of documents, analyzed using the provided analyzer, with sequentially assigned document IDs, such that an external reader will see all or none of the documents. |
Constructor and Description |
---|
IndexWriter(Directory d, Analyzer a, boolean create, IndexDeletionPolicy deletionPolicy, IndexWriter.MaxFieldLength mfl): Deprecated. |
IndexWriter(Directory d, Analyzer a, boolean create, IndexWriter.MaxFieldLength mfl): Deprecated. |
IndexWriter(Directory d, Analyzer a, IndexDeletionPolicy deletionPolicy, IndexWriter.MaxFieldLength mfl): Deprecated. |
IndexWriter(Directory d, Analyzer a, IndexDeletionPolicy deletionPolicy, IndexWriter.MaxFieldLength mfl, IndexCommit commit): Deprecated. |
IndexWriter(Directory d, Analyzer a, IndexWriter.MaxFieldLength mfl): Deprecated. |
IndexWriterConfig(Version matchVersion, Analyzer analyzer) |
RandomIndexWriter(Random r, Directory dir, Analyzer a): creates a RandomIndexWriter with a random config; uses TEST_VERSION_CURRENT. |
RandomIndexWriter(Random r, Directory dir, Version v, Analyzer a): creates a RandomIndexWriter with a random config. |
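With the Directory-plus-Analyzer constructors deprecated, the analyzer travels through IndexWriterConfig instead. A sketch under the Lucene 3.x API; the field name "body" and sample text are illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class IndexingExample {
    public static void main(String[] args) throws Exception {
        // The analyzer set here becomes IndexWriter.getAnalyzer(), the
        // default used for every addDocument call without an explicit analyzer.
        IndexWriterConfig config = new IndexWriterConfig(
                Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(new RAMDirectory(), config);

        Document doc = new Document();
        doc.add(new Field("body", "some sample text",
                Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc); // analyzed with the config's default analyzer
        writer.close();
    }
}
```

The addDocument/updateDocument overloads listed earlier that take an Analyzer parameter let a single document override this default without reconfiguring the writer.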
Modifier and Type | Method and Description |
---|---|
void | MemoryIndex.addField(String fieldName, String text, Analyzer analyzer): convenience method; tokenizes the given field text and adds the resulting terms to the index; equivalent to adding an indexed non-keyword Lucene Field that is tokenized, not stored, and termVectorStored with positions (or termVectorStored with positions and offsets). |
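MemoryIndex is typically used to score a single document against a query without touching disk. A sketch under the Lucene 3.x API; the field name "content" and the texts are illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.util.Version;

public class MemoryIndexExample {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);

        // Tokenize one document's text straight into the in-memory index.
        MemoryIndex index = new MemoryIndex();
        index.addField("content", "readme file content", analyzer);

        // search() returns a relevance score; 0.0f means no match.
        float score = index.search(new QueryParser(
                Version.LUCENE_36, "content", analyzer).parse("readme"));
        System.out.println(score > 0.0f ? "match" : "no match");
    }
}
```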
Modifier and Type | Class and Description |
---|---|
static class | QueryParserTestBase.QPTestAnalyzer: filters LowerCaseTokenizer with QPTestFilter. |

Modifier and Type | Field and Description |
---|---|
static Analyzer | QueryParserTestBase.qpAnalyzer |

Modifier and Type | Method and Description |
---|---|
Analyzer | QueryParser.getAnalyzer() |

Modifier and Type | Method and Description |
---|---|
void | QueryParserTestBase.assertEscapedQueryEquals(String query, Analyzer a, String result) |
void | QueryParserTestBase.assertQueryEquals(String query, Analyzer a, String result) |
void | QueryParserTestBase.assertQueryEqualsDOA(String query, Analyzer a, String result) |
abstract QueryParser | QueryParserTestBase.getParser(Analyzer a) |
Query | QueryParserTestBase.getQuery(String query, Analyzer a) |
Query | QueryParserTestBase.getQueryDOA(String query, Analyzer a) |
static Query | MultiFieldQueryParser.parse(Version matchVersion, String[] queries, String[] fields, Analyzer analyzer): parses a query which searches on the fields specified. |
static Query | MultiFieldQueryParser.parse(Version matchVersion, String[] queries, String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer): parses a query, searching on the fields specified. |
static Query | MultiFieldQueryParser.parse(Version matchVersion, String query, String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer): parses a query, searching on the fields specified. |

Constructor and Description |
---|
MultiFieldQueryParser(Version matchVersion, String[] fields, Analyzer analyzer): creates a MultiFieldQueryParser. |
MultiFieldQueryParser(Version matchVersion, String[] fields, Analyzer analyzer, Map<String,Float> boosts): creates a MultiFieldQueryParser. |
QueryParser(Version matchVersion, String f, Analyzer a): constructs a query parser. |
QueryParserTestBase.QPTestParser(String f, Analyzer a) |
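Both parser entry points take the analyzer that query terms will be run through before matching. A sketch under the Lucene 3.x API; the field names "title" and "body" and the query strings are illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.MultiFieldQueryParser;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class ParserExample {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);

        // Single default field: unqualified terms search "title".
        QueryParser parser = new QueryParser(Version.LUCENE_36, "title", analyzer);
        Query q1 = parser.parse("lucene AND analyzer");

        // Static multi-field variant: one query string per field, in parallel.
        Query q2 = MultiFieldQueryParser.parse(Version.LUCENE_36,
                new String[] {"lucene", "analyzer"},
                new String[] {"title", "body"}, analyzer);
    }
}
```

Using the same analyzer at index time and query time is what makes the parsed terms line up with what was indexed.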
Constructor and Description |
---|
AnalyzingQueryParser(Version matchVersion, String field, Analyzer analyzer): constructs a query parser. |

Constructor and Description |
---|
ComplexPhraseQueryParser(Version matchVersion, String f, Analyzer a) |

Constructor and Description |
---|
ExtendableQueryParser(Version matchVersion, String f, Analyzer a): creates a new ExtendableQueryParser instance. |
ExtendableQueryParser(Version matchVersion, String f, Analyzer a, Extensions ext): creates a new ExtendableQueryParser instance. |

Constructor and Description |
---|
PrecedenceQueryParser(Analyzer analyzer) |
Modifier and Type | Method and Description |
---|---|
Analyzer | QueryParserWrapper.getAnalyzer(): Deprecated. |
Analyzer | StandardQueryParser.getAnalyzer() |

Modifier and Type | Method and Description |
---|---|
static Query | MultiFieldQueryParserWrapper.parse(String[] queries, String[] fields, Analyzer analyzer): Deprecated. Parses a query which searches on the fields specified. |
static Query | QueryParserUtil.parse(String[] queries, String[] fields, Analyzer analyzer): parses a query which searches on the fields specified. |
static Query | MultiFieldQueryParserWrapper.parse(String[] queries, String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer): Deprecated. Parses a query, searching on the fields specified. |
static Query | QueryParserUtil.parse(String[] queries, String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer): parses a query, searching on the fields specified. |
static Query | MultiFieldQueryParserWrapper.parse(String query, String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer): Deprecated. Parses a query, searching on the fields specified. |
static Query | QueryParserUtil.parse(String query, String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer): parses a query, searching on the fields specified. |
void | StandardQueryParser.setAnalyzer(Analyzer analyzer) |

Constructor and Description |
---|
MultiFieldQueryParserWrapper(String[] fields, Analyzer analyzer): Deprecated. Creates a MultiFieldQueryParser. |
MultiFieldQueryParserWrapper(String[] fields, Analyzer analyzer, Map<String,Float> boosts): Deprecated. Creates a MultiFieldQueryParser. |
QueryParserWrapper(String defaultField, Analyzer analyzer): Deprecated. |
StandardQueryParser(Analyzer analyzer): constructs a StandardQueryParser object and sets an Analyzer to it. |
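The flexible-framework parser is constructed the same way as the classic one, with the analyzer up front. A sketch under the Lucene 3.x API; the default field "body" and query string are illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.standard.StandardQueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class StandardParserExample {
    public static void main(String[] args) throws Exception {
        // The constructor wires the analyzer into the parser's configuration
        // (the ConfigurationKeys.ANALYZER key described below).
        StandardQueryParser parser = new StandardQueryParser(
                new StandardAnalyzer(Version.LUCENE_36));
        // Unlike the classic QueryParser, the default field is supplied per
        // parse call rather than at construction time.
        Query query = parser.parse("analyzer AND tokens", "body");
    }
}
```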
Modifier and Type | Field and Description |
---|---|
static ConfigurationKey<Analyzer> | StandardQueryConfigHandler.ConfigurationKeys.ANALYZER: key used to set the Analyzer used for terms found in the query. |

Modifier and Type | Method and Description |
---|---|
Analyzer | AnalyzerAttribute.getAnalyzer(): Deprecated. |
Analyzer | AnalyzerAttributeImpl.getAnalyzer(): Deprecated. |

Modifier and Type | Method and Description |
---|---|
void | AnalyzerAttribute.setAnalyzer(Analyzer analyzer): Deprecated. |
void | AnalyzerAttributeImpl.setAnalyzer(Analyzer analyzer): Deprecated. |
Modifier and Type | Field and Description |
---|---|
protected static Analyzer | SearchEquivalenceTestBase.analyzer |

Modifier and Type | Method and Description |
---|---|
long | NRTManager.TrackingIndexWriter.addDocument(Document d, Analyzer a) |
long | NRTManager.TrackingIndexWriter.addDocuments(Collection<Document> docs, Analyzer a) |
long | NRTManager.TrackingIndexWriter.updateDocument(Term t, Document d, Analyzer a) |
long | NRTManager.TrackingIndexWriter.updateDocuments(Term t, Collection<Document> docs, Analyzer a) |

Constructor and Description |
---|
FuzzyLikeThisQuery(int maxNumTerms, Analyzer analyzer) |
QueryTermVector(String queryString, Analyzer analyzer) |
Modifier and Type | Method and Description |
---|---|
static TokenStream | TokenSources.getAnyTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer): a convenience method that tries a number of approaches to getting a token stream. |
static TokenStream | TokenSources.getAnyTokenStream(IndexReader reader, int docId, String field, Document doc, Analyzer analyzer): a convenience method that tries to first get a TermPositionVector for the specified docId, then falls back to using the passed-in Document to retrieve the TokenStream. |
String | Highlighter.getBestFragment(Analyzer analyzer, String fieldName, String text): highlights chosen terms in a text, extracting the most relevant section. |
String[] | Highlighter.getBestFragments(Analyzer analyzer, String fieldName, String text, int maxNumFragments): highlights chosen terms in a text, extracting the most relevant sections. |
static TokenStream | TokenSources.getTokenStream(Document doc, String field, Analyzer analyzer) |
static TokenStream | TokenSources.getTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer) |
static TokenStream | TokenSources.getTokenStream(String field, String contents, Analyzer analyzer) |
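The Analyzer-taking getBestFragment overload re-analyzes the raw text on the fly, so it works even when no term vectors were stored. A sketch under the Lucene 3.x API; the field name "body" and both text strings are illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.util.Version;

public class HighlightExample {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
        Query query = new QueryParser(Version.LUCENE_36, "body", analyzer)
                .parse("lucene");

        // QueryScorer ranks fragments by how well they match the query;
        // the default formatter wraps matched terms in <B>...</B>.
        Highlighter highlighter = new Highlighter(new QueryScorer(query));
        String fragment = highlighter.getBestFragment(analyzer, "body",
                "Apache Lucene is a search library written in Java.");
        System.out.println(fragment);
    }
}
```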
Modifier and Type | Field and Description |
---|---|
static Analyzer | MoreLikeThis.DEFAULT_ANALYZER: Deprecated. This default will be removed in Lucene 4.0 (with the default being null); if you are not using term vectors, explicitly set your analyzer instead. |

Modifier and Type | Method and Description |
---|---|
Analyzer | MoreLikeThis.getAnalyzer(): returns the analyzer that will be used to parse the source doc. |
Analyzer | MoreLikeThisQuery.getAnalyzer() |

Modifier and Type | Method and Description |
---|---|
static Query | SimilarityQueries.formSimilarQuery(String body, Analyzer a, String field, Set<?> stop): simple similarity query generator. |
void | MoreLikeThis.setAnalyzer(Analyzer analyzer): sets the analyzer to use. |
void | MoreLikeThisQuery.setAnalyzer(Analyzer analyzer) |

Constructor and Description |
---|
MoreLikeThisQuery(String likeText, String[] moreLikeFields, Analyzer analyzer): Deprecated. |
MoreLikeThisQuery(String likeText, String[] moreLikeFields, Analyzer analyzer, String fieldName) |
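Per the deprecation note on DEFAULT_ANALYZER, an explicit analyzer should be set whenever term vectors are not stored. A sketch under the Lucene 3.x API; the index path, the field name "body", and the choice of document id 0 are all illustrative:

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.similar.MoreLikeThis;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MltExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical existing index.
        IndexReader reader = IndexReader.open(
                FSDirectory.open(new File("/tmp/index")));

        MoreLikeThis mlt = new MoreLikeThis(reader);
        // Without stored term vectors, MoreLikeThis needs this analyzer
        // to re-tokenize the source document's stored field text.
        mlt.setAnalyzer(new StandardAnalyzer(Version.LUCENE_36));
        mlt.setFieldNames(new String[] {"body"});

        Query like = mlt.like(0); // build a query from document id 0
        reader.close();
    }
}
```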
Modifier and Type | Method and Description |
---|---|
Analyzer | InstantiatedIndexWriter.getAnalyzer(): Deprecated. |

Modifier and Type | Method and Description |
---|---|
void | InstantiatedIndexWriter.addDocument(Document doc, Analyzer analyzer): Deprecated. Adds a document to this index, using the provided analyzer instead of the value of InstantiatedIndexWriter.getAnalyzer(). |
protected void | InstantiatedIndexWriter.addDocument(InstantiatedDocument document, Analyzer analyzer): Deprecated. Tokenizes a document and adds it to the buffer. |
InstantiatedIndexWriter | InstantiatedIndex.indexWriterFactory(Analyzer analyzer, boolean create): Deprecated. |
void | InstantiatedIndexWriter.updateDocument(Term term, Document doc, Analyzer analyzer): Deprecated. |

Constructor and Description |
---|
InstantiatedIndexWriter(InstantiatedIndex index, Analyzer analyzer): Deprecated. |
InstantiatedIndexWriter(InstantiatedIndex index, Analyzer analyzer, boolean create): Deprecated. |
Modifier and Type | Method and Description |
---|---|
static IndexWriterConfig | LuceneTestCase.newIndexWriterConfig(Random r, Version v, Analyzer a): creates a new index writer config with random defaults using the specified random. |
static IndexWriterConfig | LuceneTestCase.newIndexWriterConfig(Version v, Analyzer a): creates a new index writer config with random defaults. |
Modifier and Type | Field and Description |
---|---|
protected Analyzer | CoreParser.analyzer |

Constructor and Description |
---|
CoreParser(Analyzer analyzer, QueryParser parser): constructs an XML parser that uses a single QueryParser instance for handling UserQuery tags; all parse operations are synchronized on this parser. |
CoreParser(String defaultField, Analyzer analyzer): constructs an XML parser that creates a QueryParser for each UserQuery request. |
CoreParser(String defaultField, Analyzer analyzer, QueryParser parser) |
CorePlusExtensionsParser(Analyzer analyzer, QueryParser parser): constructs an XML parser that uses a single QueryParser instance for handling UserQuery tags; all parse operations are synchronized on this parser. |
CorePlusExtensionsParser(String defaultField, Analyzer analyzer): constructs an XML parser that creates a QueryParser for each UserQuery request. |

Modifier and Type | Method and Description |
---|---|
protected QueryParser | UserInputQueryBuilder.createQueryParser(String fieldName, Analyzer analyzer): method to create a QueryParser; designed to be overridden. |

Constructor and Description |
---|
FuzzyLikeThisQueryBuilder(Analyzer analyzer) |
LikeThisQueryBuilder(Analyzer analyzer, String[] defaultFieldNames) |
SpanOrTermsBuilder(Analyzer analyzer) |
TermsFilterBuilder(Analyzer analyzer) |
TermsQueryBuilder(Analyzer analyzer) |
UserInputQueryBuilder(String defaultField, Analyzer analyzer) |