FilteringTokenFilter.incrementToken().
AttributeSource shall be stored in the sink.
SpanQuery.
true if this collector does not require the matching docIDs to be delivered in int sort order (smallest to largest) to Collector.collect(int).
FilteredTermEnum.setEnum(org.apache.lucene.index.TermEnum)
ParserExtension instance associated with the given key.
IndexWriter.getAnalyzer().
InstantiatedIndexWriter.getAnalyzer().
Field that is tokenized, not stored, termVectorStored with positions (or termVectorStored with positions and offsets), addField(fieldName, stream, 1.0f).
Field.
CompoundFileWriter.addFile(String), only for files that are found in an external Directory.
IndexWriter.addIndexes(Directory...) instead
List interface, so use QueryNodeProcessorPipeline.add(QueryNodeProcessor) instead
IndexReader.ReaderFinishedListener.
TeeSinkTokenFilter.SinkTokenStream created by another TeeSinkTokenFilter to this one.
PerfTask
Object.getClass()).
TermVectorEntry.
AllGroupsCollector
AllGroupsCollector.
AllowLeadingWildcardProcessor processor and must be defined in the QueryConfigHandler.
AllowLeadingWildcardProcessor processor and must be defined in the QueryConfigHandler.
AllowLeadingWildcardAttribute is defined in the QueryConfigHandler.
AnalyzerQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
AnalyzerQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
AnalyzerQueryNodeProcessor is defined in the QueryConfigHandler.
*) don't get removed from the search terms.
AndQueryNode represents an AND boolean operation performed on a list of nodes.
AnyQueryNode represents an ANY operator performed on a list of nodes.
String to this character sequence.
StringBuilder to this character sequence.
CharTermAttribute to this character sequence.
Analyzer for Arabic.
ArabicAnalyzer.DEFAULT_STOPWORD_FILE.
ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead
ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead
ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead
StandardTokenizer instead.
AttributeSource.
AttributeSource.AttributeFactory.
ArabicLetterTokenizer.ArabicLetterTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0.
ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0.
ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0.
TokenFilter that applies ArabicNormalizer to normalize the orthography.
TokenFilter that applies ArabicStemmer to stem Arabic words.
Analyzer for Armenian.
ArmenianAnalyzer.DEFAULT_STOPWORD_FILE.
List view.
Set view.
AttributeSource.
AttributeSource or AttributeImpl.
AttributeImpls, and methods to add and get them.
AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY.
AttributeSource.AttributeFactory for creating new Attribute instances.
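The AttributeSource entries above refer to Lucene's attribute-based TokenStream API. As a hedged sketch of how attributes are typically added and read (Lucene 3.x era API; the analyzer choice, field name, and text are placeholders, not from the original index):

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    // addAttribute() registers the attribute with this AttributeSource (or
    // returns the already-registered instance); it is then updated in place
    // on every incrementToken() call.
    TokenStream ts = new WhitespaceAnalyzer(Version.LUCENE_35)
        .tokenStream("body", new StringReader("some example text"));
    CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(termAtt.toString()); // text of the current token
    }
    ts.end();
    ts.close();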
AttributeImpls.
CharFilter.
(x <= min) ? base : sqrt(x+(base**2)-min) ...but with a special case check for 0.
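The formula above describes a piecewise tf curve: flat up to min, then square-root growth. A minimal Java sketch of that computation, assuming the fragment's base and min parameters; the method name baselineTf is an assumption:

    // (x <= min) ? base : sqrt(x + base^2 - min), with a special case for 0.
    static float baselineTf(float x, float base, float min) {
      if (x == 0f) {
        return 0f; // the special case check for 0
      }
      return (x <= min) ? base : (float) Math.sqrt(x + (base * base) - min);
    }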
Analyzer for Basque.
BasqueAnalyzer.DEFAULT_STOPWORD_FILE.
n bits.
name in Directory d, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String) method.
BooleanModifierNode has the same behaviour as ModifierQueryNode; it only indicates that this modifier was added by GroupQueryNodeProcessor and not by the user.
ModifierQueryNode to BooleanQueryNode's children.
BooleanQuery.getMaxClauseCount() clauses.
BooleanQueryNode represents a list of elements which do not have an explicit boolean operator defined between them.
BooleanQuery object from a BooleanQueryNode object.
BooleanQueryNode that contains only one child and returns this child.
MultiFieldQueryNodeProcessor processor and it should be defined in a FieldConfig.
MultiFieldQueryNodeProcessor processor and it should be defined in a FieldConfig.
BoostQueryNode boosts the QueryNode tree which is under this node.
Query object set on the BoostQueryNode child using QueryTreeBuilder.QUERY_TREE_BUILDER_TAGID and applies the boost value defined in the BoostQueryNode.
FieldableNode that has the attribute BoostAttribute in its config.
BrazilianAnalyzer.getDefaultStopSet() instead
Analyzer for Brazilian Portuguese language.
BrazilianAnalyzer.getDefaultStopSet()).
BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead
BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead
BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead
TokenFilter that applies BrazilianStemmer.
KeywordAttribute with KeywordMarkerFilter instead.
char[] buffer size) for encoding int values.
char[] buffer size) for encoding long values.
IndexInput.
IndexOutput.
Analyzer for Bulgarian.
BulgarianAnalyzer.DEFAULT_STOPWORD_FILE.
TokenFilter that applies BulgarianStemmer to stem Bulgarian words.
FieldCache using getBytes() and makes those values available as other numeric types, casting as needed.
FieldCacheSource, already knowing that cache and field are equal.
FieldCacheSource, without the hash-codes of the field and the cache (those are taken care of elsewhere).
CachingWrapperFilter.DeletesMode.RECACHE.
Analyzer for Catalan.
CatalanAnalyzer.DEFAULT_STOPWORD_FILE.
Filters to be chained.
CharacterUtils provides a unified interface to Character-related operations to implement backwards compatible character operations based on a Version instance.
CharacterUtils.fill(CharacterBuffer, Reader).
CharArraySet.CharArraySet(Version, int, boolean) instead
CharArraySet.CharArraySet(Version, Collection, boolean) instead
char[] instances.
IdentityEncoder.charset instead.
CharStream.correctOffset(int) functionality over Reader.
CharTokenizer instance
CharTokenizer instance
CharTokenizer instance
CharTokenizer.CharTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0.
CheckIndex.Status instance detailing the state of the index.
CheckIndex.Status instance detailing the state of the index.
CheckIndex.checkIndex() detailing the health and status of the index.
position.
IllegalStateException if it is.
StandardAnalyzer instead, which has the same functionality. This analyzer will be removed in Lucene 5.0
StopFilter instead, which has the same functionality. This filter will be removed in Lucene 5.0
StandardTokenizer instead, which has the same functionality. This filter will be removed in Lucene 5.0
Analyzer that tokenizes text with CJKTokenizer and filters with StopFilter
CJKAnalyzer.getDefaultStopSet().
CJKAnalyzer.CJKAnalyzer(Version, Set) instead
ClassicTokenizer with ClassicFilter, LowerCaseFilter and StopFilter, using a list of English stop words.
ClassicAnalyzer.STOP_WORDS_SET).
ClassicTokenizer.
ClassicTokenizer.
AttributeSource.
AttributeSource.AttributeFactory
bit to zero.
AttributeImpl.clear() on each Attribute implementation.
AttributeImpl instances returned in a new AttributeSource instance.
DocMaker.
CharSequence.
KeywordTokenizer with CollationKeyFilter.
CollationKey, and then encodes the CollationKey with IndexableBinaryStringTools, to allow it to be stored as an index term.
Collector.collect(int) on the decorated Collector unless the allowed time has passed, in which case it throws an exception.
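The entry above describes a time-limited collector that decorates another Collector. A hedged, standalone sketch of the idea against the Lucene 3.x Collector API (the class name and the exception used here are illustrative, not the library's own):

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.Collector;
    import org.apache.lucene.search.Scorer;

    // Forwards every call to the decorated collector, but aborts collection
    // once the allowed wall-clock time has passed.
    public class TimeBoundedCollector extends Collector {
      private final Collector delegate;
      private final long deadline;

      public TimeBoundedCollector(Collector delegate, long allowedMillis) {
        this.delegate = delegate;
        this.deadline = System.currentTimeMillis() + allowedMillis;
      }

      @Override
      public void collect(int doc) throws IOException {
        if (System.currentTimeMillis() > deadline) {
          throw new RuntimeException("time allowed for search exceeded"); // illustrative
        }
        delegate.collect(doc);
      }

      @Override
      public void setScorer(Scorer scorer) throws IOException {
        delegate.setScorer(scorer);
      }

      @Override
      public void setNextReader(IndexReader reader, int docBase) throws IOException {
        delegate.setNextReader(reader, docBase);
      }

      @Override
      public boolean acceptsDocsOutOfOrder() {
        return delegate.acceptsDocsOutOfOrder();
      }
    }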
SegTokenFilter
i and j of your data.
RegexTermEnum allowing implementations to cache a compiled version of the regular expression pattern.
NoMergePolicy which indicates the index uses compound files.
CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], int, int, int, boolean) instead
CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], boolean) instead
CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, boolean) instead
CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[]) instead
CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set) instead
CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, int, int, int, boolean) instead
1/sqrt( steepness * (abs(x-min) + abs(x-max) - (max-min)) + 1 ).
state.getBoost() * lengthNorm(fieldName, numTokens) where numTokens does not count overlap tokens if discountOverlaps is true by default or true for this specific field.
state.getBoost()*lengthNorm(numTerms), where numTerms is FieldInvertState.getLength() if DefaultSimilarity.setDiscountOverlaps(boolean) is false, else it's FieldInvertState.getLength() - FieldInvertState.getNumOverlap().
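The fragments above describe length-norm formulas. A sketch of both in Java, assuming DefaultSimilarity's well-known lengthNorm of 1/sqrt(numTerms); the method and parameter names follow the fragments and are otherwise assumptions:

    // 1/sqrt( steepness * (abs(x-min) + abs(x-max) - (max-min)) + 1 ):
    // flat "sweet spot" between min and max, falling off outside it.
    static float sweetSpotLengthNorm(int x, int min, int max, float steepness) {
      return (float) (1.0 / Math.sqrt(
          steepness * (Math.abs(x - min) + Math.abs(x - max) - (max - min)) + 1.0));
    }

    // state.getBoost() * lengthNorm(numTerms), where overlap tokens are
    // optionally discounted from the token count.
    static float defaultNorm(float boost, int length, int numOverlap, boolean discountOverlaps) {
      int numTerms = discountOverlaps ? length - numOverlap : length;
      return boost * (float) (1.0 / Math.sqrt(numTerms));
    }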
FieldInvertState).
MergeScheduler that runs each merge using a separate thread.
MultiTermQuery.ConstantScoreAutoRewrite, with ConstantScoreAutoRewrite.setTermCountCutoff(int) set to ConstantScoreAutoRewrite.DEFAULT_TERM_COUNT_CUTOFF and ConstantScoreAutoRewrite.setDocCountPercent(double) set to ConstantScoreAutoRewrite.DEFAULT_DOC_COUNT_PERCENT.
MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE except scores are not computed.
ScoringRewrite.SCORING_BOOLEAN_QUERY_REWRITE except scores are not computed.
TeeSinkTokenFilter passes all tokens to the added sinks when itself is consumed.
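The entry above describes the tee/sink pattern. A hedged usage sketch against the Lucene 3.x TeeSinkTokenFilter API (the tokenizer, version constant, and input text are placeholders):

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceTokenizer;
    import org.apache.lucene.analysis.sinks.TeeSinkTokenFilter;
    import org.apache.lucene.util.Version;

    // Tokens flow through the tee once it is consumed (e.g. while indexing a
    // field); every added sink then replays the same tokens for other uses.
    TokenStream base = new WhitespaceTokenizer(Version.LUCENE_35,
        new StringReader("shared token stream"));
    TeeSinkTokenFilter tee = new TeeSinkTokenFilter(base);
    TeeSinkTokenFilter.SinkTokenStream sink = tee.newSinkTokenStream();
    // Index one field with 'tee' and another with 'sink'; the text is
    // tokenized only once.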
ContentSource.
len chars of text starting at off are in the set
CharSequence is in the set
len chars of text starting at off are in the CharArrayMap.keySet
CharSequence is in the CharArrayMap.keySet
QueryNode.containsTag(String) instead
QueryNodeImpl.containsTag(String) instead
this
overlap / maxOverlap.
CharArrayMap.
CharArraySet.copy(Version, Set) instead.
CharArraySet.
Directory to under the new file name dest.
Directory.copy(Directory, String, String) for every file that needs copying. You can use the following code:

    IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
    for (String file : src.listAll()) {
      if (filter.accept(null, file)) {
        src.copy(dest, file, file);
      }
    }
numBytes bytes to the given IndexOutput.
AttributeSource to the given target AttributeSource.
CachingCollector which does not wrap another collector.
CachingCollector that wraps the given collector and caches documents and scores up to the specified RAM threshold.
TopFieldCollector from the given arguments.
TopScoreDocCollector given the number of hits to collect and whether documents are scored in order by the input Scorer to TopScoreDocCollector.setScorer(Scorer).
AttributeImpl for the supplied Attribute interface class.
ReusableAnalyzerBase.TokenStreamComponents used to tokenize all the text in the provided Reader.
ReusableAnalyzerBase.TokenStreamComponents which tokenizes all the text in the provided Reader.
TokenStream which tokenizes all the text in the provided Reader.
ReusableAnalyzerBase.TokenStreamComponents instance for this analyzer.
query
query
ValueSourceQuery scores.
CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader), if you want to modify the custom score calculation of a CustomScoreQuery.
IndexReader.
ValueSourceQuery.
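The entry above points at overriding getCustomScoreProvider to change how a CustomScoreQuery combines scores. A hedged sketch against the Lucene 3.x function API (the sub-queries, the "popularity" field, and the combination formula are placeholders):

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.function.CustomScoreProvider;
    import org.apache.lucene.search.function.CustomScoreQuery;
    import org.apache.lucene.search.function.IntFieldSource;
    import org.apache.lucene.search.function.ValueSourceQuery;

    Query subQuery = new TermQuery(new Term("body", "lucene"));          // placeholder
    ValueSourceQuery boostQuery = new ValueSourceQuery(new IntFieldSource("popularity"));
    CustomScoreQuery q = new CustomScoreQuery(subQuery, boostQuery) {
      @Override
      protected CustomScoreProvider getCustomScoreProvider(IndexReader reader) {
        return new CustomScoreProvider(reader) {
          @Override
          public float customScore(int doc, float subQueryScore, float valSrcScore) {
            return subQueryScore * (1.0f + valSrcScore); // illustrative combination
          }
        };
      }
    };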
ValueSourceQuery.
CzechAnalyzer.getDefaultStopSet() instead
Analyzer for Czech language.
CzechAnalyzer.getDefaultStopSet()).
CzechStemFilter.
CzechAnalyzer.CzechAnalyzer(Version, Set) instead
CzechAnalyzer.CzechAnalyzer(Version, Set) instead
CzechAnalyzer.CzechAnalyzer(Version, Set) instead
TokenFilter that applies CzechStemmer to stem Czech words.
Analyzer for Danish.
DanishAnalyzer.DEFAULT_STOPWORD_FILE.
DateTools or NumericField instead. This class is included for use with existing indices and will be removed in a future release (possibly Lucene 4.0).
CharTermAttributeImpl.termBuffer() as a Date using a DateFormat.
DateFormat.getDateInstance() as the DateFormat object.
ParametricRangeQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
ParametricRangeQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int) instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int) instead. This method will be removed in Lucene 4.0
PayloadHelper.encodeFloat(float).
Similarity.decodeNormValue(byte) instead.
SpellChecker.setAccuracy(float).
AttributeImpls using the class name of the supplied Attribute interface class by appending Impl to it.
Byte.toString(byte)
Double.toString(double)
Float.toString(float)
TimeLimitingCollector.isGreedy().
Integer.toString(int)
Long.toString(long)
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS instead
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS instead.
IndexWriterConfig
IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB instead.
IndexWriterConfig.setReaderPooling(boolean).
Short.toString(short)
IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL instead.
Encoder implementation that does not modify the output
ICUTokenizerConfig that is generally applicable to many languages.
GroupQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
GroupQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
PhraseSlopQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
PhraseSlopQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
DefaultPhraseSlopAttribute is defined in the QueryConfigHandler.
docNum.
term indexed.
term.
DeletedQueryNode represents a node that was deleted from the query node tree.
TokenFilter that decomposes compound words found in many Germanic languages.
DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[], int, int, int, boolean) instead
DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[]) instead
DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set) instead
DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set, int, int, int, boolean) instead
DictionaryCompoundWordTokenFilter
DictionaryCompoundWordTokenFilter
DictionaryCompoundWordTokenFilter
DictionaryCompoundWordTokenFilter
ContentSource using the Dir collection for its input.
Directory implementation that uses the Linux-specific O_DIRECT flag to bypass all OS level caching.
IndexWriterConfig.DISABLE_AUTO_FLUSH instead
LLRect. This class will be removed in a future release.
i.
Document at the nth position.
t.
Searchable's docFreq() in its own thread and waits for each search to complete and merges the results back together.
term.
term.
DocIdSetIterator.NO_MORE_DOCS if DocIdSetIterator.nextDoc() or DocIdSetIterator.advance(int) were not called yet.
Document objects.
nth Document in this index.
Document at the nth position.
Document at the nth position.
nth Document in this index.
docNum.
IndexWriter.merge(org.apache.lucene.index.MergePolicy.OneMerge)
double value to a sortable signed long.
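The entry above refers to converting a double into a sortable signed long (the idea behind NumericUtils.doubleToSortableLong). A sketch of the standard bit trick:

    // Double.doubleToLongBits orders positive values correctly but reverses
    // negatives; flipping the magnitude bits of negative values yields a
    // long whose signed order matches the double order.
    static long doubleToSortableLong(double value) {
      long bits = Double.doubleToLongBits(value);
      if (bits < 0) {
        bits ^= 0x7fffffffffffffffL;
      }
      return bits;
    }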
ReentrantLock to disable locking
DutchAnalyzer.getDefaultStopSet() instead
Analyzer for Dutch language.
DutchAnalyzer.getDefaultStopSet()) and a few default entries for the stem exclusion table.
DutchAnalyzer.DutchAnalyzer(Version, Set) instead
DutchAnalyzer.DutchAnalyzer(Version, Set) instead
DutchAnalyzer.DutchAnalyzer(Version, Set) instead
SnowballFilter with DutchStemmer instead, which has the same functionality. This filter will be removed in Lucene 5.0
KeywordAttribute with KeywordMarkerFilter instead.
KeywordAttribute with KeywordMarkerFilter instead.
DutchStemmer instead, which has the same functionality. This filter will be removed in Lucene 5.0
TokenStream.
ElisionFilter.ElisionFilter(Version, TokenStream) instead
ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead
ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead
DocIdSet instance for easy use, e.g.
TermPositionVector that stores only position information.
Payload
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int) instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int) instead. This method will be removed in Lucene 4.0
Similarity.encodeNormValue(float) instead.
end() on the input TokenStream. NOTE: Be sure to call super.end() first when overriding this method.
TokenStream.incrementToken() returned false (using the new TokenStream API).
Analyzer for English.
EnglishAnalyzer.getDefaultStopSet().
TokenFilter that applies EnglishMinimalStemmer to stem English words.
AlreadyClosedException if this IndexWriter has been closed.
ContentSource which reads the English Wikipedia dump.
o is equal to this.
ValueSourceQuery.equals(Object).
o is equal to this.
\.
EscapeQuerySyntax to allow the QueryNode to escape the queries, when the toQueryString method is called.
doc scored against query.
doc scored against weight.
doc scored against weight.
doc scored against query.
Scorer, but it is needed by SpanWeight to build an explanation.
IndexWriter.expungeDeletes(), except you can specify whether the call should block until the operation completes.
ExtendableQueryParser enables arbitrary query parser extension based on a customizable field naming scheme.
ExtendableQueryParser instance
ExtendableQueryParser instance
ExtensionQuery holds all query components extracted from the original query string like the query field and the extension query string.
ExtensionQuery
Extensions class represents an extension mapping to associate ParserExtension instances with extension keys.
Extensions instance with the Extensions.DEFAULT_EXTENSION_FIELD_DELIMITER as a delimiter character.
Extensions instance
buf
the text of interest within specified tags
instead
instead
Field.
FieldableNode interface to indicate that its children and itself are associated to a specific field.
MultiFieldQueryNodeProcessor processor and it should be defined in a FieldConfig.
BoostAttribute to the equivalent FieldConfig based on a defined map: fieldName -> boostValue stored in FieldBoostMapAttribute in the FieldBoostMapAttribute.
FieldCache).
FieldCache.
Filter that only accepts documents whose single term value in the specified field is contained in the provided set of allowed terms.
TopFieldCollector.
FieldCache.getBytes(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending value
FieldCache.getDoubles(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending value
FieldCache.getFloats(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending value
FieldCache.getInts(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending value
FieldCache.getLongs(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending value
FieldCache.getShorts(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending value
FieldComparator for custom field sorting.
FieldConfig.FieldConfig(String) instead
FieldConfig
DateResolutionAttribute to the equivalent FieldConfig based on a defined map: fieldName -> DateTools.Resolution stored in FieldDateResolutionMapAttribute in the DateResolutionAttribute.
SpanQuery objects participate in composite single-field SpanQueries by 'lying' about their search field.
FieldQueryNode represents an element that contains a field/text tuple.
TermQuery object from a FieldQueryNode object.
null as its detail message.
FileFilter, the FieldSelector allows one to make decisions about what Fields get loaded on a Document by IndexReader.document(int, org.apache.lucene.document.FieldSelector)
TermVectorEntrys. This is not thread-safe.
FieldTermStack is a stack that keeps query terms in the specified field of the document to be highlighted.
CharacterUtils.CharacterBuffer with characters read from the given Reader.
SegToken
FilterIndexReader contains another IndexReader, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
TermDocs implementations.
TermEnum implementations.
TermPositions implementations.
CachingWrapperFilter if you wish to cache Filters.
MergePolicy.MergeSpecification if so.
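The CachingWrapperFilter advice above is easiest to see in use. A hedged sketch against the Lucene 3.x API (the field name, bounds, and the searcher/query in scope are placeholders):

    import org.apache.lucene.search.CachingWrapperFilter;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.TermRangeFilter;
    import org.apache.lucene.search.TopDocs;

    // Wrap the filter once and reuse it across searches; the wrapper caches
    // the filter's DocIdSet per index reader.
    Filter dateFilter = new TermRangeFilter("date", "20100101", "20101231", true, true);
    Filter cachedFilter = new CachingWrapperFilter(dateFilter);
    TopDocs hits = searcher.search(query, cachedFilter, 10); // searcher/query assumed in scope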
ChecksumIndexOutput.prepareCommit()
Analyzer for Finnish.
FinnishAnalyzer.DEFAULT_STOPWORD_FILE.
TokenFilter that applies FinnishLightStemmer to stem Finnish words.
CheckIndex.checkIndex().
Tokenizer chain, e.g. from one TokenFilter to another one.
Payload.
FieldCache using getFloats() and makes those values available as other numeric types, casting as needed.
float value to a sortable signed int.
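The float entry above is the 32-bit counterpart of the double-to-long transform sketched earlier (the idea behind NumericUtils.floatToSortableInt):

    // Same bit trick as the long case, applied to Float.floatToIntBits.
    static int floatToSortableInt(float value) {
      int bits = Float.floatToIntBits(value);
      if (bits < 0) {
        bits ^= 0x7fffffff; // invert magnitude bits of negative floats
      }
      return bits;
    }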
numBytes.
Highlighter class.
FragmentsBuilder is an interface for fragments (snippets) builder classes.
FrenchAnalyzer.getDefaultStopSet() instead
Analyzer for French language.
FrenchAnalyzer.getDefaultStopSet()).
FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead
FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead
TokenFilter that applies FrenchLightStemmer to stem French words.
TokenFilter that applies FrenchMinimalStemmer to stem French words.
SnowballFilter with FrenchStemmer instead, which has the same functionality. This filter will be removed in Lucene 5.0
KeywordAttribute with KeywordMarkerFilter instead.
FrenchStemmer instead, which has the same functionality. This filter will be removed in Lucene 5.0
PhraseSlopQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
PhraseSlopQueryNodeProcessor processor and must be defined in the QueryConfigHandler.
minimumSimilarity to term.
FuzzyQuery(term, minimumSimilarity, prefixLength, Integer.MAX_VALUE).
FuzzyQuery(term, minimumSimilarity, 0, Integer.MAX_VALUE).
FuzzyQuery(term, 0.5f, 0, Integer.MAX_VALUE).