FilteringTokenFilter.incrementToken().
AttributeSource shall be stored
in the sink.
SpanQuery.
true if this collector does not
require the matching docIDs to be delivered in int sort
order (smallest to largest) to Collector.collect(int).
FilteredTermEnum.setEnum(org.apache.lucene.index.TermEnum)
IndexWriter.getAnalyzer().
CompoundFileWriter.addFile(String), only for files that are found in an
external Directory.
IndexWriter.addIndexes(Directory...) instead
IndexReader.ReaderFinishedListener.
TeeSinkTokenFilter.SinkTokenStream created by another TeeSinkTokenFilter
to this one.
TermVectorEntry.
String to this character sequence.
StringBuilder to this character sequence.
CharTermAttribute to this character sequence.
List view.
Set view.
AttributeSource.AttributeSource or AttributeImpl.AttributeImpls,
and methods to add and get them.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY.
AttributeSource.AttributeFactory for creating new Attribute instances.
AttributeImpls.CharFilter.n bits.
name in Directory
d, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String) method.
BooleanQuery.getMaxClauseCount() clauses.char[] buffer size)
for encoding int values.
char[] buffer size)
for encoding long values.
IndexInput.IndexOutput.FieldCache
using getBytes() and makes those values
available as other numeric types, casting as needed.FieldCacheSource, already knowing that cache and field are equal.
FieldCacheSource, without the hash-codes of the field
and the cache (those are taken care of elsewhere).
CachingWrapperFilter.DeletesMode.RECACHE.
CharacterUtils provides a unified interface to Character-related
operations to implement backwards compatible character operations based on a
Version instance.CharacterUtils.fill(CharacterBuffer, Reader).CharArraySet.CharArraySet(Version, int, boolean) instead
CharArraySet.CharArraySet(Version, Collection, boolean) instead
char[] instances.CharStream.correctOffset(int)
functionality over Reader.CharTokenizer instance
CharTokenizer instance
CharTokenizer instance
CharTokenizer.CharTokenizer(Version, Reader) instead. This will be
removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource, Reader) instead. This will be
removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be
removed in Lucene 4.0.
CheckIndex.Status instance detailing
the state of the index.
CheckIndex.Status instance detailing
the state of the index.
CheckIndex.checkIndex() detailing the health and status of the index.IllegalStateException if it is.
ClassicTokenizer with ClassicFilter, LowerCaseFilter and StopFilter, using a list of
English stop words.ClassicAnalyzer.STOP_WORDS_SET).
ClassicTokenizer.ClassicTokenizer.
AttributeSource.
AttributeSource.AttributeFactory
bit to zero.
AttributeImpl.clear() on each Attribute implementation.
AttributeImpl instances returned in a new
AttributeSource instance.
CharSequence.
KeywordTokenizer with CollationKeyFilter.CollationKey, and then
encodes the CollationKey with IndexableBinaryStringTools, to allow
it to be stored as an index term.Collector.collect(int) on the decorated Collector
unless the allowed time has passed, in which case it throws an exception.
i and j of your data.
NoMergePolicy which indicates the index uses compound
files.
state.getBoost()*lengthNorm(numTerms), where
numTerms is FieldInvertState.getLength() if DefaultSimilarity.setDiscountOverlaps(boolean) is false, else it's FieldInvertState.getLength() - FieldInvertState.getNumOverlap().
FieldInvertState).
MergeScheduler that runs each merge using a
separate thread.MultiTermQuery.ConstantScoreAutoRewrite, with ConstantScoreAutoRewrite.setTermCountCutoff(int) set to
ConstantScoreAutoRewrite.DEFAULT_TERM_COUNT_CUTOFF
and ConstantScoreAutoRewrite.setDocCountPercent(double) set to
ConstantScoreAutoRewrite.DEFAULT_DOC_COUNT_PERCENT.
MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE except
scores are not computed.
ScoringRewrite.SCORING_BOOLEAN_QUERY_REWRITE except
scores are not computed.
TeeSinkTokenFilter passes all tokens to the added sinks
when itself is consumed.
len chars of text starting at off
are in the set
CharSequence is in the set
len chars of text starting at off
are in the CharArrayMap.keySet
CharSequence is in the CharArrayMap.keySet
overlap / maxOverlap.
CharArrayMap.
CharArraySet.copy(Version, Set) instead.
CharArraySet.
Directory to under the new
file name dest.
Directory.copy(Directory, String, String) for every file that
needs copying. You can use the following code:
IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
for (String file : src.listAll()) {
  if (filter.accept(null, file)) {
    src.copy(dest, file, file);
  }
}
numBytes bytes to the given IndexOutput.
AttributeSource to the given target AttributeSource.
CachingCollector which does not wrap another collector.
CachingCollector that wraps the given collector and
caches documents and scores up to the specified RAM threshold.
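As a rough usage sketch (the wrapped collector, searcher, query and the 64 MB budget are placeholders):
  CachingCollector cached = CachingCollector.create(otherCollector, true, 64.0);
  searcher.search(query, cached);
  cached.replay(anotherCollector); // later, feed the cached docs (and scores) to another collector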
TopFieldCollector from the given
arguments.
TopScoreDocCollector given the number of hits to
collect and whether documents are scored in order by the input
Scorer to TopScoreDocCollector.setScorer(Scorer).
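As a rough usage sketch (the searcher and query are assumed to exist elsewhere):
  TopScoreDocCollector collector = TopScoreDocCollector.create(10, true);
  searcher.search(query, collector);
  TopDocs topDocs = collector.topDocs();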
AttributeImpl for the supplied Attribute interface class.
ReusableAnalyzerBase.TokenStreamComponents instance for this analyzer.
ReusableAnalyzerBase.TokenStreamComponents
used to tokenize all the text in the provided Reader.
query
query
ValueSourceQuery scores.
CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader), if you want
to modify the custom score calculation of a CustomScoreQuery.IndexReader.
ValueSourceQuery.
ValueSourceQuery.
DateTools or
NumericField instead.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0
Similarity.decodeNormValue(byte) instead.
AttributeImpls using the
class name of the supplied Attribute interface class by appending Impl to it.
Byte.toString(byte)
Double.toString(double)
Float.toString(float)
TimeLimitingCollector.isGreedy().
Integer.toString(int)
Long.toString(long)
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS instead
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS instead.
IndexWriterConfig
IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB instead.
IndexWriterConfig.setReaderPooling(boolean).
Short.toString(short)
IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL instead.
docNum.
term indexed.
term.
IndexWriterConfig.DISABLE_AUTO_FLUSH instead
i.
Document at the nth position.
t.
Searchable's docFreq() in its own thread and waits for each search to complete, then merges
the results back together.
term.
term.
DocIdSetIterator.NO_MORE_DOCS if DocIdSetIterator.nextDoc() or
DocIdSetIterator.advance(int) were not called yet.
nth
Document in this index.
Document at the n
th position.
docNum.
IndexWriter.merge(org.apache.lucene.index.MergePolicy.OneMerge)
double value to a sortable signed long.
ReentrantLock to disable lockingDocIdSet instance for easy use, e.g.
TermPositionVector that stores only position information.
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0
Similarity.encodeNormValue(float) instead.
end() on the
input TokenStream.
NOTE: Be sure to call super.end() first when overriding this method.
TokenStream.incrementToken() returned false
(using the new TokenStream API).
AlreadyClosedException if this IndexWriter has been
closed.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
ValueSourceQuery.equals(Object).
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
\.
doc scored against
query.
doc scored against
weight.
doc scored against
weight.
doc scored against
query.
Scorer,
but it is needed by SpanWeight to build an explanation.
IndexWriter.expungeDeletes(), except you can
specify whether the call should block until the
operation completes.
instead
instead
Field.FieldCache).FieldCache.Filter that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.TopFieldCollector.FieldCache.getBytes(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending valueFieldCache.getDoubles(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending valueFieldCache.getFloats(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending valueFieldCache.getInts(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending valueFieldCache.getLongs(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending valueFieldCache.getShorts(org.apache.lucene.index.IndexReader, java.lang.String) and sorts by ascending valueFieldComparator for custom field sorting.SpanQuery objects participate in composite
single-field SpanQueries by 'lying' about their search field.null as its
detail message.
FileFilter, the FieldSelector allows one to make decisions about
what Fields get loaded on a Document by IndexReader.document(int,org.apache.lucene.document.FieldSelector)TermVectorEntrys
This is not thread-safe.CharacterUtils.CharacterBuffer with characters read from the given
reader Reader.
FilterIndexReader contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.TermDocs implementations.TermEnum implementations.TermPositions implementations.CachingWrapperFilter if you wish to cache
Filters.MergePolicy.MergeSpecification if so.
ChecksumIndexOutput.prepareCommit()
CheckIndex.checkIndex().
Tokenizer chain,
eg from one TokenFilter to another one.FieldCache
using getFloats() and makes those values
available as other numeric types, casting as needed.float value to a sortable signed int.
numBytes.
minimumSimilarity to term.
FuzzyQuery(term, minimumSimilarity, prefixLength, Integer.MAX_VALUE).
FuzzyQuery(term, minimumSimilarity, 0, Integer.MAX_VALUE).
FuzzyQuery(term, 0.5f, 0, Integer.MAX_VALUE).
reader which share a prefix of
length prefixLength with term and which have a fuzzy similarity >
minSimilarity.
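A brief sketch of the constructor shorthands described above (the field and term text are illustrative):
  Term term = new Term("contents", "lucene");
  FuzzyQuery q1 = new FuzzyQuery(term, 0.6f, 2); // same as FuzzyQuery(term, 0.6f, 2, Integer.MAX_VALUE)
  FuzzyQuery q2 = new FuzzyQuery(term);          // same as FuzzyQuery(term, 0.5f, 0, Integer.MAX_VALUE)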
len chars of text
starting at off
CharSequence
true if bit is one and
false if it is zero.
SetOnce.set(Object).
bit to true, and
returns true if bit was already set
Float.NaN if this
DocValues instance does not contain any value.
null for numeric fields
Document.setBoost(float).
field as a single byte and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
field as bytes and returns an array of
size reader.maxDoc() of the value each document has in the
given field.
IndexWriter.commit(Map), from current index
segments file.
FieldComparator to use for
sorting.
FieldComparatorSource used for
custom sorting
IndexWriterConfig, cloned
from the IndexWriterConfig passed to
IndexWriter.IndexWriter(Directory, IndexWriterConfig).
CustomScoreProvider that calculates the custom scores
for the given IndexReader.
null if not yet set.
IndexableBinaryStringTools.getDecodedLength(char[], int, int) instead. This
method will be removed in Lucene 4.0
IndexWriterConfig.getDefaultWriteLockTimeout() instead
Directory for the index.
Directory of the index that hit
the exception.
PayloadProcessorProvider.DirPayloadProcessor for the given Directory,
through which PayloadProcessorProvider.DirPayloadProcessors can be obtained for each
Term, or null if none should be used.
DocIdSet enumerating the documents that should be
permitted in search results.
field as integers and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
field as doubles and returns an array of
size reader.maxDoc() of the value each document has in the
given field.
StopFilter.StopFilter(Version, TokenStream, Set) instead
IndexableBinaryStringTools.getEncodedLength(byte[], int, int) instead. This
method will be removed in Lucene 4.0
Document.getFieldable(java.lang.String) instead and cast depending on
data type.
Fieldable name.
Fieldables with the given name.
QueryParser.getFieldQuery(String,String,boolean) instead.
QueryParser.getFieldQuery(String,String,boolean).
Document.getFieldable(java.lang.String) instead and cast depending on
data type.
FSDirectory.getDirectory() instead.
null if a query is wrapped.
field as floats and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
field as floats and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
baseClass in which this method is overridden/implemented
in the inheritance path between baseClass and the given subclass subclazz.
IndexCommit as specified in
IndexWriterConfig.setIndexCommit(IndexCommit) or the default, null
which specifies to open the latest index commit point.
IndexDeletionPolicy specified in
IndexWriterConfig.setIndexDeletionPolicy(IndexDeletionPolicy) or the default
KeepOnlyLastCommitDeletionPolicy.
IndexReader this searches.
FieldCache.setInfoStream(PrintStream)
CharacterUtils implementation according to the given
Version instance.
field as integers and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
field as integers and returns an array of
size reader.maxDoc() of the value each document has in the
given field.
CharacterUtils.CharacterBuffer.getOffset()
field as longs and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
field as longs and returns an array of
size reader.maxDoc() of the value each document has in the
given field.
IndexWriterConfig.getMaxBufferedDeleteTerms() instead
IndexWriterConfig.getMaxBufferedDocs() instead.
LimitTokenCountAnalyzer to limit number of tokens.
ConcurrentMergeScheduler.setMaxMergeCount(int).
LogMergePolicy.getMaxMergeDocs() directly.
Float.NaN if this
DocValues instance does not contain any value.
IndexWriterConfig.getMergedSegmentWarmer() instead.
LogMergePolicy.getMergeFactor() directly.
IndexWriterConfig.getMergePolicy() instead
IndexWriterConfig.getMergeScheduler() instead
MergeScheduler that was set by
IndexWriterConfig.setMergeScheduler(MergeScheduler)
MergePolicy to avoid
selecting merges for segments already being merged.
Float.NaN if this
DocValues instance does not contain any value.
MergeScheduler calls this method
to retrieve the next merge requested by the
MergePolicy
Number, null if not yet initialized.
positionIncrement == 0.
Analyzer.getPositionIncrementGap(java.lang.String), except for
Token offsets instead.
PositionBasedTermVectorMapper.TVPositionInfo.getTerms()) of TermVectorOffsetInfo objects.
IndexWriterConfig.OpenMode set by IndexWriterConfig.setOpenMode(OpenMode).
null if T is String.
FieldCache parser that fits to the given sort type.
PayloadProcessorProvider that is used during segment
merges to process payloads.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
PayloadProcessorProvider.DirPayloadProcessor for the given term.
null if a filter is wrapped.
IndexWriterConfig.getRAMBufferSizeMB() instead.
IndexWriterConfig.setRAMBufferSizeMB(double) if enabled.
IndexInput.readBytes(byte[], int, int).
IndexReader.open(IndexWriter,boolean) instead.
IndexReader.open(IndexWriter,boolean) instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int). You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int) and use
IndexWriter.getReader().
IndexWriter.getReader() has not been called.
IndexWriterConfig.getReaderTermsIndexDivisor() instead.
Searchables this searches.
segments_N) associated
with this commit point.
PriorityQueue.initialize(int) to fill the queue, so
that the code which uses that queue can always assume it's full and only
change the top without attempting to insert any new object.PriorityQueue.lessThan(T, T) should always favor the
non-sentinel values).field as shorts and returns an array
of size reader.maxDoc() of the value each document
has in the given field.
field as shorts and returns an array of
size reader.maxDoc() of the value each document has in the
given field.
IndexWriterConfig.getSimilarity() instead
Similarity implementation used by this
IndexWriter.
SnapshotDeletionPolicy.SnapshotDeletionPolicy(IndexDeletionPolicy, Map) in order to
initialize snapshots at construction.
Class.getResourceAsStream(String)) and adds all words as entries to
a Set.
field and returns
an array of them in natural order, along with an array telling
which element in the term array each document uses.
field and returns an array
of size reader.maxDoc() containing the value each document
has in the given field.
TermFreqVector.
IndexWriterConfig.getTermIndexInterval()
TokenStream
LogMergePolicy.getUseCompoundFile()
IndexWriter.commit(Map) for this commit.
true, if the unmap workaround is enabled.
Class.getResourceAsStream(String)) and adds every line as an entry
to a Set (omitting leading and trailing whitespace).
Class.getResourceAsStream(String)) and adds every line as an entry
to a Set (omitting leading and trailing whitespace).
IndexWriterConfig.getWriteLockTimeout()
ValueSourceQuery.hashCode().
o is equal to this.
log(numDocs/(docFreq+1)) + 1.
Similarity.idfExplain(Term,Searcher,int) by passing
searcher.docFreq(term) as the docFreq.
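Written out in Java, the default idf described above amounts to the following sketch (numDocs and docFreq as in the formula):
  float idf(int docFreq, int numDocs) {
    return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
  }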
true if the lower endpoint is inclusive
true if the lower endpoint is inclusive
true if the lower endpoint is inclusive
true if the upper endpoint is inclusive
true if the upper endpoint is inclusive
true if the lower endpoint is inclusive
true if the lower endpoint is inclusive
true if the upper endpoint is inclusive
true if the upper endpoint is inclusive
true if the upper endpoint is inclusive
IndexWriter) use this method to advance the stream to
the next token.
.f + a number and
from .s + a number.
IndexDeletionPolicy or IndexReader.index commits.indexOf(int) but searches for a number of terms
at the same time.
true if an index exists at the specified directory.
matchesExtension), as well as generating file names from a segment name,
generation and extension (
fileNameFromGeneration,
segmentFileName).Directory.getTerms at which the term with the specified
term appears.
IndexReader.getFieldNames(FieldOption).IndexWriter using the given
matchVersion.
IndexWriter using the given
matchVersion.
IndexWriter using the given
config.
IndexWriter creates and maintains an index.IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead
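A minimal sketch of the replacement constructor, assuming an existing Directory and Analyzer (the Version constant is illustrative):
  IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_31, analyzer)
      .setOpenMode(IndexWriterConfig.OpenMode.CREATE);
  IndexWriter writer = new IndexWriter(directory, conf);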
conf.
IndexWriter.getReader() has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits.LimitTokenCountAnalyzer instead.IndexWriter.Version as well as the default Analyzer.
IndexWriter:
IndexWriterConfig.OpenMode.CREATE - creates a new index or overwrites an existing one.Comparator.
Comparator.
List using the Comparator.
List in natural order.
NoMergeScheduler
Lock is stateless.
FieldCache
using getInts() and makes those values
available as other numeric types, casting as needed.shift bits.
CachingWrapperFilter, if this DocIdSet
should be cached without copying it into a BitSet.
Similarity.coord(int,int) is disabled in
scoring for this query instance.
IndexFileNames.STORE_INDEX_EXTENSIONS).
true iff the current token is a keyword, otherwise
false.
true iff the current token is a keyword, otherwise
false.
true iff the index in the named directory is
currently locked.
ASCIIFoldingFilter
which covers a superset of Latin 1.
This class is included for use with existing
indexes and will be removed in a future release (possibly Lucene 4.0).baseClass and the given subclass subclazz.
SEPARATE_NORMS_EXTENSION + "[0-9]+".
IndexReader.getTermFreqVector(int,String).
IndexReader.getTermFreqVector(int,String).
CharTokenizer.isTokenChar(int) instead. This method will be
removed in Lucene 4.0.
Character.isLetter(int).
Character.isWhitespace(int).
CharArraySet.CharArraySetIterator depending on the version used:
if matchVersion ≥ 3.1, it returns char[] instances in this set.
if matchVersion is 3.0 or older, it returns newly
allocated Strings, so this method violates the Set interface.
Iterator of contained segments in order.
DocIdSetIterator to access the set.
IndexDeletionPolicy implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.CharArraySet view on the map's keys.
KeywordAttribute.KeywordAttribute.
KeywordAttribute.
LengthFilter.LengthFilter(boolean, TokenStream, int, int) instead.
fieldName matching
less than or equal to upperTerm.
AttributeSource.
AttributeSource.AttributeFactory.
LetterTokenizer.LetterTokenizer(Version, Reader) instead. This
will be removed in Lucene 4.0.
LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader) instead.
This will be removed in Lucene 4.0.
LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
IndexWriter.DEFAULT_MAX_FIELD_LENGTH
Lock.obtain(long) to try
forever to obtain the lock.
Lock.obtain(long) waits, in milliseconds,
in between attempts to acquire the lock.
write.lock
could not be acquired.write.lock
could not be released.VerifyingLockFactory.LogMergePolicy that measures size of a
segment as the total byte size of the segment's files.LogMergePolicy that measures size of a
segment as the number of documents (not taking deletions
into account).MergePolicy that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.shift bits.
LowerCaseFilter.LowerCaseFilter(Version, TokenStream) instead.
AttributeSource.
AttributeSource.AttributeFactory.
LowerCaseTokenizer.LowerCaseTokenizer(Reader) instead. This will be
removed in Lucene 4.0.
LowerCaseTokenizer.LowerCaseTokenizer(AttributeSource, Reader)
instead. This will be removed in Lucene 4.0.
LowerCaseTokenizer.LowerCaseTokenizer(AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
SimpleAnalyzer.
Lock.
StopFilter.makeStopSet(Version, String...) instead
StopFilter.makeStopSet(Version, List) instead
StopFilter.makeStopSet(Version, String[], boolean) instead;
StopFilter.makeStopSet(Version, List, boolean) instead
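For example, the Version-aware variants might be used as in this sketch (the stop words and the upstream tokenStream are illustrative):
  Set<?> stopWords = StopFilter.makeStopSet(Version.LUCENE_31, "a", "an", "the");
  TokenStream filtered = new StopFilter(Version.LUCENE_31, tokenStream, stopWords);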
map.
FieldSelector based on a Map of field names to FieldSelectorResultsCharFilter that applies the mappings
contained in a NormalizeCharMap to the character
stream, and correcting the resulting changes to the
offsets.CharStream.
Reader.
IndexWriter.getNextMerge().
IndexWriter uses an instance
implementing this interface to execute the merges
selected by a MergePolicy.Comparator.
Comparator.
List using the Comparator.
List in natural order.
SorterTemplate.insertionSort(int,int).
ConcurrentMergeScheduler.verbose() was
called and returned true.
Directory implementation that uses
mmap for reading, and FSDirectory.FSIndexOutput for writing.NativeFSLockFactory.
fieldName matching
greater than or equal to lowerTerm.
Collector which allows running a search with several
Collectors.MultiPhraseQuery.add(Term[]).TermPositions for multiple Terms as
a single TermPositions.MultipleTermPositions instance.
Query that matches documents
containing a subset of terms provided by a FilteredTermEnum enumeration.BooleanClause.Occur.SHOULD clause in a BooleanQuery, but the scores
are only computed as the boost.size terms.
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.size terms.
MultiTermQuery, that exposes its
functionality as a Filter.MultiTermQuery as a Filter.
CustomScoreQuery.toString(String).
ThreadFactory implementation that accepts the name prefix
of the created threads as a constructor argument.NamedThreadFactory instance
LockFactory using native OS file
locks.NearSpansOrdered, but for the unordered case.FieldCache.getBytes(IndexReader,String).
FieldCache.getBytes(IndexReader,String,FieldCache.ByteParser).
CharacterUtils.CharacterBuffer and allocates a char[]
of the given bufferSize.
FieldCache.getDoubles(IndexReader,String).
FieldCache.getDoubles(IndexReader,String,FieldCache.DoubleParser).
NumericRangeFilter, that filters a double
range using the given precisionStep.
NumericRangeFilter, that filters a double
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
NumericRangeQuery, that queries a double
range using the given precisionStep.
NumericRangeQuery, that queries a double
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
FieldCache.getFloats(IndexReader,String).
FieldCache.getFloats(IndexReader,String,FieldCache.FloatParser).
NumericRangeFilter, that filters a float
range using the given precisionStep.
NumericRangeFilter, that filters a float
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
NumericRangeQuery, that queries a float
range using the given precisionStep.
NumericRangeQuery, that queries a float
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
FieldCache.getInts(IndexReader,String).
FieldCache.getInts(IndexReader,String,FieldCache.IntParser).
NumericRangeFilter, that filters an int
range using the given precisionStep.
NumericRangeFilter, that filters an int
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
NumericRangeQuery, that queries an int
range using the given precisionStep.
NumericRangeQuery, that queries an int
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
FieldCache.getLongs(IndexReader,String).
FieldCache.getLongs(IndexReader,String,FieldCache.LongParser).
NumericRangeFilter, that filters a long
range using the given precisionStep.
NumericRangeFilter, that filters a long
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
NumericRangeQuery, that queries a long
range using the given precisionStep.
NumericRangeQuery, that queries a long
range using the default precisionStep NumericUtils.PRECISION_STEP_DEFAULT (4).
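A short sketch of these factory methods; the field names, bounds and precisionStep are illustrative only:
  NumericRangeQuery<Double> priceQuery =
      NumericRangeQuery.newDoubleRange("price", 4, 10.0, 20.0, true, true);
  NumericRangeFilter<Long> timeFilter =
      NumericRangeFilter.newLongRange("timestamp", 0L, 1000L, true, false);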
RAMFile for storing data.
FieldCache.getShorts(IndexReader,String).
FieldCache.getShorts(IndexReader,String,FieldCache.ShortParser).
TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream.
TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream
that pass the supplied filter.
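A rough sketch of wiring up a sink, assuming a Reader supplies the text:
  TeeSinkTokenFilter tee = new TeeSinkTokenFilter(new WhitespaceTokenizer(Version.LUCENE_31, reader));
  TeeSinkTokenFilter.SinkTokenStream sink = tee.newSinkTokenStream();
  // consume "tee" first (e.g. as a field's token stream); "sink" then replays the same tokens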
FieldCache.getStringIndex(org.apache.lucene.index.IndexReader, java.lang.String).
Thread
TopDocs instance containing the given results.
DocIdSetIterator.NO_MORE_DOCS if there are no more docs in the
set.FSDirectory implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.NativeFSLockFactory.
NoMergePolicy which indicates the index does not use
compound files.
DocIdSetIterator.nextDoc(), DocIdSetIterator.advance(int) and
DocIdSetIterator.docID() it means there are no more docs in the iterator.
IndexDeletionPolicy which keeps all index commits around, never
deleting them.LockFactory to disable locking entirely.MergePolicy which never returns merges to execute (hence its
name).MergeScheduler which never executes any merges.CharTokenizer.normalize(int) instead. This method will be
removed in Lucene 4.0.
Character.toLowerCase(int).
MappingCharFilter.NumericUtils instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use NumericField
and NumericRangeQuery.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).NumericUtils, e.g.
NumericUtils, e.g.
NumericUtils.intToPrefixCoded(int), e.g.
NumericUtils.longToPrefixCoded(long), e.g.
Field that enables indexing
of numeric values for efficient range filtering and
sorting.precisionStep
NumericUtils.PRECISION_STEP_DEFAULT (4).
precisionStep
NumericUtils.PRECISION_STEP_DEFAULT (4).
precisionStep.
precisionStep.
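An illustrative sketch of indexing a numeric value with the default precisionStep (the field name and value are made up):
  Document doc = new Document();
  doc.add(new NumericField("price").setDoubleValue(9.99));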
NumericField.Filter that only accepts numeric values within
a specified range.Query that matches numeric values within a
specified range.TokenStream
for indexing numeric values that can be used by NumericRangeQuery or NumericRangeFilter.precisionStep
NumericUtils.PRECISION_STEP_DEFAULT (4).
precisionStep.
precisionStep using the given AttributeSource.
precisionStep using the given
AttributeSource.AttributeFactory.
NumericUtils.splitIntRange(org.apache.lucene.util.NumericUtils.IntRangeBuilder, int, int, int).NumericUtils.splitLongRange(org.apache.lucene.util.NumericUtils.LongRangeBuilder, int, long, long).IndexWriter.
IndexCommit.
IndexDeletionPolicy.
IndexDeletionPolicy.
IndexDeletionPolicy.
IndexDeletionPolicy.
FSDirectory.open(File), but allows you to
also specify a custom LockFactory.
IndexWriter.optimize(), except you can specify
whether the call should block until the optimize
completes.
IndexWriter.optimize(int), except you can
specify whether the call should block until the
optimize completes.
FieldCache using getStringIndex().org.apache.lucene.analysis.standard package contains three
fast grammar-based tokenizers constructed with JFlex:CollationKeyFilter
converts each token into its binary CollationKey using the
provided Collator, and then encode the CollationKey
as a String using
IndexableBinaryStringTools, to allow it to be
stored as an index term.Document for indexing and searching.IndexSearcher, instead.Searchable which searches searchables with the default
executor service (a cached thread pool).
Searchable which searches searchables with the specified ExecutorService.
Query.
CheckIndex.checkIndex(List)) was called with non-null
argument).
PayloadProcessorProvider.PayloadProcessor.processPayload(byte[], int, int).
SpanNearQuery except that it factors
in the value of the payloads located at each of the positions where the
TermSpans occurs.PayloadProcessorProvider.DirPayloadProcessor to be used for a Directory.PayloadProcessorProvider.DirPayloadProcessor for a given Term which allows
processing the payloads of different terms differently.SpanTermQuery except that it factors
in the value of the payload located at each of the positions where the
Term occurs.SnapshotDeletionPolicy which adds a persistence layer so that
snapshots can be maintained across the life of an application.PersistentSnapshotDeletionPolicy wraps another
IndexDeletionPolicy to enable flexible snapshotting.
TokenStream, used in phrase
searching.Collector implementation which wraps another
Collector and makes sure only documents with
scores > 0 are collected.NumericField, NumericTokenStream,
NumericRangeQuery, and NumericRangeFilter as default
prefix.
PayloadFunction to score the payloads, but
can be overridden to do other things.
1/sqrt(sumOfSquaredWeights).
query.
Comparator.
Comparator.
List using the Comparator.
List in natural order.
Directory implementation.Directory.
RAMDirectory instance from a different
Directory implementation.
IndexOutput implementation.asList().subList(first, last)
instead.
IndexReaders.null for numeric fields
Directory.
AttributeImpl/AttributeSource
passing the class name of the Attribute, a key and the actual value.
AttributeImpl.reflectWith(AttributeReflector) method:
iff prependAttClass=true: "AttributeClass#key=value,AttributeClass#key=value"
iff prependAttClass=false: "key=value,key=value"
AttributeSource.reflectWith(AttributeReflector) method:
iff prependAttClass=true: "AttributeClass#key=value,AttributeClass#key=value"
iff prependAttClass=false: "key=value,key=value"
AttributeReflector.
AttributeReflector.
Token.clear(),
CharTermAttributeImpl.copyBuffer(char[], int, int),
Token.setStartOffset(int),
Token.setEndOffset(int),
Token.setType(java.lang.String)
Token.clear(),
CharTermAttributeImpl.copyBuffer(char[], int, int),
Token.setStartOffset(int),
Token.setEndOffset(int)
Token.setType(java.lang.String) on Token.DEFAULT_TYPE
Token.clear(),
CharTermAttributeImpl.append(CharSequence),
Token.setStartOffset(int),
Token.setEndOffset(int)
Token.setType(java.lang.String)
Token.clear(),
CharTermAttributeImpl.append(CharSequence, int, int),
Token.setStartOffset(int),
Token.setEndOffset(int)
Token.setType(java.lang.String)
Token.clear(),
CharTermAttributeImpl.append(CharSequence),
Token.setStartOffset(int),
Token.setEndOffset(int)
Token.setType(java.lang.String) on Token.DEFAULT_TYPE
Token.clear(),
CharTermAttributeImpl.append(CharSequence, int, int),
Token.setStartOffset(int),
Token.setEndOffset(int)
Token.setType(java.lang.String) on Token.DEFAULT_TYPE
IndexReader.ReaderFinishedListener.
IndexReader.reopen(), except you can change the
readOnly of the original reader.
TeeSinkTokenFilter.SinkTokenStream.reset().
TokenStream reuse.ReusableAnalyzerBase.TokenStreamComponents instance.
ReusableAnalyzerBase.TokenStreamComponents instance.
ReusableAnalyzerBase.createComponents(String, Reader) to obtain an
instance of ReusableAnalyzerBase.TokenStreamComponents.
FieldCache using getStringIndex()
and reverses the order.IndexWriter without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called).
Lock.With.doBody() while lock is obtained.
Scorer which wraps another scorer and caches the score of the
current document.Scorer.Scorer(Weight) instead.
Scorer.Scorer(Weight) instead.
Scorer which scores documents in/out-of order according
to scoreDocsInOrder.
BooleanClause.Occur.SHOULD clause in a
BooleanQuery, and keeps the scores as computed by the
query.
BooleanClause.Occur.SHOULD clause in a
BooleanQuery, and keeps the scores as computed by the
query.
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
n
hits for query.
n
hits for query, applying filter if non-null.
IndexSearcher.search(Weight, Filter, int, Sort), but you choose
whether or not the fields in the returned FieldDoc instances should
be set by specifying fillFields.
Searchable in its own thread and waits for each search to complete, then merges
the results back together.
n
hits for query, applying filter if non-null.
n
hits for query.
TermEnum.
CheckIndex.Status.SegmentInfoStatus instances, detailing status of each segment.
MergeScheduler that simply does each merge
sequentially, using the current thread.bit to one.
true to allow leading wildcard characters.
Field names to load and the Set of Field names to load lazily.
b.
IndexSearcher.search(Query,Filter,int,Sort)).
IndexWriterConfig.setDefaultWriteLockTimeout(long) instead
double value.
double value.
true, this TokenFilter will preserve
positions of the incoming tokens (ie, accumulate and
set position increments of the removed tokens).
true to enable position increments in result query.
float value.
float value.
IndexDeletionPolicy implementation to be
specified.
IndexWriter to use by this merge policy.
FieldCacheSanityChecker.
int value.
int value.
true.
true.
long value.
long value.
IndexWriterConfig.setMaxBufferedDeleteTerms(int) instead.
IndexWriterConfig.setMaxBufferedDocs(int) instead.
Integer.MAX_VALUE for
64 bit JVMs and 256 MiBytes for 32 bit JVMs) used for memory mapping.
LimitTokenCountAnalyzer instead. Note that the
behavior slightly changed: the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields though.
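A minimal sketch of the suggested replacement, with an assumed delegate analyzer and token limit:
  Analyzer limited = new LimitTokenCountAnalyzer(new StandardAnalyzer(Version.LUCENE_31), 10000);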
LogMergePolicy.setMaxMergeDocs(int) directly.
IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer)
instead.
LogMergePolicy.setMergeFactor(int) directly.
IndexWriterConfig.setMergePolicy(MergePolicy) instead.
MergePolicy is invoked whenever there are changes to the
segments in the index.
IndexWriterConfig.setMergeScheduler(MergeScheduler) instead
MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT
when creating a PrefixQuery, WildcardQuery or RangeQuery.
IndexReader.setNorm(int, String, byte) instead, encoding the
float to byte with your Similarity's Similarity.encodeNormValue(float).
This method will be removed in Lucene 4.0
SetOnce.set(Object).
SetOnce.set(Object) is called more than once.IndexWriterConfig.OpenMode of the index.
PayloadProcessorProvider to use when merging payloads.
i as pivot value
IndexWriterConfig.setRAMBufferSizeMB(double) instead.
IndexInput.readBytes(byte[], int, int).
IndexWriter.getReader().
IndexWriterConfig.setReaderTermsIndexDivisor(int) instead.
IndexWriter.getReader().
Collector.collect(int).
IndexWriterConfig.setSimilarity(Similarity) instead
Similarity implementation used by this IndexWriter.
IndexWriterConfig.setTermIndexInterval(int)
LogMergePolicy.setUseCompoundFile(boolean).
IndexInput, that is
mentioned in the bug report.
IndexWriterConfig.setWriteLockTimeout(long) instead
FieldCache
using getShorts() and makes those values
available as other numeric types, casting as needed.Similarity or DefaultSimilarity instead.Similarity that delegates all methods to another.
Analyzer that filters LetterTokenizer
with LowerCaseFilterSimpleAnalyzer
SimpleAnalyzer.SimpleAnalyzer(Version) instead
FSDirectory
using java.io.RandomAccessFile.NativeFSLockFactory.
LockFactory using File.createNewFile().LockFactory for a single in-process instance,
meaning all locking will take place through this one instance.SingleTermEnum.
includeDocStores is true), or the size of all files except the store
files otherwise.
TermDocs.skipTo(int).
1 / (distance + 1).
IndexDeletionPolicy that wraps around any other
IndexDeletionPolicy and adds the ability to hold and later release
snapshots of an index.SnapshotDeletionPolicy wraps another IndexDeletionPolicy to
enable flexible snapshotting.
int back to a float.
long back to a double.
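For example, the sortable encodings round-trip as in this short sketch:
  long sortable = NumericUtils.doubleToSortableLong(3.14);
  double restored = NumericUtils.sortableLongToDouble(sortable); // 3.14 again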
TermVectorEntrys.FieldCache.Parser.
FieldCache.Parser.
match whose end
position is less than or equal to end.
MultiTermQuery as a SpanQuery,
so it can be nested within other SpanQuery classes.BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.size terms.
include which
have no overlap with spans from exclude.
YES, rejected NO,
or rejected and enumeration should advance to the next document NO_AND_ADVANCE.SpanPositionCheckQuery.getMatch() lies between a start and end positionquery.
IndexReader
tries to make changes to the index (via IndexReader.deleteDocument(int), IndexReader.undeleteAll() or IndexReader.setNorm(int, java.lang.String, byte))
but changes have already been committed to the index
since this reader was instantiated.StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of
English stop words.StandardAnalyzer.STOP_WORDS_SET).
StandardTokenizer.StandardFilter.StandardFilter(Version, TokenStream) instead.
StandardTokenizer.
AttributeSource.
AttributeSource.AttributeFactory
LetterTokenizer with LowerCaseFilter and StopFilter.StopAnalyzer.ENGLISH_STOP_WORDS_SET.
StopFilter.StopFilter(Version, TokenStream, Set, boolean) instead
StopFilter.StopFilter(Version, TokenStream, Set) instead
NumberTools.longToString(long)
CharArraySet.iterator(), which returns char[] instances.
timeToString or
dateToString back to a time, represented as a
Date object.
NumberTools.longToString(long) back to a
long.
timeToString or
dateToString back to a time, represented as the
number of milliseconds since January 1, 1970, 00:00:00 GMT.
n within its
sub-index.
n in the
array used to construct this searcher/reader.
n in the array
used to construct this searcher.
i and j in your data
Directory.sync(Collection) instead.
For easy migration you can change your code to call
sync(Collections.singleton(name))
AttributeSource states to store in the sink.CharTermAttribute instead.term.
TermDocs enumerator.
term.
TermPositions enumerator.
TermFreqVector to provide additional information about
positions in which each of the terms is found.t.
collator parameter will cause every single
index Term in the Field referenced by lowerTerm and/or upperTerm to be
examined.
lowerTerm
but less/equal than upperTerm.
lowerTerm but less/equal than upperTerm.
lowerTerm
but less/equal than upperTerm.
TermVectorEntrys first by frequency and then by
the term (case-sensitive)IndexReader.getTermFreqVector(int,String).TermPositionVector's
offset information.sqrt(freq).
TimeLimitingCollector is used to timeout search requests that
take longer than the maximum allowed search time limit.Collector with a specified timeout.
Token as instance for the basic attributes
and for all other attributes calls the given delegate factory.Token as instance for the basic attributes
and for all other attributes calls the given delegate factory.
Token as implementation for the basic
attributes and return the default impl (with "Impl" appended) for all other
attributes.
ReusableAnalyzerBase.createComponents(String, Reader) to obtain an
instance of ReusableAnalyzerBase.TokenStreamComponents and returns the sink of the
components.
TokenStream enumerates the sequence of tokens, either from
Fields of a Document or from query text.Attribute instances.
NumericTokenStream for indexing the numeric value.
Searcher.search(Query,Filter,int) and Searcher.search(Query,int).TopDocs output.Collector that sorts by SortField using
FieldComparators.Searcher.search(Query,Filter,int,Sort).Collector implementation that collects the top-scoring hits,
returning them as a TopDocs.size terms.
CharSequence interface.
field assumed to be the
default field and omitted.
Object.toString().Integer.MAX_VALUE.
true, if this platform supports unmapping mmapped files.
CharArrayMap.
CharArraySet.
term and then adding the new
document.
term and then adding the new
document.
MergePolicy is used for upgrading all existing segments of
an index when calling IndexWriter.optimize().MergePolicy and intercept optimize requests to
only upgrade segments written with previous Lucene versions.
ValueSource.LockFactory that wraps another LockFactory and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).baseClass and method declaration.
Scorer subclasses should implement this method if the subclass
itself contains multiple scorers to support gathering details for
sub-scorers via Scorer.ScorerVisitor
WhitespaceTokenizer.WhitespaceAnalyzer
WhitespaceAnalyzer.WhitespaceAnalyzer(Version) instead
AttributeSource.
AttributeSource.AttributeFactory.
WhitespaceTokenizer.WhitespaceTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0.
WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0.
WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
WildcardTermEnum.
Collectors with a MultiCollector.
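A brief sketch, assuming two existing collectors and a searcher:
  Collector multi = MultiCollector.wrap(firstCollector, secondCollector);
  searcher.search(query, multi);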
name in Directory
d, in a format that can be read by the constructor BitVector.BitVector(Directory, String).
IndexWriterConfig.WRITE_LOCK_TIMEOUT instead
IndexOutput.writeString(java.lang.String)
IndexOutput.writeString(java.lang.String)