|
||||||||||
PREV NEXT | FRAMES NO FRAMES |
FilteringTokenFilter.incrementToken()
.
AttributeSource
shall be stored
in the sink.
SpanQuery
.
true
if this collector does not
require the matching docIDs to be delivered in int sort
order (smallest to largest) to Collector.collect(int)
.
FilteredTermEnum.setEnum(org.apache.lucene.index.TermEnum)
BytesRef
BytesRef
with a pre-calculated hash code.
IndexWriter.getAnalyzer()
.
CompoundFileWriter.addFile(String)
, only for files that are found in an
external Directory
.
IndexWriter.addIndexes(Directory...)
instead
IndexReader.ReaderFinishedListener
.
TeeSinkTokenFilter.SinkTokenStream
created by another TeeSinkTokenFilter
to this one.
TermVectorEntry
.
String
to this character sequence.
StringBuilder
to this character sequence.
CharTermAttribute
to this character sequence.
List
view.
Set
view.
AttributeSource
.AttributeSource
or AttributeImpl
.AttributeImpl
s,
and methods to add and get them.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY
.
AttributeSource.AttributeFactory
for creating new Attribute
instances.
AttributeImpl
s.CharFilter
.n
bits.
name
in Directory
d
, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String)
method.
BooleanQuery.getMaxClauseCount()
clauses.char[]
buffer size)
for encoding int
values.
char[]
buffer size)
for encoding long
values.
IndexInput
.IndexOutput
.FieldCache
using getBytes()
and makes those values
available as other numeric types, casting as needed.BytesRefHash
is a special purpose hash-map like data-structure
optimized for BytesRef
instances.BytesRefHash
with a ByteBlockPool
using a
ByteBlockPool.DirectAllocator
.
BytesRefHash
BytesRefHash
BytesRef
exceeds the BytesRefHash
limit of
ByteBlockPool.BYTE_BLOCK_SIZE
-2.BytesRefHash.BytesStartArray
that tracks all memory allocation using an AtomicLong
instance.ByteBlockPool
for the given ord
AtomicLong
reference holding the number of bytes used by this
BytesRefHash.BytesStartArray
.
FieldCacheSource
, already knowing that cache and field are equal.
FieldCacheSource
, without the hash-codes of the field
and the cache (those are taken care of elsewhere).
CachingWrapperFilter.DeletesMode.RECACHE
.
CharacterUtils
provides a unified interface to Character-related
operations to implement backwards compatible character operations based on a
Version
instance.CharacterUtils.fill(CharacterBuffer, Reader)
.CharArraySet.CharArraySet(Version, int, boolean)
instead
CharArraySet.CharArraySet(Version, Collection, boolean)
instead
char[]
instances.CharsRef
initialized an empty array zero-length
CharsRef
initialized with an array of the given
capacity
CharsRef
initialized with the given array, offset and
length
CharsRef
initialized with the given Strings character
array
CharsRef
and copies the contents of the source into
the new instance.
CharStream.correctOffset(int)
functionality over Reader
.CharTokenizer
instance
CharTokenizer
instance
CharTokenizer
instance
CharTokenizer.CharTokenizer(Version, Reader)
instead. This will be
removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource, Reader)
instead. This will be
removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be
removed in Lucene 4.0.
CheckIndex.Status
instance detailing
the state of the index.
CheckIndex.Status
instance detailing
the state of the index.
CheckIndex.checkIndex()
detailing the health and status of the index.IllegalStateException
if it is.
ClassicTokenizer
with ClassicFilter
, LowerCaseFilter
and StopFilter
, using a list of
English stop words.ClassicAnalyzer.STOP_WORDS_SET
).
ClassicTokenizer
.ClassicTokenizer
.
AttributeSource
.
AttributeSource.AttributeFactory
bit
to zero.
BytesRefHash.BytesStartArray
and returns the cleared instance.
BytesRef
which maps to the given BytesRef
AttributeImpl.clear()
on each Attribute implementation.
AttributeImpl
instances returned in a new
AttributeSource
instance.
CharSequence
.
KeywordTokenizer
with CollationKeyFilter
.CollationKey
, and then
encodes the CollationKey with IndexableBinaryStringTools
, to allow
it to be stored as an index term.Collector.collect(int)
on the decorated Collector
unless the allowed time has passed, in which case it throws an exception.
TwoPhaseCommit.commit()
, but takes an additional commit data to be included
w/ the commit.
i
and j
of you data.
NoMergePolicy
which indicates the index uses compound
files.
state.getBoost()*lengthNorm(numTerms)
, where
numTerms
is FieldInvertState.getLength()
if DefaultSimilarity.setDiscountOverlaps(boolean)
is false, else it's FieldInvertState.getLength()
- FieldInvertState.getNumOverlap()
.
FieldInvertState
).
MergeScheduler
that runs each merge using a
separate thread.MultiTermQuery.ConstantScoreAutoRewrite
, with ConstantScoreAutoRewrite.setTermCountCutoff(int)
set to
ConstantScoreAutoRewrite.DEFAULT_TERM_COUNT_CUTOFF
and ConstantScoreAutoRewrite.setDocCountPercent(double)
set to
ConstantScoreAutoRewrite.DEFAULT_DOC_COUNT_PERCENT
.
MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE
except
scores are not computed.
ScoringRewrite.SCORING_BOOLEAN_QUERY_REWRITE
except
scores are not computed.
TeeSinkTokenFilter
passes all tokens to the added sinks
when itself is consumed.
len
chars of text
starting at off
are in the set
CharSequence
is in the set
len
chars of text
starting at off
are in the CharArrayMap.keySet
CharSequence
is in the CharArrayMap.keySet
overlap / maxOverlap
.
CharArrayMap
.
CharArraySet.copy(Version, Set)
instead.
CharArraySet
.
Directory
to under the new
file name dest.
Directory.copy(Directory, String, String)
for every file that
needs copying. You can use the following code:
IndexFileNameFilter filter = IndexFileNameFilter.getFilter(); for (String file : src.listAll()) { if (filter.accept(null, file)) { src.copy(dest, file, file); } }
BytesRef
at the current positions (
ByteBlockPool.byteUpto
across buffer boundaries
CharsRef
referenced content into this instance
starting at offset 0.
numBytes
bytes to the given IndexOutput
.
AttributeSource
to the given target AttributeSource
.
CachingCollector
which does not wrap another collector.
CachingCollector
that wraps the given collector and
caches documents and scores up to the specified RAM threshold.
CachingCollector
that wraps the given collector and
caches documents and scores up to the specified max docs threshold.
TopFieldCollector
from the given
arguments.
TopScoreDocCollector
given the number of hits to
collect and whether documents are scored in order by the input
Scorer
to TopScoreDocCollector.setScorer(Scorer)
.
AttributeImpl
for the supplied Attribute
interface class.
ReusableAnalyzerBase.TokenStreamComponents
instance for this analyzer.
ReusableAnalyzerBase.TokenStreamComponents
used to tokenize all the text in the provided Reader
.
Query
.
Query
.
Weight
implementations.
Subclasses of Searcher should use Searcher.createNormalizedWeight(org.apache.lucene.search.Query)
, instead.
ValueSourceQuery
scores.
CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader)
, if you want
to modify the custom score calculation of a CustomScoreQuery
.IndexReader
.
ValueSourceQuery
.
ValueSourceQuery
.
DateTools
or
NumericField
instead.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0
Similarity.decodeNormValue(byte)
instead.
AttributeImpl
s using the
class name of the supplied Attribute
interface class by appending Impl
to it.
Byte.toString(byte)
Double.toString(double)
Float.toString(float)
TimeLimitingCollector.isGreedy()
.
Integer.toString(int)
Long.toString(long)
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS
instead
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS
instead.
IndexWriterConfig
IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB
instead.
IndexWriterConfig.setReaderPooling(boolean)
.
Short.toString(short)
IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL
instead.
docNum
.
term
indexed.
term
.
IndexWriterConfig.DISABLE_AUTO_FLUSH
instead
i
.
Document
at the n
th position.
t
.
Searchable
's docFreq() in its own thread and waits for each search to complete and merge
the results back together.
term
.
term
.
DocIdSetIterator.NO_MORE_DOCS
if DocIdSetIterator.nextDoc()
or
DocIdSetIterator.advance(int)
were not called yet.
n
th
Document
in this index.
Document
at the n
th position.
docNum
.
IndexWriter.merge(org.apache.lucene.index.MergePolicy.OneMerge)
double
value to a sortable signed long
.
ReentrantLock
to disable lockingUAX29URLEmailTokenizer.TOKEN_TYPES
instead
DocIdSet
instance for easy use, e.g.
TermPositionVector
that stores only position information.
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0
Similarity.encodeNormValue(float)
instead.
end()
on the
input TokenStream.
NOTE: Be sure to call super.end()
first when overriding this method.
TokenStream.incrementToken()
returned false
(using the new TokenStream
API).
AlreadyClosedException
if this IndexWriter has been
closed.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
ValueSourceQuery.equals(Object)
.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
\
.
TwoPhaseCommit.prepareCommit()
all objects and only if all succeed,
it proceeds with TwoPhaseCommit.commit()
.
doc
scored against
query
.
doc
scored against
weight
.
doc
scored against
weight
.
doc
scored against
query
.
Scorer
,
but it is needed by SpanWeight to build an explanation.
IndexWriter.expungeDeletes()
, except you can
specify whether the call should block until the
operation completes.
instead
instead
Field
.FieldCache
).FieldCache
.Filter
that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.TopFieldCollector
.FieldCache.getBytes(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getDoubles(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getFloats(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getInts(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getLongs(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getShorts(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldComparator
for custom field sorting.SpanQuery
objects participate in composite
single-field SpanQueries by 'lying' about their search field.null
as its
detail message.
FileFilter
, the FieldSelector allows one to make decisions about
what Fields get loaded on a Document
by IndexReader.document(int,org.apache.lucene.document.FieldSelector)
TermVectorEntry
s
This is not thread-safe.CharacterUtils.CharacterBuffer
with characters read from the given
reader Reader
.
FilterIndexReader
contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.TermDocs
implementations.TermEnum
implementations.TermPositions
implementations.CachingWrapperFilter
if you wish to cache
Filter
s.MergePolicy.MergeSpecification
if so.
ChecksumIndexOutput.prepareCommit()
FixedBitSet.getBits()
) long[], accessed with an int index,
implementing Bits and DocIdSet.CheckIndex.checkIndex()
.
Tokenizer
chain,
eg from one TokenFilter to another one.FieldCache
using getFloats()
and makes those values
available as other numeric types, casting as needed.float
value to a sortable signed int
.
numBytes
.
minimumSimilarity
to term
.
FuzzyQuery(term, minimumSimilarity, prefixLength, Integer.MAX_VALUE)
.
FuzzyQuery(term, minimumSimilarity, 0, Integer.MAX_VALUE)
.
FuzzyQuery(term, 0.5f, 0, Integer.MAX_VALUE)
.
reader
which share a prefix of
length prefixLength
with term
and which have a fuzzy similarity >
minSimilarity
.
len
chars of text
starting at off
CharSequence
true
if bit
is one and
false
if it is zero.
BytesRef
with the bytes for the given ord.
SetOnce.set(Object)
.
bit
to true, and
returns true if bit was already set
Float.NaN
if this
DocValues instance does not contain any value.
null
for numeric fields
Document.setBoost(float)
.
field
as a single byte and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as bytes and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexWriter.commit(Map)
, from current index
segments file.
FieldComparator
to use for
sorting.
FieldComparatorSource
used for
custom sorting
IndexWriterConfig
, cloned
from the IndexWriterConfig
passed to
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
.
CustomScoreProvider
that calculates the custom scores
for the given IndexReader
.
null
if not yet set.
IndexableBinaryStringTools.getDecodedLength(char[], int, int)
instead. This
method will be removed in Lucene 4.0
IndexWriterConfig.getDefaultWriteLockTimeout()
instead
Directory
for the index.
Directory
of the index that hit
the exception.
PayloadProcessorProvider.DirPayloadProcessor
for the given Directory
,
through which PayloadProcessorProvider.DirPayloadProcessor
s can be obtained for each
Term
, or null
if none should be used.
DocIdSet
enumerating the documents that should be
permitted in search results.
field
as integers and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as doubles and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
StopFilter.StopFilter(Version, TokenStream, Set)
instead
IndexableBinaryStringTools.getEncodedLength(byte[], int, int)
instead. This
method will be removed in Lucene 4.0
Document.getFieldable(java.lang.String)
instead and cast depending on
data type.
Fieldable
name.
Fieldable
s with the given name.
QueryParser.getFieldQuery(String,String,boolean)
instead.
QueryParser.getFieldQuery(String,String,boolean)
.
Document.getFieldable(java.lang.String)
instead and cast depending on
data type.
FSDirectory.getDirectory()
instead.
null
if a query is wrapped.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)
).
baseClass
in which this method is overridden/implemented
in the inheritance path between baseClass
and the given subclass subclazz
.
IndexCommit
as specified in
IndexWriterConfig.setIndexCommit(IndexCommit)
or the default, null
which specifies to open the latest index commit point.
IndexDeletionPolicy
specified in
IndexWriterConfig.setIndexDeletionPolicy(IndexDeletionPolicy)
or the default
KeepOnlyLastCommitDeletionPolicy
/
IndexReader
this searches.
FieldCache.setInfoStream(PrintStream)
CharacterUtils
implementation according to the given
Version
instance.
field
as integers and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as integers and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
CharacterUtils.CharacterBuffer.getOffset()
field
as longs and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as longs and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexWriterConfig.getMaxBufferedDeleteTerms()
instead
IndexWriterConfig.getMaxBufferedDocs()
instead.
LimitTokenCountAnalyzer
to limit number of tokens.
ConcurrentMergeScheduler.setMaxMergeCount(int)
.
LogMergePolicy.getMaxMergeDocs()
directly.
Float.NaN
if this
DocValues instance does not contain any value.
IndexWriterConfig.getMergedSegmentWarmer()
instead.
LogMergePolicy.getMergeFactor()
directly.
IndexWriterConfig.getMergePolicy()
instead
IndexWriterConfig.getMergeScheduler()
instead
MergeScheduler
that was set by
IndexWriterConfig.setMergeScheduler(MergeScheduler)
MergePolicy
to avoid
selecting merges for segments already being merged.
Float.NaN
if this
DocValues instance does not contain any value.
MergeScheduler
calls this method
to retrieve the next merge requested by the
MergePolicy
Number
, null
if not yet initialized.
positionIncrement == 0
.
Analyzer.getPositionIncrementGap(java.lang.String)
, except for
Token offsets instead.
PositionBasedTermVectorMapper.TVPositionInfo.getTerms()
) of TermVectorOffsetInfo objects.
AbstractField.getIndexOptions()
instead.
IndexWriterConfig.OpenMode
set by IndexWriterConfig.setOpenMode(OpenMode)
.
null
for T
is String
}
FieldCache
parser that fits to the given sort type.
PayloadProcessorProvider
that is used during segment
merges to process payloads.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)
).
PayloadProcessorProvider.DirPayloadProcessor
for the given term.
null
if a filter is wrapped.
IndexWriterConfig.getRAMBufferSizeMB()
instead.
IndexWriterConfig.setRAMBufferSizeMB(double)
if enabled.
DataInput.readBytes(byte[], int, int)
.
IndexReader.open(IndexWriter,boolean)
instead.
IndexReader.open(IndexWriter,boolean)
instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int)
. You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int)
and use
IndexWriter.getReader()
.
IndexWriter.getReader()
has not been called.
IndexWriterConfig.getReaderTermsIndexDivisor()
instead.
TieredMergePolicy.setReclaimDeletesWeight(double)
.
Searchable
s this searches.
segments_N
) associated
with this commit point.
PriorityQueue.initialize(int)
to fill the queue, so
that the code which uses that queue can always assume it's full and only
change the top without attempting to insert any new object.PriorityQueue.lessThan(T, T)
should always favor the
non-sentinel values).field
as shorts and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as shorts and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexWriterConfig.getSimilarity()
instead
Similarity
implementation used by this
IndexWriter.
SnapshotDeletionPolicy.SnapshotDeletionPolicy(IndexDeletionPolicy, Map)
in order to
initialize snapshots at construction.
Class.getResourceAsStream(String)
) and adds all words as entries to
a Set
.
field
and returns
an array of them in natural order, along with an array telling
which element in the term array each document uses.
field
and returns an array
of size reader.maxDoc()
containing the value each document
has in the given field.
TermFreqVector
.
IndexWriterConfig.getTermIndexInterval()
TokenStream
field
and returns a bit set at the size of
reader.maxDoc()
, with turned on bits for each docid that
does not have a value for this field.
LogMergePolicy.getUseCompoundFile()
IndexWriter.commit(Map)
for this commit.
true
, if the unmap workaround is enabled.
Class.getResourceAsStream(String)
) and adds every line as an entry
to a Set
(omitting leading and trailing whitespace).
Class.getResourceAsStream(String)
) and adds every line as an entry
to a Set
(omitting leading and trailing whitespace).
IndexWriterConfig.getWriteLockTimeout()
BytesRefHash.BytesStartArray
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
ValueSourceQuery.hashCode()
.
o
is equal to this.
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
log(numDocs/(docFreq+1)) + 1
.
Similarity.idfExplain(Term,Searcher,int)
by passing
searcher.docFreq(term)
as the docFreq.
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
IndexWriter
) use this method to advance the stream to
the next token.
.f
+ a number and
from .s
+ a number.
IndexDeletionPolicy
or IndexReader
.index commits
.indexOf(int)
but searches for a number of terms
at the same time.
true
if an index exists at the specified directory.
matchesExtension
), as well as generating file names from a segment name,
generation and extension (
fileNameFromGeneration
,
segmentFileName
).Directory
.getTerms
at which the term with the specified
term
appears.
IndexReader.getFieldNames(FieldOption)
.IndexWriter
using the given
matchVersion
.
IndexWriter
using the given
matchVersion
.
IndexWriter
using the given
config.
IndexWriter
creates and maintains an index.IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
conf
.
IndexWriter.getReader()
has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits.LimitTokenCountAnalyzer
instead.IndexWriter
.Version
as well as the default Analyzer
.
IndexWriter
:
IndexWriterConfig.OpenMode.CREATE
- creates a new index or overwrites an existing one.DataInput
wrapping a plain InputStream
.Comparator
.
Comparator
.
List
using the Comparator
.
List
in natural order.
NoMergeScheduler
Lock
is stateless.
FieldCache
using getInts()
and makes those values
available as other numeric types, casting as needed.shift
bits.
CachingWrapperFilter
, if this DocIdSet
should be cached without copying it into a BitSet.
Similarity.coord(int,int)
is disabled in
scoring for this query instance.
IndexFileNames.STORE_INDEX_EXTENSIONS
).
true
iff the current token is a keyword, otherwise
false
/
true
iff the current token is a keyword, otherwise
false
/
true
iff the index in the named directory is
currently locked.
ASCIIFoldingFilter
which covers a superset of Latin 1.
This class is included for use with existing
indexes and will be removed in a future release (possibly Lucene 4.0).baseClass
and the given subclass subclazz
.
SEPARATE_NORMS_EXTENSION + "[0-9]+"
.
IndexReader.getTermFreqVector(int,String)
.
IndexReader.getTermFreqVector(int,String)
.
CharTokenizer.isTokenChar(int)
instead. This method will be
removed in Lucene 4.0.
Character.isLetter(int)
.
Character.isWhitespace(int)
.
CharArraySet.CharArraySetIterator
depending on the version used:
if matchVersion
≥ 3.1, it returns char[]
instances in this set.
if matchVersion
is 3.0 or older, it returns new
allocated Strings, so this method violates the Set interface.
Iterator
of contained segments in order.
DocIdSetIterator
to access the set.
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
IndexDeletionPolicy
implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.CharArraySet
view on the map's keys.
KeywordAttribute
.KeywordAttribute
.
KeywordAttribute
.
LengthFilter.LengthFilter(boolean, TokenStream, int, int)
instead.
fieldName
matching
less than or equal to upperTerm
.
AttributeSource
.
AttributeSource.AttributeFactory
.
LetterTokenizer.LetterTokenizer(Version, Reader)
instead. This
will be removed in Lucene 4.0.
LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader)
instead.
This will be removed in Lucene 4.0.
LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
IndexWriter.DEFAULT_MAX_FIELD_LENGTH
Lock.obtain(long)
to try
forever to obtain the lock.
Lock.obtain(long)
waits, in milliseconds,
in between attempts to acquire the lock.
write.lock
could not be acquired.write.lock
could not be released.VerifyingLockFactory
.LogMergePolicy
that measures size of a
segment as the total byte size of the segment's files.LogMergePolicy
that measures size of a
segment as the number of documents (not taking deletions
into account).MergePolicy
that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.shift
bits.
LowerCaseFilter.LowerCaseFilter(Version, TokenStream)
instead.
AttributeSource
.
AttributeSource.AttributeFactory
.
LowerCaseTokenizer.LowerCaseTokenizer(Version, Reader)
instead. This will be
removed in Lucene 4.0.
LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0.
LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
SimpleAnalyzer
.
Lock
.
StopFilter.makeStopSet(Version, String...)
instead
StopFilter.makeStopSet(Version, List)
instead
StopFilter.makeStopSet(Version, String[], boolean)
instead;
StopFilter.makeStopSet(Version, List, boolean)
instead
map
.
FieldSelector
based on a Map of field names to FieldSelectorResult
sCharFilter
that applies the mappings
contained in a NormalizeCharMap
to the character
stream, and correcting the resulting changes to the
offsets.CharStream
.
Reader
.
IndexWriter.getNextMerge()
.
Sort
.
IndexWriter
uses an instance
implementing this interface to execute the merges
selected by a MergePolicy
.Comparator
.
Comparator
.
List
using the Comparator
.
List
in natural order.
SorterTemplate.insertionSort(int,int)
.
ConcurrentMergeScheduler.verbose()
was
called and returned true.
Directory
implementation that uses
mmap for reading, and FSDirectory.FSIndexOutput
for writing.NativeFSLockFactory
.
fieldName
matching
greater than or equal to lowerTerm
.
Collector
which allows running a search with several
Collector
s.MultiPhraseQuery.add(Term[])
.TermPositions
for multiple Term
s as
a single TermPositions
.MultipleTermPositions
instance.
Query
that matches documents
containing a subset of terms provided by a FilteredTermEnum
enumeration.BooleanClause.Occur.SHOULD
clause in a BooleanQuery, but the scores
are only computed as the boost.size
terms.
BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.size
terms.
MultiTermQuery
, that exposes its
functionality as a Filter
.MultiTermQuery
as a Filter.
CustomScoreQuery.toString(String)
.
ThreadFactory
implementation that accepts the name prefix
of the created threads as a constructor argument.NamedThreadFactory
instance
LockFactory
using native OS file
locks.NearSpansOrdered
, but for the unordered case.FieldCache.getBytes(IndexReader,String)
.
FieldCache.getBytes(IndexReader,String,FieldCache.ByteParser)
.
CharacterUtils.CharacterBuffer
and allocates a char[]
of the given bufferSize.
FieldCache.getDoubles(IndexReader,String)
.
FieldCache.getDoubles(IndexReader,String,FieldCache.DoubleParser)
.
NumericRangeFilter
, that filters a double
range using the given precisionStep
.
NumericRangeFilter
, that queries a double
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a double
range using the given precisionStep
.
NumericRangeQuery
, that queries a double
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
FieldCache.getFloats(IndexReader,String)
.