All Classes and Interfaces

Class
Description
Merge policy for testing; it is like an alcoholic.
Filters the incoming reader and makes all documents appear deleted.
Acts like the default codec but with additional asserts.
A collector that asserts that it is used correctly.
A DirectoryReader that wraps all its subreaders with AssertingLeafReader.
Just like the default but with additional asserts.
Helper class that adds some extra checks to ensure correct usage of IndexSearcher and Weight.
Wraps the default KnnVectorsFormat and provides additional assertions.
A FilterLeafReader that can be used to apply additional checks for tests.
Wraps a BinaryDocValues but with additional asserts.
Wraps a Bits but with additional asserts.
Wraps a DocValuesSkipper but with additional asserts.
Wraps a Fields but with additional asserts.
Wraps an ImpactsEnum with additional checks.
Wraps a NumericDocValues but with additional asserts.
Wraps a SortedSetDocValues but with additional asserts.
Wraps a docs enum with additional checks.
Wraps a SortedDocValues but with additional asserts.
Wraps a SortedNumericDocValues but with additional asserts.
Wraps a SortedSetDocValues but with additional asserts.
Wraps a StoredFields but with additional asserts.
Wraps a Terms but with additional asserts.
Wraps a TermVectors but with additional asserts.
Just like the default live docs format but with additional asserts.
An implementation of Matches with additional consistency checks.
Just like the default but with additional asserts.
Just like the default point format but with additional asserts.
Just like the default postings format but with additional asserts.
Assertion-enabled query.
Wraps another Scorable and asserts that scores are reasonable and only requested when positioned.
Wraps a Scorer with additional checks.
Wraps a Similarity with checks for testing.
Just like the default stored fields format but with additional asserts.
Just like the default vectors format but with additional asserts.
Utilities for testing automata.
Lets you retrieve random strings accepted by an Automaton.
Base test case for BitSets.
Base class for Directories that "chunk" the input into blocks.
Abstract class to do basic tests for a compound format.
Extends BaseDocValuesFormatTestCase to add compression checks.
Base class for Directory implementations.
Calls CheckIndex on close.
Base test class for DocIdSets.
Tests primitive queries (i.e., those that rewrite to themselves) to ensure they match the expected set of docs, and that the score of each match is equal to the value of the score's explanation.
Abstract class to do basic tests for a field infos format.
Abstract class to do basic tests for a geospatial impl (high-level fields and queries). NOTE: this test focuses on geospatial indexing and search (distance queries, polygon queries, etc.), not on any underlying storage format or encoding; it merely supplies two hooks for the encoding so that tests can be exact.
Common tests to all index formats.
A directory that tracks created files that haven't been deleted.
A directory that tracks read bytes.
Base class aiming at testing vectors formats.
Abstract class that performs basic testing of a codec's LiveDocsFormat.
Base class for per-LockFactory tests.
Base test case for MergePolicy.
Statistics about bytes written to storage.
Simple mock merge context for tests.
Abstract class to do basic tests for a norms format.
Abstract class to do basic tests for a points format.
Abstract class to do basic tests for a postings format.
Abstract class to do basic tests for a RangeField query.
Base class for range verification.
Supported query relations.
Abstract class to do basic tests for a segment infos format.
Abstract class to do basic tests for a similarity.
Base class aiming at testing stored fields formats.
Base class aiming at testing term vectors formats.
A combination of term vectors options.
Produces a random TokenStream based on the provided terms.
Base class for CheckIndex tests.
Base class for testing tokenstream factories.
Base class for all Lucene unit tests that use TokenStreams.
Attribute that records if it was cleared or not.
Attribute that records if it was cleared or not.
Abstract class to do basic tests for an XY spatial impl (high-level fields and queries).
Query wrapper that reduces the size of max-score blocks to more easily detect problems with the max-score logic.
A BulkScorer-backed scorer.
TokenStream from a canned list of binary (BytesRef-based) tokens.
Represents a binary token.
TokenStream from a canned list of Tokens.
Codec that tries to use as little RAM as possible because it spent all its money on beer.
Utility class for asserting expected hits in tests.
Asserts that the score explanation for every document matching a query corresponds with the true score.
An IndexSearcher that implicitly checks the explanation of every match whenever it executes a search.
Asserts that the Matches from a query is non-null whenever the document it was created for is a hit.
Just collects document ids into a set.
Base test class for testing Unicode collation.
A codec that uses Lucene90CompressingStoredFieldsFormat for its stored fields and delegates to the default codec for everything else.
This codec allows customization of the number of connections made for an HNSW index.
Corrupts one bit of a file after close.
Codec for testing that throws random IOExceptions.
Throws IOException from random TokenStream methods.
CompressionCodec that uses DeflateWithPresetDictCompressionMode.
Disables actual calls to fsync.
A Query wrapper that disables bulk-scoring optimizations.
Helper functions for tests that handle documents.
CompressionCodec that does not compress data, useful for testing.
Draws shapes on the earth's surface and renders them using the very cool http://www.webglearth.org.
This class was automatically generated by generateEmojiTokenizationTest.pl.
Converts numbers to english strings for testing.
Adds extra files/subdirectories when directories are created.
Hackidy-Häck-Hack to cause a test to fail on non-bulk merges.
A RunListener that detects suite/test failures.
CompressionCodec that uses CompressionMode.FAST.
CompressionCodec that uses CompressionMode.FAST_DECOMPRESSION.
A FilterLeafReader that exposes only a subset of fields from the underlying wrapped reader.
A FilterAsynchronousFileChannel contains another AsynchronousFileChannel, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterDirectoryStream contains another DirectoryStream, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterFileChannel contains another FileChannel, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterFileStore contains another FileStore, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterFileSystem contains another FileSystem, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterFileSystemProvider contains another FileSystemProvider, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterInputStream2 contains another InputStream, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterOutputStream2 contains another OutputStream, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterPath contains another Path, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
A FilterSeekableByteChannel contains another SeekableByteChannel, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
Collector that accumulates matching docs in a FixedBitSet.
A MergePolicy that only returns forced merges.
Helper class to test FSTs.
Holds one input/output pair.
Static helper methods for geo tests.
FileSystem that throws exception if file handles in use exceeds a specified limit.
An annotation
Base class for tracking file handles.
CompressionCodec that uses CompressionMode.HIGH_COMPRESSION.
FileSystem that tracks open handles.
Abstract class to do basic tests for a docvalues format.
Minimal port of benchmark's LineDocSource + DocMaker, so tests can enumerate docs from a line file created by benchmark's WriteLineDoc task.
An abstract TokenFilter to make it easier to build graph token filters requiring some lookahead.
Holds all state for a single position; subclass this to record other state at each position.
Customized version of Lucene101PostingsFormat that uses FixedGapTermsIndexWriter.
Backwards compatible test* method provider (public, non-static).
Base class for all Lucene unit tests, JUnit3 or JUnit4 variant; a brief usage sketch follows at the end of this index.
Annotation for tests which exhibit a known issue and are temporarily disabled.
What level of concurrency is supported by the searcher being created.
Annotation for monster tests that require special setup (e.g.
Annotation for tests that should only be run during nightly builds.
Annotation for test classes that should avoid certain codec types (because they are expensive, for example).
Annotation for test classes that should avoid mock filesystem types (because they test a bug that only happens on linux, for example).
Annotation for test classes that should always omit actual fsync calls, preventing them from reaching the filesystem.
Suppresses the default "reproduce with: ant test..." line; your own listener can be added as needed for your build.
Ignore TestRuleLimitSysouts for any suite which is known to print over the default limit of bytes to System.out or System.err.
Marks any suites which are known not to close all the temporary files.
A Consumer that can throw any checked exception.
A runnable that can throw any checked exception.
Annotation for tests that should only be run during weekly builds.
Customized version of Lucene101PostingsFormat that uses VariableGapTermsIndexWriter with a fixed interval, but forcing high docfreq terms to be indexed terms.
Customized version of Lucene101PostingsFormat that uses VariableGapTermsIndexWriter with a fixed interval.
CompressionCodec that uses LZ4WithPresetDictCompressionMode.
Base class for tests checking the Weight.matches(LeafReaderContext, int) implementations.
Encapsulates a term position, start offset, and end offset.
CodecReader wrapper that performs all reads using the merging instance of the index formats.
DirectoryReader wrapper that uses the merge instances of the wrapped CodecReaders.
Shuffles field numbers around to try to trip bugs where field numbers are assumed to always be consistent across segments.
A DirectoryReader that wraps all its subreaders with MismatchedLeafReader
Shuffles field numbers around to try to trip bugs where field numbers are assumed to always be consistent across segments.
Analyzer for testing.
Analyzer for testing that encodes terms as UTF-16 bytes.
The purpose of this CharFilter is to send offsets out of bounds if the analyzer doesn't use correctOffset or does incorrect offset math.
This is a Directory Wrapper that adds methods intended to be used only by unit tests.
Objects that represent fail-able conditions.
Use this when throwing fake IOException, e.g.
Enum for controlling hard disk throttling.
Base class for testing mockfilesystems.
TokenFilter that adds random fixed-length payloads.
Randomly inserts overlapped (posInc=0) tokens with posLength sometimes > 1.
Randomly injects holes (similar to what a stop filter would do).
Used by MockDirectoryWrapper to create an input stream that keeps track of when it's been closed.
Used to create an output stream that will throw an IOException on fake disk full, track max disk space actually used, and maybe throw random IOExceptions.
Mock IndexWriterEventListener to verify invocation of event methods.
A lowercasing TokenFilter.
Wraps a whitespace tokenizer with a filter that sets the first token and odd tokens to posInc=1, and all others to 0, encoding the position as pos: XXX in the payload.
Uses LookaheadTokenFilter to randomly peek at future tokens.
MergePolicy that makes random decisions for testing.
Randomly combines a terms index impl with postings impls.
Wraps a Reader, and can throw random or fixed exceptions, and spoon feed read chars.
Adds a synonym of "dog" for "dogs", and a synonym of "cavy" for "guinea pig".
Adds a synonym of "dog" for "dogs", and a synonym of "cavy" for "guinea pig".
A TokenFilter for testing that removes terms accepted by a DFA.
Tokenizer for testing.
Extension of CharTermAttributeImpl that encodes the term text as UTF-16 bytes instead of as UTF-8 bytes.
TokenFilter that adds random variable-length payloads.
Prints nothing.
A MultiReader that has its own cache key, occasionally useful for testing purposes.
Utility class to do efficient primary-key (only 1 doc contains the given term) lookups by segment, re-using the enums.
Simple utility class to track the current BKD stack based solely on calls to PointValues.IntersectVisitor.compare(byte[], byte[]).
Utility class for sanity-checking queries.
Last minute patches.
Stores all postings data in RAM, but writes a small token (header + single int) to identify which "slot" the index is using in RAM HashMap.
Crawls an object graph to collect RAM usage for testing.
An accumulator of object references.
A Query that adds random approximations to its scorers.
A wrapper around a DocIdSetIterator that matches the same documents, but introduces false positives that need to be verified via TwoPhaseIterator.matches().
Codec that assigns per-field random postings formats.
Silly class that randomizes the indexing experience.
Simple interface that is executed for each TP InfoStream component message.
Helper class extracted from BasePostingsFormatTestCase to exercise a postings format.
Holds one field, term and ord.
Which features to test.
Given the same random seed, this always enumerates the same random postings.
Similarity implementation that randomizes Similarity implementations per-field.
Delegates all operations, even optional ones, to the wrapped directory.
Sneaky: rethrowing checked exceptions as unchecked ones.
Test utility for simple ROT13 cipher (https://en.wikipedia.org/wiki/ROT13).
A suite listener printing a "reproduce string".
An IndexSearcher that always uses the Scorer API, never BulkScorer.
Simple base class for checking search equivalence.
A Directory wrapper that counts the number of times that Lucene may wait for I/O to return serially.
Generates random Cartesian geometry; heavy reuse of GeoTestUtil.
Base test class for simulating distributed search across multiple shards.
An IndexSearcher and associated version (lease).
Thrown when the lease for a searcher has expired.
Gives an unpredictable, but deterministic order to directory listings.
Simple payload filter that sets the payload as pos: XXXX.
Fake resource loader for tests: works if you want to fake reading a single file.
STUniformSplitPostingsFormat with block encoding using ROT13 cipher.
A ConcurrentMergeScheduler that ignores AlreadyClosedException.
A class used for testing BloomFilteringPostingsFormat with a concrete delegate (Lucene41).
Require assertions for Lucene packages.
This rule keeps a count of failed tests (suites) and will result in an AssumptionViolatedException after a given number of failures for all tests following this condition.
This rule will cause the suite to be assumption-ignored if the test class implements a given marker interface and a special property is not set.
Marker interface for nested suites that should be ignored if executed in stand-alone mode.
This test rule serves two purposes: it fails the test if it prints too much to stdout and stderr (tests that chatter too much are discouraged), and it ensures an absolute hard limit on what is written to stdout and stderr, to prevent accidental infinite loops from filling all available disk space with persisted output.
An annotation specifying the limit of bytes per class.
A rule for marking failed tests and suites.
Restore a given set of system properties to a snapshot taken at the beginning of the rule.
Stores the suite name so you can retrieve it from TestRuleStoreClassName.getTestClass().
A SecurityManager that prevents tests from calling System.exit(int).
General utility methods for Lucene unit tests.
Utility class that spawns multiple indexing and searching threads.
Intentionally slow IndexOutput for testing.
Time unit constants for use in annotations.
A Token is an occurrence of a term from the text of a field.
Consumes a TokenStream and outputs the dot (graphviz) string (graph).
UniformSplitPostingsFormat with block encoding using ROT13 cipher.
A TokenFilter that checks the consistency of the tokens (e.g., offsets are consistent with one another).
FileSystem that records all major destructive filesystem activities.
Enforce test naming convention.
Acts like a virus checker on Windows, where random programs may open the files you just wrote in an unfriendly way preventing deletion (e.g.
Utility class for doing vocabulary-based stemming tests.
FileSystem that (imperfectly) acts like windows.
This class was automatically generated by generateJavaUnicodeWordBreakTest.pl from: http://www.unicode.org/Public/12.1.0/ucd/auxiliary/WordBreakTest.txt
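
The entries above are individual building blocks; in practice a test combines several of them. Below is a minimal, hedged sketch of how a typical test might wire them together: LuceneTestCase supplies the randomized infrastructure (newDirectory, random, newSearcher, newTextField), MockAnalyzer and RandomIndexWriter randomize analysis and indexing, and CheckHits verifies the expected hits. The class name TestExample, the field name "body", and the indexed text are made up for illustration; package names assume the org.apache.lucene.tests.* layout of recent Lucene releases and may differ in older versions.

// Minimal sketch of a test built on this framework (assumed package layout:
// org.apache.lucene.tests.*; adjust imports for older Lucene versions).
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.tests.analysis.MockAnalyzer;
import org.apache.lucene.tests.index.RandomIndexWriter;
import org.apache.lucene.tests.search.CheckHits;
import org.apache.lucene.tests.util.LuceneTestCase;

public class TestExample extends LuceneTestCase { // hypothetical test class

  public void testTermQueryHits() throws Exception {
    Directory dir = newDirectory();               // randomized, leak-checked Directory
    RandomIndexWriter writer =                    // randomizes the indexing experience
        new RandomIndexWriter(random(), dir, new MockAnalyzer(random()));

    Document doc = new Document();
    doc.add(newTextField("body", "lucene test framework", Field.Store.NO));
    writer.addDocument(doc);

    IndexReader reader = writer.getReader();      // near-real-time reader over what was indexed
    writer.close();

    IndexSearcher searcher = newSearcher(reader); // may wrap with an asserting searcher
    TermQuery query = new TermQuery(new Term("body", "lucene"));

    // Asserts the query matches exactly the expected doc ids.
    CheckHits.checkHits(random(), query, "body", searcher, new int[] {0});

    reader.close();
    dir.close();
  }
}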