Lucene 3.5.0 API

Apache Lucene is a high-performance, full-featured text search engine library.


org.apache.lucene Top-level package.
org.apache.lucene.analysis API and code to convert text into indexable/searchable tokens.
org.apache.lucene.analysis.standard Three fast grammar-based tokenizers constructed with JFlex.
org.apache.lucene.document The logical representation of a Document for indexing and searching.
org.apache.lucene.index Code to maintain and access indices.
org.apache.lucene.messages For Native Language Support (NLS), a system of software internationalization.
org.apache.lucene.queryParser A simple query parser implemented with JavaCC.
org.apache.lucene.search Code to search indices.
org.apache.lucene.search.function Programmatic control over document scoring.
org.apache.lucene.search.payloads The payloads package provides Query mechanisms for finding and using payloads.
org.apache.lucene.search.spans The calculus of spans.
org.apache.lucene.store Binary i/o API, used for all index data.
org.apache.lucene.util Some utility classes.
org.apache.lucene.util.collections Various optimized Collections implementations.
org.apache.lucene.util.encoding Offers various encoders and decoders for integers, as well as the mechanisms to create new ones.
org.apache.lucene.util.fst Finite state transducers
org.apache.lucene.util.packed The packed package provides random access capable arrays of positive longs.
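The org.apache.lucene.analysis packages above are the entry point for turning text into tokens. As a minimal sketch (assuming Lucene 3.5 core on the classpath), an application can walk a TokenStream and collect the terms StandardAnalyzer produces:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class AnalyzeExample {

  // Run the analyzer over the text and collect each term it emits.
  public static List<String> analyze(String text) throws IOException {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_35);
    TokenStream stream = analyzer.tokenStream("fieldname", new StringReader(text));
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    List<String> terms = new ArrayList<String>();
    while (stream.incrementToken()) {
      terms.add(term.toString());
    }
    stream.end();
    stream.close();
    return terms;
  }

  public static void main(String[] args) throws IOException {
    // StandardAnalyzer lower-cases tokens and removes English stop words,
    // so "The" is dropped here.
    System.out.println(analyze("The Quick Brown Fox"));
  }
}
```

The same loop works with any Analyzer in the packages listed above; only the construction of the analyzer changes.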


contrib: Analysis
org.apache.lucene.analysis.ar Analyzer for Arabic.
org.apache.lucene.analysis.bg Analyzer for Bulgarian.
org.apache.lucene.analysis.br Analyzer for Brazilian Portuguese.
org.apache.lucene.analysis.ca Analyzer for Catalan.
org.apache.lucene.analysis.cjk Analyzer for Chinese, Japanese, and Korean, which indexes bigrams (overlapping groups of two adjacent Han characters).
org.apache.lucene.analysis.cn Analyzer for Chinese, which indexes unigrams (individual Chinese characters).
org.apache.lucene.analysis.cn.smart Analyzer for Simplified Chinese, which indexes words.
org.apache.lucene.analysis.cn.smart.hhmm SmartChineseAnalyzer Hidden Markov Model package.
org.apache.lucene.analysis.compound A filter that decomposes compound words you find in many Germanic languages into the word parts.
org.apache.lucene.analysis.compound.hyphenation The code for the compound word hyphenation is taken from the Apache FOP project.
org.apache.lucene.analysis.cz Analyzer for Czech.
org.apache.lucene.analysis.da Analyzer for Danish. Analyzer for German.
org.apache.lucene.analysis.el Analyzer for Greek.
org.apache.lucene.analysis.en Analyzer for English.
org.apache.lucene.analysis.es Analyzer for Spanish.
org.apache.lucene.analysis.eu Analyzer for Basque.
org.apache.lucene.analysis.fa Analyzer for Persian.
org.apache.lucene.analysis.fi Analyzer for Finnish.
org.apache.lucene.analysis.fr Analyzer for French. Analyzer for Galician.
org.apache.lucene.analysis.hi Analyzer for Hindi.
org.apache.lucene.analysis.hu Analyzer for Hungarian.
org.apache.lucene.analysis.hunspell Stemming TokenFilter using a Java implementation of the Hunspell stemming algorithm.
org.apache.lucene.analysis.hy Analyzer for Armenian.
org.apache.lucene.analysis.icu Analysis components based on ICU.
org.apache.lucene.analysis.icu.segmentation Tokenizer that breaks text into words with the Unicode Text Segmentation algorithm.
org.apache.lucene.analysis.id Analyzer for Indonesian.
org.apache.lucene.analysis.in Analysis components for Indian languages.
org.apache.lucene.analysis.it Analyzer for Italian.
org.apache.lucene.analysis.lv Analyzer for Latvian.
org.apache.lucene.analysis.miscellaneous Miscellaneous TokenStreams
org.apache.lucene.analysis.ngram Character n-gram tokenizers and filters.
org.apache.lucene.analysis.nl Analyzer for Dutch. Analyzer for Norwegian.
org.apache.lucene.analysis.payloads Provides various convenience classes for creating payloads on Tokens.
org.apache.lucene.analysis.pl Analyzer for Polish.
org.apache.lucene.analysis.position Filter for assigning position increments.
org.apache.lucene.analysis.pt Analyzer for Portuguese.
org.apache.lucene.analysis.query Automatically filter high-frequency stopwords.
org.apache.lucene.analysis.reverse Filter to reverse token text. Analyzer for Romanian.
org.apache.lucene.analysis.ru Analyzer for Russian.
org.apache.lucene.analysis.shingle Word n-gram filters
org.apache.lucene.analysis.sinks Implementations of the SinkTokenizer that might be useful.
org.apache.lucene.analysis.snowball TokenFilter and Analyzer implementations that use Snowball stemmers.
org.apache.lucene.analysis.stempel Stempel: Algorithmic Stemmer.
org.apache.lucene.analysis.sv Analyzer for Swedish.
org.apache.lucene.analysis.synonym Analysis components for Synonyms.
org.apache.lucene.analysis.th Analyzer for Thai.
org.apache.lucene.analysis.tr Analyzer for Turkish.
org.apache.lucene.analysis.wikipedia Tokenizer that is aware of Wikipedia syntax.


contrib: Benchmark

The benchmark contribution contains tools for benchmarking Lucene using standard, freely available corpora.

org.apache.lucene.benchmark.byTask Benchmarking Lucene By Tasks.
org.apache.lucene.benchmark.byTask.feeds Sources for benchmark inputs: documents and queries.
org.apache.lucene.benchmark.byTask.programmatic Sample performance test written programmatically - no algorithm file is needed here.
org.apache.lucene.benchmark.byTask.stats Statistics maintained when running benchmark tasks.
org.apache.lucene.benchmark.byTask.tasks Extendable benchmark tasks.
org.apache.lucene.benchmark.byTask.utils Utilities used for the benchmark, and for the reports.
org.apache.lucene.benchmark.quality Search Quality Benchmarking.
org.apache.lucene.benchmark.quality.trec Utilities for Trec related quality benchmarking, feeding from Trec Topics and QRels inputs.
org.apache.lucene.benchmark.quality.utils Miscellaneous utilities for search quality benchmarking: query parsing, submission reports.


contrib: ICU
org.apache.lucene.collation CollationKeyFilter converts each token into its binary CollationKey using the provided Collator, and then encodes the CollationKey as a String using IndexableBinaryStringTools, so that it can be stored as an index term.
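As a sketch of what this means in practice (assuming Lucene 3.5 with the collation contrib on the classpath), the term a CollationKeyAnalyzer indexes is the encoded collation key, so two strings the Collator treats as equal, e.g. accent- and case-insensitive at PRIMARY strength, yield the same index term:

```java
import java.io.IOException;
import java.io.StringReader;
import java.text.Collator;
import java.util.Locale;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.collation.CollationKeyAnalyzer;

public class CollationExample {

  // Return the single term CollationKeyAnalyzer would index for a value:
  // the value's CollationKey, encoded as a String.
  public static String collationTerm(String value) throws IOException {
    Collator collator = Collator.getInstance(Locale.US);
    collator.setStrength(Collator.PRIMARY); // ignore case and accents
    CollationKeyAnalyzer analyzer = new CollationKeyAnalyzer(collator);
    TokenStream stream = analyzer.tokenStream("field", new StringReader(value));
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    stream.incrementToken();
    String encoded = term.toString();
    stream.close();
    return encoded;
  }

  public static void main(String[] args) throws IOException {
    // At PRIMARY strength "résumé" and "Resume" collate as equal,
    // so both map to the same index term.
    System.out.println(collationTerm("résumé").equals(collationTerm("Resume")));
  }
}
```

Because the encoded keys sort in collation order, this also enables locale-sensitive range queries and sorting at the term level.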


contrib: Demo


contrib: Facet
org.apache.lucene.facet Provides faceted indexing and search capabilities.
org.apache.lucene.facet.enhancements Enhanced category features
org.apache.lucene.facet.enhancements.association Association category enhancements
org.apache.lucene.facet.enhancements.params Enhanced category features
org.apache.lucene.facet.index Indexing of document categories
org.apache.lucene.facet.index.attributes Category attributes and their properties for indexing
org.apache.lucene.facet.index.categorypolicy Policies for indexing categories
org.apache.lucene.facet.index.params Indexing-time specifications for handling facets
org.apache.lucene.facet.index.streaming Expert: attributes streaming definition for indexing facets. Faceted Search API. Aggregating Facets during Faceted Search. Parameters for Faceted Search. Results of Faceted Search. Sampling for facets accumulation.
org.apache.lucene.facet.taxonomy Taxonomy of Categories.
org.apache.lucene.facet.taxonomy.lucene Taxonomy implemented using a Lucene index.
org.apache.lucene.facet.taxonomy.writercache Improves indexing time by caching a map of CategoryPath to their Ordinal
org.apache.lucene.facet.taxonomy.writercache.cl2o Category-to-Ordinal caching implementation using optimized data structures
org.apache.lucene.facet.taxonomy.writercache.lru An LRU cache implementation for the CategoryPath to Ordinal map
org.apache.lucene.facet.util Various utilities for faceted search


contrib: Grouping This module enables search result grouping with Lucene, where hits with the same value in the specified single-valued group field are grouped together.


contrib: Highlighter The highlight package contains classes to provide "keyword in context" features typically used to highlight search terms in the text of results pages. This is another highlighter implementation.


contrib: Instantiated InstantiatedIndex, an alternative RAM store for small corpora.


contrib: Join This module supports index-time joins while searching, where joined documents are indexed as a single document block using IndexWriter.addDocuments(java.util.Collection).


contrib: Memory
org.apache.lucene.index.memory High-performance single-document main memory Apache Lucene fulltext search index.
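A hedged sketch of the MemoryIndex API (assuming Lucene 3.5 core plus the memory contrib on the classpath): index one document entirely in RAM and score a query against it, with no Directory or IndexWriter involved:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.util.Version;

public class MemoryIndexExample {

  // Score a query string against a single in-memory document.
  public static float matchScore(String content, String queryString) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_35);
    MemoryIndex index = new MemoryIndex();
    index.addField("content", content, analyzer);
    QueryParser parser = new QueryParser(Version.LUCENE_35, "content", analyzer);
    // search() returns a relevance score greater than 0 iff the query matches.
    return;
  }

  public static void main(String[] args) throws Exception {
    float score = matchScore(
        "Readings about Salmons and other select Alaska fishing Manuals",
        "+salmon~ +fish*");
    System.out.println(score > 0 ? "it's a match" : "no match found");
  }
}
```

This pattern suits high-throughput "match this one document against many queries" scenarios such as alerting or message filtering.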


contrib: Misc


contrib: Queries Regular expression Query. Document similarity query generators.


contrib: Query Parser
org.apache.lucene.queryParser.analyzing QueryParser that passes Fuzzy-, Prefix-, Range-, and WildcardQuerys through the given analyzer.
org.apache.lucene.queryParser.complexPhrase QueryParser which permits complex phrase query syntax, e.g. "(john jon jonathan~) peters*".
org.apache.lucene.queryParser.core Contains the core classes of the flexible query parser framework.
org.apache.lucene.queryParser.core.builders Contains the necessary classes to implement query builders.
org.apache.lucene.queryParser.core.config Contains the base classes used to configure the query processing
org.apache.lucene.queryParser.core.messages Contains messages usually used by query parser implementations
org.apache.lucene.queryParser.core.nodes Contains query nodes that are commonly used by query parser implementations
org.apache.lucene.queryParser.core.parser Contains the necessary interfaces to implement text parsers
org.apache.lucene.queryParser.core.processors Interfaces and implementations used by query node processors
org.apache.lucene.queryParser.core.util Utility classes to be used with the Query Parser.
org.apache.lucene.queryParser.ext Extendable QueryParser provides a simple and flexible extension mechanism by overloading query field names.
org.apache.lucene.queryParser.precedence This package contains the Precedence Query Parser Implementation
org.apache.lucene.queryParser.precedence.processors This package contains the processors used by Precedence Query Parser
org.apache.lucene.queryParser.standard Contains the implementation of the Lucene query parser using the flexible query parser framework.
org.apache.lucene.queryParser.standard.builders Standard Lucene Query Node Builders.
org.apache.lucene.queryParser.standard.config Standard Lucene Query Configuration
org.apache.lucene.queryParser.standard.nodes Standard Lucene Query Nodes
org.apache.lucene.queryParser.standard.parser Lucene Query Parser
org.apache.lucene.queryParser.standard.processors Lucene Query Node Processors
org.apache.lucene.queryParser.surround.parser This package contains the QueryParser.jj source file for the Surround parser.
org.apache.lucene.queryParser.surround.query This package contains SrndQuery and its subclasses.


contrib: Spatial
org.apache.lucene.spatial.geohash Support for Geohash encoding, decoding, and filtering.
org.apache.lucene.spatial.tier Support for filtering based upon geographic location.


contrib: SpellChecker Suggest alternate spellings for words.
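A minimal sketch of the spell-checking API (assuming Lucene 3.5 with the spellchecker contrib on the classpath, and the 3.x indexDictionary(Dictionary, IndexWriterConfig, boolean) signature): build a spelling index from a plain-text word list and ask for suggestions:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.spell.PlainTextDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.util.Version;

public class SpellExample {

  // Suggest corrections for a misspelled word from a small dictionary.
  public static String[] suggest(String misspelled) throws IOException {
    // The spelling index lives in RAM in this sketch; a real application
    // would typically use FSDirectory and a LuceneDictionary built from
    // an indexed field.
    SpellChecker spell = new SpellChecker(new RAMDirectory());
    PlainTextDictionary dict =
        new PlainTextDictionary(new StringReader("chowder\nclam\nmanhattan\n"));
    spell.indexDictionary(dict,
        new IndexWriterConfig(Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35)),
        false);
    String[] suggestions = spell.suggestSimilar(misspelled, 5);
    spell.close();
    return suggestions;
  }

  public static void main(String[] args) throws IOException {
    for (String s : suggest("chowdar")) {
      System.out.println(s);
    }
  }
}
```

Suggestions are ranked by string distance to the misspelled word, so "chowder" would come back first here.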


contrib: XML Query Parser
org.apache.lucene.xmlparser Parser that produces Lucene Query objects from XML streams.  


Apache Lucene is a high-performance, full-featured text search engine library. Here's a simple example of how to use Lucene for indexing and searching (using JUnit to check if the results are what we expect):

    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);

    // Store the index in memory:
    Directory directory = new RAMDirectory();
    // To store an index on disk, use this instead:
    //Directory directory = File("/tmp/testindex"));
    IndexWriter iwriter = new IndexWriter(directory, analyzer, true,
                                          new IndexWriter.MaxFieldLength(25000));
    Document doc = new Document();
    String text = "This is the text to be indexed.";
    doc.add(new Field("fieldname", text, Field.Store.YES,
        Field.Index.ANALYZED));
    iwriter.addDocument(doc);
    iwriter.close();

    // Now search the index:
    IndexReader ireader =, true); // read-only=true
    IndexSearcher isearcher = new IndexSearcher(ireader);
    // Parse a simple query that searches for "text":
    QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "fieldname", analyzer);
    Query query = parser.parse("text");
    ScoreDoc[] hits =, null, 1000).scoreDocs;
    assertEquals(1, hits.length);
    // Iterate through the results:
    for (int i = 0; i < hits.length; i++) {
      Document hitDoc = isearcher.doc(hits[i].doc);
      assertEquals("This is the text to be indexed.", hitDoc.get("fieldname"));
    }
    isearcher.close();
    ireader.close();
    directory.close();

The Lucene API is divided into several packages, summarized above.

To use Lucene, an application should:
  1. Create Documents by adding Fields;
  2. Create an IndexWriter and add documents to it with addDocument();
  3. Call QueryParser.parse() to build a query from a string; and
  4. Create an IndexSearcher and pass the query to its search() method.
Some simple examples of code which does this are IndexFiles.java and SearchFiles.java in the demo module. To demonstrate these, try something like:
> java -cp lucene.jar:lucene-demo.jar:lucene-analyzers-common.jar org.apache.lucene.demo.IndexFiles
  [ ... ]

> java -cp lucene.jar:lucene-demo.jar:lucene-analyzers-common.jar org.apache.lucene.demo.SearchFiles
Query: chowder
Searching for: chowder
34 total matching documents
  [ ... thirty-four documents contain the word "chowder" ... ]

Query: "clam chowder" AND Manhattan
Searching for: +"clam chowder" +manhattan
2 total matching documents
  [ ... two documents contain the phrase "clam chowder" and the word "manhattan" ... ]
    [ Note: "+" and "-" are canonical, but "AND", "OR" and "NOT" may be used. ]

Copyright © 2000-2011 Apache Software Foundation. All Rights Reserved.