A TokenFilter that applies search term folding to Unicode text, using the foldings from UTR#30: Character Foldings.
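As a rough illustration of the effect of character folding (this uses the JDK's java.text.Normalizer, not the Lucene filter itself), accent folding can be sketched by decomposing to NFD, stripping combining marks, and lowercasing; the `fold` helper below is hypothetical, and UTR#30 covers many more mappings than this sketch:

```java
import java.text.Normalizer;

public class FoldSketch {
    // Hypothetical helper: approximates accent and case folding by NFD
    // decomposition, removal of combining marks (\p{M}), and lowercasing.
    static String fold(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}+", "").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(fold("Résumé"));  // resume
        System.out.println(fold("Über"));    // uber
    }
}
```

With this kind of folding applied at both index and query time, a search for "resume" matches documents containing "Résumé".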
Normalize token text with ICU's Normalizer2. With this filter, you can normalize text in the following ways:
- NFKC normalization, case folding, and removal of Ignorables (the default)
- A standard normalization mode (NFC, NFD, NFKC, NFKD)
- Rules from a custom normalization mapping
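The standard normalization modes listed above are also available in the JDK, which can be used to sketch what normalization does to token text (this is plain java.text.Normalizer, not the Lucene filter API):

```java
import java.text.Normalizer;

public class NormalizeSketch {
    public static void main(String[] args) {
        // NFKC maps compatibility characters to their canonical equivalents,
        // e.g. the ligature "ﬁ" becomes "fi" and fullwidth "Ａ" becomes "A".
        String nfkc = Normalizer.normalize("ﬁＡ", Normalizer.Form.NFKC);
        System.out.println(nfkc);  // fiA

        // The default mode described above also case-folds; String.toLowerCase
        // approximates simple case folding for this sketch.
        System.out.println(nfkc.toLowerCase());  // fia
    }
}
```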
Package org.apache.lucene.analysis.icu Description
Analysis components based on ICU