Class CompoundWordTokenFilterBase
- All Implemented Interfaces:
Closeable, AutoCloseable, Unwrappable<TokenStream>
- Direct Known Subclasses:
DictionaryCompoundWordTokenFilter, HyphenationCompoundWordTokenFilter
-
Nested Class Summary
Nested Classes
protected class CompoundWordTokenFilterBase.CompoundToken
    Helper class to hold decompounded token information
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.State
-
Field Summary
Fields
private AttributeSource.State current
static final int DEFAULT_MAX_SUBWORD_SIZE
    The default for maximal length of subwords that get propagated to the output of this filter
static final int DEFAULT_MIN_SUBWORD_SIZE
    The default for minimal length of subwords that get propagated to the output of this filter
static final int DEFAULT_MIN_WORD_SIZE
    The default for minimal word length that gets decomposed
protected final CharArraySet dictionary
protected final int maxSubwordSize
protected final int minSubwordSize
protected final int minWordSize
protected final OffsetAttribute offsetAtt
protected final boolean onlyLongestMatch
private final PositionIncrementAttribute posIncAtt
protected final CharTermAttribute termAtt
protected final LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
Fields inherited from class org.apache.lucene.analysis.TokenFilter
input
Fields inherited from class org.apache.lucene.analysis.TokenStream
DEFAULT_TOKEN_ATTRIBUTE_FACTORY
-
Constructor Summary
Constructors
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
Method Summary
Methods
protected abstract void decompose()
    Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list.
final boolean incrementToken()
    Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
void reset()
    This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Methods inherited from class org.apache.lucene.analysis.TokenFilter
close, end, unwrap
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
-
Field Details
-
DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_WORD_SIZE
The default for minimal word length that gets decomposed
- See Also:
Constant Field Values
-
DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
The default for minimal length of subwords that get propagated to the output of this filter
- See Also:
Constant Field Values
-
DEFAULT_MAX_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
The default for maximal length of subwords that get propagated to the output of this filter
- See Also:
Constant Field Values
-
dictionary
protected final CharArraySet dictionary
-
tokens
protected final LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
-
minWordSize
protected final int minWordSize
-
minSubwordSize
protected final int minSubwordSize
-
maxSubwordSize
protected final int maxSubwordSize
-
onlyLongestMatch
protected final boolean onlyLongestMatch
-
termAtt
protected final CharTermAttribute termAtt
-
offsetAtt
protected final OffsetAttribute offsetAtt
-
posIncAtt
private final PositionIncrementAttribute posIncAtt
-
current
private AttributeSource.State current
-
-
Constructor Details
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
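Because these constructors are protected, they are invoked through a concrete subclass. The following is a minimal sketch that wires up a DictionaryCompoundWordTokenFilter (one of the direct known subclasses listed above); the sample dictionary, tokenizer choice, and size parameters are illustrative only, and package locations may differ slightly across Lucene versions.

import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;

public class CompoundFilterSetup {
  public static TokenStream buildChain() {
    // Dictionary of known word parts (true = ignore case).
    CharArraySet dictionary = new CharArraySet(Arrays.asList("soft", "ball", "team"), true);

    // Tokenize the raw text, then decompose compounds against the dictionary.
    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("softballteam"));

    // minWordSize = 5, minSubwordSize = 2, maxSubwordSize = 15, onlyLongestMatch = false
    return new DictionaryCompoundWordTokenFilter(tokenizer, dictionary, 5, 2, 15, false);
  }
}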
-
-
Method Details
-
incrementToken
public final boolean incrementToken() throws IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Specified by:
incrementToken in class TokenStream
- Returns:
- false for end of stream; true otherwise
- Throws:
IOException
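For illustration, a minimal consumer loop following this contract might look like the sketch below. The ConsumeTokens class name is hypothetical, and the stream can be any concrete subclass of this filter; attribute references are obtained once, before the loop, as recommended above.

import java.io.IOException;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class ConsumeTokens {
  public static void consume(TokenStream stream) throws IOException {
    // Retrieve attribute references once, outside the loop.
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);

    stream.reset();                    // must be called before the first incrementToken()
    while (stream.incrementToken()) {  // false signals end of stream
      System.out.println(term + " [" + offset.startOffset() + "-" + offset.endOffset() + "]");
    }
    stream.end();
    stream.close();
  }
}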
-
decompose
protected abstract void decompose()
Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list. The original token may not be placed in the list, as it is automatically passed through this filter.
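As a sketch of what an implementation might do, the hypothetical subclass below emits every dictionary entry found inside the current term as a CompoundToken. It is illustrative only; real implementations such as DictionaryCompoundWordTokenFilter apply further rules (e.g., onlyLongestMatch).

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;

public final class SimpleDictionaryDecompounder extends CompoundWordTokenFilterBase {

  public SimpleDictionaryDecompounder(TokenStream input, CharArraySet dictionary) {
    super(input, dictionary);
  }

  @Override
  protected void decompose() {
    final int len = termAtt.length();
    // Scan every start position and subword length within the configured bounds.
    for (int start = 0; start <= len - minSubwordSize; start++) {
      for (int size = minSubwordSize; size <= maxSubwordSize && start + size <= len; size++) {
        if (dictionary.contains(termAtt.buffer(), start, size)) {
          // CompoundToken copies the matched slice of termAtt; the original
          // token is passed through by the base class automatically.
          tokens.add(new CompoundToken(start, size));
        }
      }
    }
  }
}

-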
reset
public void reset() throws IOException
Description copied from class: TokenFilter
This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(); otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).
NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.
- Overrides:
reset in class TokenFilter
- Throws:
IOException
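As an illustration of this contract, the hypothetical filter below delegates to super.reset() first and then clears its own buffered state; the class and field names are not part of Lucene.

import java.io.IOException;
import java.util.ArrayDeque;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public final class BufferingFilter extends TokenFilter {
  private final ArrayDeque<String> pending = new ArrayDeque<>();

  public BufferingFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    return input.incrementToken(); // pass-through; real buffering logic omitted
  }

  @Override
  public void reset() throws IOException {
    super.reset();    // resets the chained input stream first
    pending.clear();  // then clear this filter's own per-stream state
  }
}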
-