Uses of Class org.apache.lucene.analysis.TokenStream

Uses in package org.apache.lucene.analysis.ru

Classes derived from org.apache.lucene.analysis.TokenStream

class RussianLetterTokenizer
A RussianLetterTokenizer is a tokenizer that extends LetterTokenizer by additionally looking up letters in a given "russian charset".
class RussianLowerCaseFilter
Normalizes token text to lower case, analyzing given ("russian") charset.
class RussianStemFilter
A filter that stems Russian words.

Constructors with parameter type org.apache.lucene.analysis.TokenStream

Methods with return type org.apache.lucene.analysis.TokenStream

TokenStream
RussianAnalyzer.tokenStream(String fieldName, Reader reader)
Creates a TokenStream which tokenizes all the text in the provided Reader.
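
A minimal usage sketch in Java, assuming the pre-2.9 Token-based API in which TokenStream.next() returns the next Token or null at end of stream; the no-argument RussianAnalyzer constructor, the field name "body", the demo class name and the sample text are illustrative assumptions:

    import java.io.Reader;
    import java.io.StringReader;

    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.ru.RussianAnalyzer;

    public class RussianTokenStreamDemo {
        public static void main(String[] args) throws Exception {
            // Pass an explicit charset (e.g. from RussianCharsets) instead if
            // your Lucene version requires one.
            RussianAnalyzer analyzer = new RussianAnalyzer();
            Reader reader = new StringReader("русский текст для примера");

            // tokenStream(...) builds the full chain: tokenizer, lower-case filter, stemmer.
            TokenStream stream = analyzer.tokenStream("body", reader);
            for (Token token = stream.next(); token != null; token = stream.next()) {
                System.out.println(token.termText());
            }
            stream.close();
        }
    }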

Uses in package org.apache.lucene.analysis.de

Classes derived from org.apache.lucene.analysis.TokenStream

class GermanStemFilter
A filter that stems German words.

Constructors with parameter type org.apache.lucene.analysis.TokenStream

GermanStemFilter.GermanStemFilter(TokenStream in)
Construct a token stream filtering the given input.
GermanStemFilter.GermanStemFilter(TokenStream in, Hashtable exclusiontable)
Builds a GermanStemFilter that uses an exclusiontable.
GermanStemFilter.GermanStemFilter(TokenStream in, Set exclusionSet)
Builds a GermanStemFilter that uses an exclusiontable.
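
A sketch of assembling a German stemming chain by hand with the Hashtable-based constructor listed above; the StandardTokenizer front end, the sample text, the exclusion entry and the key/value layout of the table are assumptions, so adjust them to your Lucene version:

    import java.io.StringReader;
    import java.util.Hashtable;

    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.de.GermanStemFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    public class GermanStemDemo {
        public static void main(String[] args) throws Exception {
            // Terms in the exclusion table are protected from stemming; storing
            // the term as both key and value is an assumption about how the
            // table is consulted.
            Hashtable exclusions = new Hashtable();
            exclusions.put("lucene", "lucene");

            TokenStream chain = new StandardTokenizer(new StringReader("Häuser und Gärten"));
            chain = new LowerCaseFilter(chain);
            chain = new GermanStemFilter(chain, exclusions);

            for (Token t = chain.next(); t != null; t = chain.next()) {
                System.out.println(t.termText());
            }
            chain.close();
        }
    }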

Methods with return type org.apache.lucene.analysis.TokenStream

TokenStream
GermanAnalyzer.tokenStream(String fieldName, Reader reader)
Creates a TokenStream which tokenizes all the text in the provided Reader.

Uses in package org.apache.lucene.analysis.standard

Classes derived from org.apache.lucene.analysis.TokenStream

class StandardFilter
Normalizes tokens extracted with StandardTokenizer.
class StandardTokenizer
A grammar-based tokenizer constructed with JavaCC.

Constructors with parameter type org.apache.lucene.analysis.TokenStream

StandardFilter.StandardFilter(TokenStream in)
Construct filtering in.

Methods with return type org.apache.lucene.analysis.TokenStream

TokenStream
StandardAnalyzer.tokenStream(String fieldName, Reader reader)
Constructs a StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.
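
The same chain can be assembled by hand; a sketch assuming the String[] StopFilter constructor and the StopAnalyzer.ENGLISH_STOP_WORDS default list (the demo class name and sample text are illustrative):

    import java.io.Reader;
    import java.io.StringReader;

    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.StopAnalyzer;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    public class StandardChainDemo {
        public static void main(String[] args) throws Exception {
            Reader reader = new StringReader("The Quick Brown Fox");

            // Hand-built equivalent of StandardAnalyzer.tokenStream(...):
            // grammar-based tokenizer, then StandardFilter, LowerCaseFilter
            // and StopFilter.
            TokenStream result = new StandardTokenizer(reader);
            result = new StandardFilter(result);
            result = new LowerCaseFilter(result);
            result = new StopFilter(result, StopAnalyzer.ENGLISH_STOP_WORDS);
        }
    }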

Uses in package org.apache.lucene.analysis

Classes derived from org.apache.lucene.analysis.TokenStream

class CharTokenizer
An abstract base class for simple, character-oriented tokenizers.
class LetterTokenizer
A LetterTokenizer is a tokenizer that divides text at non-letters.
class LowerCaseFilter
Normalizes token text to lower case.
class LowerCaseTokenizer
LowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together.
class PorterStemFilter
Transforms the token stream as per the Porter stemming algorithm.
class StopFilter
Removes stop words from a token stream.
class TokenFilter
A TokenFilter is a TokenStream whose input is another token stream.
class Tokenizer
A Tokenizer is a TokenStream whose input is a Reader.
class WhitespaceTokenizer
A WhitespaceTokenizer is a tokenizer that divides text at whitespace.

Constructors with parameter type org.apache.lucene.analysis.TokenStream

LowerCaseFilter.LowerCaseFilter(TokenStream in)
Construct a token stream filtering the given input.
PorterStemFilter.PorterStemFilter(TokenStream in)
Construct a token stream filtering the given input.
StopFilter.StopFilter(TokenStream in, Hashtable stopTable)
Constructs a filter which removes words from the input TokenStream that are named in the Hashtable.
StopFilter.StopFilter(TokenStream in, Set stopWords)
Constructs a filter which removes words from the input TokenStream that are named in the Set.
StopFilter.StopFilter(TokenStream in, String[] stopWords)
Constructs a filter which removes words from the input TokenStream that are named in the array of words.
TokenFilter.TokenFilter(TokenStream input)
Construct a token stream filtering the given input.
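
As an illustration of the Set-based constructor, a sketch that mirrors what StopAnalyzer does (LowerCaseTokenizer filtered by StopFilter); the stop word list, sample text and class name are illustrative:

    import java.io.StringReader;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.lucene.analysis.LowerCaseTokenizer;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;

    public class StopFilterDemo {
        public static void main(String[] args) throws Exception {
            // The Set variant matches terms exactly, so the entries are kept
            // lower case to line up with the LowerCaseTokenizer output.
            Set stopWords = new HashSet(Arrays.asList(new String[] { "the", "and", "of" }));

            TokenStream stream = new LowerCaseTokenizer(new StringReader("The Art of the Fugue"));
            stream = new StopFilter(stream, stopWords);
        }
    }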

Fields of type org.apache.lucene.analysis.TokenStream

TokenStream
TokenFilter.input
The source of tokens for this filter.
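
The input field is what a TokenFilter subclass reads from; a minimal hypothetical filter (the class name and length rule are assumptions), again written against the pre-2.9 API where next() returns a Token or null:

    import java.io.IOException;

    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;

    // Drops tokens shorter than a given number of characters.
    public class MinLengthFilter extends TokenFilter {
        private final int minLength;

        public MinLengthFilter(TokenStream in, int minLength) {
            super(in);
            this.minLength = minLength;
        }

        public Token next() throws IOException {
            // Pull tokens from the wrapped stream until one is long enough.
            for (Token token = input.next(); token != null; token = input.next()) {
                if (token.termText().length() >= minLength) {
                    return token;
                }
            }
            return null; // end of stream
        }
    }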

Methods with return type org.apache.lucene.analysis.TokenStream

TokenStream
Analyzer.tokenStream(Reader reader)
Creates a TokenStream which tokenizes all the text in the provided Reader.
TokenStream
Analyzer.tokenStream(String fieldName, Reader reader)
Creates a TokenStream which tokenizes all the text in the provided Reader.
TokenStream
PerFieldAnalyzerWrapper.tokenStream(String fieldName, Reader reader)
TokenStream
SimpleAnalyzer.tokenStream(String fieldName, Reader reader)
Creates a TokenStream which tokenizes all the text in the provided Reader.
TokenStream
StopAnalyzer.tokenStream(String fieldName, Reader reader)
Filters LowerCaseTokenizer with StopFilter.
TokenStream
WhitespaceAnalyzer.tokenStream(String fieldName, Reader reader)
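
Custom analyzers plug in through the same method; a sketch of a hypothetical Analyzer that overrides tokenStream(String fieldName, Reader reader) with a whitespace-plus-lower-case chain:

    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceTokenizer;

    // Tokenizes at whitespace and lower-cases each token; the field name is ignored.
    public class LowercaseWhitespaceAnalyzer extends Analyzer {
        public TokenStream tokenStream(String fieldName, Reader reader) {
            return new LowerCaseFilter(new WhitespaceTokenizer(reader));
        }
    }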

Copyright © 2000-2007 Apache Software Foundation. All Rights Reserved.