Wraps another filter's result and caches it.
This interface describes a character stream that maintains line and
column number positions of the characters.
This interface describes a character stream that maintains line and
column number positions of the characters.
An abstract base class for simple, character-oriented tokenizers.
Removes all entries from the PriorityQueue.
Sets the value of bit to zero.
Returns a clone of this query.
Returns a clone of this stream.
Returns a clone of this query.
Closes the enumeration to further activity, freeing resources.
Closes the store to future operations.
Closes files associated with this index.
Note that the underlying IndexReader is not closed if the IndexSearcher was
constructed with IndexSearcher(IndexReader r).
Flushes all changes to an index and closes all associated files.
Closes the stream to further operations.
Describe close method here.
Closes this stream to further operations.
Closes the store to future operations.
Closes this stream to further operations.
Frees resources associated with this Searcher.
Frees resources associated with this Searcher.
Frees associated resources.
Closes the enumeration to further activity, freeing resources.
Close the input TokenStream.
By default, closes the input Reader.
Releases resources associated with this stream.
Called once for every non-zero scoring document, with the document number
and its score.
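As an illustration of that contract, here is a minimal collector sketch in
1.4-era Java; the class name and counting logic are hypothetical, not part of
the API:

    import org.apache.lucene.search.HitCollector;

    // Hypothetical collector: counts matching documents without building
    // a Hits object. collect() is called once for every non-zero scoring
    // document, with its document number and score.
    public class CountingCollector extends HitCollector {
      private int count = 0;

      public void collect(int doc, float score) {
        count++;
      }

      public int getCount() {
        return count;
      }
    }

An instance would be passed to the low-level search method that "scores all
documents and passes them to a collector" (see that entry below).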
Expert: called when re-writing queries under MultiSearcher.
Expert: called when re-writing queries under MultiSearcher.
Expert: called when re-writing queries under MultiSearcher.
Expert: called when re-writing queries under MultiSearcher.
Commits changes resulting from delete, undeleteAll, or setNorm operations.
Compares two ScoreDoc objects and returns a result indicating their
sort order.
Compares two terms, returning an integer which is less than zero iff this
term belongs before the argument, equal to zero iff this term is equal to the
argument, and greater than zero iff this term belongs after the argument.
Implemented as overlap / maxOverlap.
Computes a score factor based on the fraction of all query terms that a
document contains.
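For illustration, the overlap / maxOverlap default above could be expressed
as a Similarity override; the subclass name is hypothetical and the body
mirrors the documented formula:

    import org.apache.lucene.search.DefaultSimilarity;

    // Hypothetical subclass showing the documented coord() default:
    // the fraction of the query's terms that the document contains.
    public class CoordSimilarity extends DefaultSimilarity {
      public float coord(int overlap, int maxOverlap) {
        return overlap / (float) maxOverlap;
      }
    }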
Returns the total number of one bits in this vector.
Creates a new, empty file in the directory with the given name.
Creates a new, empty file in the directory with the given name.
Creates a new, empty file in the directory with the given name.
Expert: Constructs an appropriate Weight implementation for this query.
Returns a Weight that applies the filter to the enclosed query's Weight.
Expert: Constructs an appropriate Weight implementation for this query.
Expert: Constructs an appropriate Weight implementation for this query.
Expert: Constructs an appropriate Weight implementation for this query.
Expert: Constructs an appropriate Weight implementation for this query.
Expert: Constructs an appropriate Weight implementation for this query.
This is the last token that has been consumed successfully.
This is the last token that has been consumed successfully.
Sort using a custom Comparator.
Provides support for converting dates to strings and vice-versa.
A Filter that restricts search results to a range of time.
Constructs a filter for field f matching dates between from and to, inclusively.
Constructs a filter for field f matching times between from and to, inclusively.
Converts a Date to a string suitable for indexing.
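A minimal round-trip sketch using the DateField conversions listed in this
index (class name hypothetical):

    import java.util.Date;
    import org.apache.lucene.document.DateField;

    public class DateFieldDemo {
      public static void main(String[] args) {
        // Encode a Date as an index-friendly, lexicographically ordered
        // string, then decode it back (see stringToDate below).
        String encoded = DateField.dateToString(new Date());
        Date decoded = DateField.stringToDate(encoded);
        System.out.println(encoded + " -> " + decoded);
      }
    }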
Decodes a normalization factor stored in an index.
Expert: The cache used internally by sorting and range query classes.
Default value is Integer.MAX_VALUE.
Expert: Default scoring implementation.
Deletes the document numbered docNum.
Deletes all documents containing term.
Removes an existing file in the directory.
Removes an existing file in the directory.
Removes an existing file in the directory.
Equality measure on the term.
A Directory is a flat list of files.
Returns the directory this index resides in.
Code to execute with exclusive access.
Expert: A hit document's number.
DOC - static field in class org.apache.lucene.search.SortField. Sort by document number (index order).
Describe doc method here.
doc() - method in class org.apache.lucene.search.Scorer. Returns the current document number.
doc() - method in class org.apache.lucene.search.spans.Spans. Returns the document number of the current match.
Returns the current document number.
Returns the stored fields of the nth document in this set.
Expert: Returns the stored fields of document i.
Expert: Returns the stored fields of document i.
Returns the number of documents currently in this index.
Returns the docFreq of the current Term in the enumeration.
Returns the docFreq of the current Term in the enumeration.
Returns the number of documents containing the term t.
TODO: parallelize this one too
Expert: Returns the number of documents containing term.
Expert: Returns the number of documents containing term.
Documents are the unit of indexing and search.
Constructs a new document with no fields.
Returns the stored fields of the nth Document in this index.
Implements deletion of the document numbered docNum.
The lexer calls this function to indicate that it is done with the stream
and hence implementations can free any resources held by this class.
The lexer calls this function to indicate that it is done with the stream
and hence implementations can free any resources held by this class.
The lexer calls this function to indicate that it is done with the stream
and hence implementations can free any resources held by this class.
Implements setNorm in subclass.
Implements actual undeleteAll() in subclass.
Encodes a normalization factor for storage in an index.
end() - method in class org.apache.lucene.search.spans.Spans. Returns the end position of the current match.
beginLine and beginColumn describe the position of the first character
of this token; endLine and endColumn describe the position of the
last character of this token.
beginLine and beginColumn describe the position of the first character
of this token; endLine and endColumn describe the position of the
last character of this token.
Indicates that the end of the enumeration has been reached.
endLine - field in class org.apache.lucene.analysis.standard.Token. beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endLine - field in class org.apache.lucene.queryParser.Token. beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
Returns this Token's ending offset, one greater than the position of the
last character corresponding to this token in the source text.
An array containing some common English words that are not usually useful
for searching.
The end of line string for this machine.
The end of line string for this machine.
Returns true iff o is equal to this.
Returns true iff o is equal to this.
Returns true iff o is equal to this.
Returns true iff o is equal to this.
Compares two terms, returning true iff they have the same
field and text.
Returns true iff o is equal to this.
Returns a String where those characters that QueryParser expects to be
escaped are escaped, i.e. preceded by a backslash.
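A usage sketch for the static escape method this entry describes (class name
and input are illustrative):

    import org.apache.lucene.queryParser.QueryParser;

    public class EscapeDemo {
      public static void main(String[] args) {
        // Special query characters such as +, -, (, ), and * are
        // backslash-escaped so raw user input can be embedded safely.
        String safe = QueryParser.escape("(1+1):2");
        System.out.println(safe);
      }
    }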
Each entry in this array is an array of integers.
Each entry in this array is an array of integers.
Returns an explanation of the score for doc.
An explanation of the score computation for the named document.
Returns an Explanation that describes how doc scored against query.
Returns an Explanation that describes how doc scored against query.
Expert: Describes the score computation for document and query.
An efficient implementation of JavaCC's CharStream interface.
An efficient implementation of JavaCC's CharStream interface.
Constructs from a Reader.
Constructs from a Reader.
A field is a section of a Document.
Returns the field of this term, an interned string.
Create a field by specifying all parameters except for storeTermVector, which is set to false.
Represents sorting by document number (index order).
Represents sorting by document score (relevancy).
Expert: Maintains caches of term values.
Expert: A ScoreDoc which also contains information about
how to sort the referenced document.
Expert: Creates one of these objects with empty sort information.
Expert: Creates one of these objects with the given sort information.
Expert: The values which are used to sort the referenced document.
The fields which were used to sort results by.
Returns an Enumeration of all the fields in a document.
Returns true iff a file with the given name exists.
Returns true iff a file with the given name exists.
Returns true iff the named file exists in this directory.
Returns the length of a file in the directory.
Returns the length in bytes of a file in the directory.
Returns the length in bytes of a file in the directory.
Returns the time the named file was last modified.
Returns the time the named file was last modified.
Returns the time the named file was last modified.
Returns the time the named file was last modified.
Abstract base class providing a mechanism to restrict searches to a subset
of an index.
A query that applies a filter to the results of another query.
Constructs a new query which applies a filter to the results of the original query.
Abstract class for enumerating a subset of all terms.
A FilterIndexReader contains another IndexReader, which it uses as its basic
source of data, possibly transforming the data along the way or providing
additional functionality.
Construct a FilterIndexReader based on the specified base reader.
Base class for filtering TermDocs implementations.
Base class for filtering TermEnum implementations.
Release the write lock, if needed.
Release the write lock, if needed.
Sort using term values as encoded Floats.
Forces any buffered output to be written.
Expert: implements buffer write.
Expert: implements buffer write.
Describe freq method here.
Returns the frequency of the term within the current document.
Straightforward implementation of Directory as a directory of files.
Implements the fuzzy search query.
Create a new FuzzyQuery that will match terms with a similarity of at least minimumSimilarity to term.
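A construction sketch for the constructor described above; the class, field
name, and term are illustrative:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;

    public class FuzzyDemo {
      public static void main(String[] args) {
        // Match terms in "contents" at least 0.6 similar to "lucene".
        FuzzyQuery query =
            new FuzzyQuery(new Term("contents", "lucene"), 0.6f);
        System.out.println(query.toString("contents"));
      }
    }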
Subclass of FilteredTermEnum for enumerating all terms that are similar to the specified filter term.
Empty prefix and minSimilarity of 0.5f are used.
This is the standard FuzzyTermEnum with an empty prefix.
Constructor for enumeration of all terms from the specified reader which
share a prefix of length prefixLength with term and which have a fuzzy
similarity > minSimilarity.
Analyzer for the German language.
Builds an analyzer with the given stop words.
Builds an analyzer with the given stop words.
Builds an analyzer with the given stop words.
A filter that stems German words.
Construct a token stream filtering the given input.
Builds a GermanStemFilter that uses an exclusion table.
Builds a GermanStemFilter that uses an exclusion table.
A stemmer for German words.
Returns true if bit is one and false if it is zero.
Returns the string value of the field with the given name if any exist in
this document, or null.
Returns the analyzer used by this index.
Checks the internal cache for an appropriate entry, and if none is found,
reads field to see if it contains integers, floats or strings, and then calls
one of the other methods in this class to get the values.
Returns the column number of the first character for the current token (being matched after the last call to BeginToken).
Returns the column number of the first character for the current token (being matched after the last call to BeginToken).
Returns the column number of the first character for the current token (being matched after the last call to BeginToken).
Returns the line number of the first character for the current token (being matched after the last call to BeginToken).
Returns the line number of the first character for the current token (being matched after the last call to BeginToken).
Returns the line number of the first character for the current token (being matched after the last call to BeginToken).
Factory method for generating query, given a set of clauses.
Returns the boost factor for hits on any field of this document.
Returns the boost factor for hits on any field of this document.
Gets the boost for this clause.
Returns the set of clauses in this query.
Return the clauses whose spans are matched.
Return the clauses whose spans are matched.
Returns the column position of the character last read.
Returns the column position of the character last read.
Returns the column position of the character last read.
Returns an object which, when sorted according to natural order,
will order the Term values in the correct order.
Reads version number from segments files.
Reads version number from segments files.
Reads version number from segments files.
Checks the internal cache for an appropriate entry, and if none is found,
reads the terms out of field and calls the given SortComparator to get the
sort values.
Return the default Similarity implementation used by indexing and search
code.
A description of this explanation node.
The sub-nodes of this explanation node.
Returns the directory instance for the named location.
Returns the directory instance for the named location.
Return the maximum end position permitted in a match.
Returns the column number of the last character for the current token (being matched after the last call to BeginToken).
Returns the column number of the last character for the current token (being matched after the last call to BeginToken).
Returns the column number of the last character for the current token (being matched after the last call to BeginToken).
Returns the line number of the last character for the current token (being matched after the last call to BeginToken).
Returns the line number of the last character for the current token (being matched after the last call to BeginToken).
Returns the line number of the last character for the current token (being matched after the last call to BeginToken).
Construct the enumeration to be used, expanding the pattern term.
Construct the enumeration to be used, expanding the pattern term.
Construct the enumeration to be used, expanding the pattern term.
Return the SpanQuery whose matches must not overlap those returned.
Returns the field name for this query.
Returns the name of the field.
Returns the name of the field matched by this query.
Returns the name of the field matched by this query.
Returns the name of the field matched by this query.
Returns the name of the field matched by this query.
Returns the name of the field matched by this query.
Returns a field with the given name if any exist in this document, or
null.
Returns a list of all unique field names that exist in the index pointed
to by this IndexReader.
Returns a list of all unique field names that exist in the index pointed
to by this IndexReader.
Note that the analyzer parameter is ignored.
Returns an array of
Field
s with the given name.
Returns the current position in this file, where the next read will
occur.
Returns the current position in this file, where the next write will
occur.
Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in field as floats and returns an array of size
reader.maxDoc() of the value each document has in the given field.
Get the default minimal similarity for fuzzy queries.
Returns a string made up of characters from the marked token beginning
to the current buffer position.
Returns a string made up of characters from the marked token beginning
to the current buffer position.
Returns a string made up of characters from the marked token beginning
to the current buffer position.
Return the SpanQuery whose matches are filtered.
Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in field as integers and returns an array of size
reader.maxDoc() of the value each document has in the given field.
Returns the line number of the character last read.
Returns the line number of the character last read.
Returns the line number of the character last read.
Returns current locale, allowing access by subclasses.
Returns the Locale by which term values are interpreted.
Returns the lower term of this range query.
Return the SpanQuery whose matches are filtered.
Return the maximum number of clauses permitted, 1024 by default.
This method has the standard behavior when this object has been
created using the standard constructors.
This method has the standard behavior when this object has been
created using the standard constructors.
You can also modify the body of this method to customize your error messages.
You can also modify the body of this method to customize your error messages.
Returns the minimum similarity that is required for this query to match.
Gets implicit operator setting, which will be either DEFAULT_OPERATOR_AND
or DEFAULT_OPERATOR_OR.
Gets the default slop for phrases.
Returns the position increment of this Token.
Returns the relative positions of terms in this phrase.
Returns the relative positions of terms in this phrase.
Returns the prefix of this query.
Returns the prefix length, i.e. the number of leading characters that a matching term must share with the query term.
The query that this concerns.
Note that the analyzer parameter is ignored.
Returns whether the sort should be reversed.
Expert: Return the Similarity implementation used by this IndexWriter.
Returns the Similarity implementation used by this scorer.
Expert: Return the Similarity implementation used by this Searcher.
Expert: Returns the Similarity implementation to be used for this query.
Sets the phrase slop for this query.
Return the maximum number of intervening unmatched positions permitted.
Expert: Returns the matches for this query in an index.
Expert: Returns the matches for this query in an index.
Expert: Returns the matches for this query in an index.
Expert: Returns the matches for this query in an index.
Expert: Returns the matches for this query in an index.
Checks the internal cache for an appropriate entry, and if none is found,
reads the term values in field and returns an array of them in natural order,
along with an array telling which element in the term array each document uses.
Checks the internal cache for an appropriate entry, and if none is found,
reads the term values in field and returns an array of size reader.maxDoc()
containing the value each document has in the given field.
Returns an array of characters that make up the suffix of length 'len' for
the currently matched token.
Returns an array of characters that make up the suffix of length 'len' for
the currently matched token.
Returns an array of characters that make up the suffix of length 'len' for
the currently matched token.
Returns the pattern term.
Return the term whose spans are matched.
Returns the term of this query.
Array of term frequencies.
Return a term frequency vector for the specified document and field.
Return an array of term frequency vectors for the specified document.
Return an array of term frequency vectors for the specified document.
Returns an array of positions in which the term is found.
Returns the set of terms in this phrase.
Returns a collection of all terms matched by this query.
Returns a collection of all terms matched by this query.
Returns a collection of all terms matched by this query.
Returns a collection of all terms matched by this query.
Returns a collection of all terms matched by this query.
Returns the type of contents in the field.
Returns the upper term of this range query.
Setting to turn on usage of a compound file.
The value assigned to this explanation node.
The weight for this query.
Returns an array of values of the field specified as the method parameter.
Factory method for generating a query.
Loads a text file and adds every line as an entry to a HashSet (omitting
leading and trailing whitespace).
id(int) - method in class org.apache.lucene.search.Hits. Returns the id for the nth document in this set.
Computes a score factor for a phrase.
Implemented as log(numDocs/(docFreq+1)) + 1.
Computes a score factor based on a term's document frequency (the number
of documents which contain the term).
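For illustration, the log(numDocs/(docFreq+1)) + 1 default above as a
Similarity override (subclass name hypothetical):

    import org.apache.lucene.search.DefaultSimilarity;

    // Hypothetical subclass showing the documented idf() default:
    // rarer terms (smaller docFreq) receive a larger score factor.
    public class IdfSimilarity extends DefaultSimilarity {
      public float idf(int docFreq, int numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
      }
    }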
Computes a score factor for a simple term.
image - field in class org.apache.lucene.analysis.standard.Token. The string image of the token.
image - field in class org.apache.lucene.queryParser.Token. The string image of the token.
Just like indexOf(int) but searches for a number of terms at the same time.
Returns true if an index exists at the specified directory.
Returns true if an index exists at the specified directory.
Returns true if an index exists at the specified directory.
Return an index in the term numbers array returned from getTerms at which the specified term appears.
Special comparator for sorting hits according to index order (document number).
Represents sorting by index order.
IndexReader is an abstract class, providing an interface for accessing an
index.
Constructor used if IndexReader is not owner of its directory.
Implements search over a single IndexReader.
Creates a searcher searching the provided index.
Creates a searcher searching the index in the provided directory.
Creates a searcher searching the index in the named directory.
An IndexWriter creates and maintains an index.
Constructs an IndexWriter for the index in path.
Constructs an IndexWriter for the index in d.
Constructs an IndexWriter for the index in path.
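Putting those constructors to use, a minimal indexing sketch; the path, field
name, and text are illustrative, and Field.Text is the 1.4-era factory for
tokenized, stored fields:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class IndexDemo {
      public static void main(String[] args) throws Exception {
        // true = create a new index, overwriting any existing one.
        IndexWriter writer =
            new IndexWriter("/tmp/demo-index", new StandardAnalyzer(), true);
        Document doc = new Document();
        doc.add(Field.Text("contents", "hello lucene"));
        writer.addDocument(doc);
        writer.close();
      }
    }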
If non-null, information about merges will be printed to this.
Subclass constructors must call this.
The source of tokens for this filter.
The text source for this Tokenizer.
Abstract base class for input from a file in a Directory.
Adds element to the PriorityQueue in log(size) time if either
the PriorityQueue is not full, or not lessThan(element, top()).
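A sketch of a concrete queue built on that contract; the class name and
Integer ordering are hypothetical, while lessThan() is the abstract hook the
ordering entry below describes:

    import org.apache.lucene.util.PriorityQueue;

    // Hypothetical bounded queue of Integers; the least element sits at
    // top() and is displaced first once the queue is full.
    public class TopIntegerQueue extends PriorityQueue {
      public TopIntegerQueue(int maxSize) {
        initialize(maxSize);
      }

      protected boolean lessThan(Object a, Object b) {
        return ((Integer) a).intValue() < ((Integer) b).intValue();
      }
    }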
INT - static field in class org.apache.lucene.search.SortField. Sort using term values as encoded Integers.
Returns true if document n has been deleted.
Returns true if the range query is inclusive.
True iff the value of the field is to be indexed, so that it may be
searched on.
Return true if matches are required to be in-order.
Returns true if the resource is currently locked.
Returns true iff the index in the named directory is currently locked.
Returns true iff the index in the named directory is currently locked.
True iff the value of the field is to be stored in the index for return
with search hits.
Returns true iff a character should be included in a token.
Collects only characters which satisfy Character.isLetter(char).
Collects only characters which satisfy Character.isLetter(char).
Collects only characters which do not satisfy Character.isWhitespace(char).
True iff the value of the field should be tokenized as text prior to
indexing.
Returns the time the index in the named directory was last modified.
Returns the time the index in the named directory was last modified.
Returns the time the index in the named directory was last modified.
Returns the total number of hits available in this set.
The number of bytes in the file.
The number of bytes in the file.
The number of bytes in the file.
Implemented as 1/sqrt(numTerms).
Computes the normalization value for a field given the total number of
terms contained in a field.
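For illustration, the 1/sqrt(numTerms) default above as a Similarity override
(subclass name hypothetical):

    import org.apache.lucene.search.DefaultSimilarity;

    // Hypothetical subclass showing the documented lengthNorm() default:
    // matches in shorter fields are weighted more heavily.
    public class LengthNormSimilarity extends DefaultSimilarity {
      public float lengthNorm(String fieldName, int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
      }
    }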
Determines the ordering of objects in this priority queue.
A LetterTokenizer is a tokenizer that divides text at non-letters.
Construct a new LetterTokenizer.
Returns a detailed message for the Error when it is thrown by the
token manager to indicate a lexical error.
Returns a detailed message for the Error when it is thrown by the
token manager to indicate a lexical error.
True iff running on Linux.
Returns an array of strings, one for each file in the directory.
Returns an array of strings, one for each file in the directory.
Returns an array of strings, one for each file in the directory.
Lock - class org.apache.lucene.store.Lock. An interprocess mutex lock.
Directory specified by the org.apache.lucene.lockdir or java.io.tmpdir system property.
All the term values, in natural order.
Normalizes token text to lower case.
Construct a token stream filtering the given input.
LowerCaseTokenizer performs the function of LetterTokenizer
and LowerCaseFilter together.
Construct a new LowerCaseTokenizer.
name() - method in class org.apache.lucene.document.Field. The name of the field (e.g., "date", "subject", "title", or "body") as an interned string.
Creates a comparator for the field in the given index.
Creates a comparator for the field in the given index.
Returns a new Token object, by default.
Returns a new Token object, by default.
next - field in class org.apache.lucene.analysis.standard.Token. A reference to the next regular (non-special) token from the input stream.
next - field in class org.apache.lucene.queryParser.Token. A reference to the next regular (non-special) token from the input stream.
Returns the next token in the stream, or null at EOS.
Increments the enumeration to the next element.
Describe next method here.
Returns the next input Token, after being stemmed.
Returns the next token in the stream, or null at EOS.
Advance to the next document matching the query.
next() - method in class org.apache.lucene.search.spans.Spans. Move to the next match, returning true iff any such exists.
Returns the next token in the stream, or null at EOS.
Returns the next token in the stream, or null at EOS.
Returns the next input Token whose termText() is not a stop word.
Moves to the next pair in the enumeration.
Increments the enumeration to the next element.
Returns the next token in the stream, or null at EOS.
Describe nextPosition method here.
Returns next position in the current document.
Called on each token character to normalize it before it is added to the
token.
Collects only characters which satisfy Character.isLetter(char).
Assigns the query normalization factor to this.
Returns the byte-encoded normalization factor for the named field of
every document.
Reads the byte-encoded normalization factor for the named field of every
document.
Returns the number of documents in this index.
Creates a new RAMDirectory instance from the FSDirectory.
Creates a new RAMDirectory instance from a different Directory implementation.
Creates a new RAMDirectory instance from the FSDirectory.
Construct an empty output buffer.
A Query that matches documents within an exclusive range.
Constructs a query selecting all terms greater than lowerTerm but less than upperTerm.
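A construction sketch for the exclusive range described above; the class,
field, and terms are illustrative, and the boolean flag corresponds to the
inclusive-range entry elsewhere in this index:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.RangeQuery;

    public class RangeDemo {
      public static void main(String[] args) {
        // Terms strictly between "apple" and "orange"; false selects
        // exclusive endpoints, true would make the range inclusive.
        RangeQuery query = new RangeQuery(
            new Term("title", "apple"),
            new Term("title", "orange"),
            false);
        System.out.println(query.toString("title"));
      }
    }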
Describe read method here.
Attempts to read multiple entries from the enumeration, up to length of
docs.
Reads and returns a single byte.
Reads a specified number of bytes into an array at the specified offset.
Returns the next character from the selected input.
Returns the next character from the selected input.
Returns the next character from the selected input.
Reads UTF-8 encoded characters into an array.
The value of the field as a Reader, or null.
Reads four bytes and returns an int.
Expert: implements buffer refill.
Reads eight bytes and returns a long.
Reads an int stored in variable-length format.
Reads a long stored in variable-length format.
Releases exclusive access.
Special comparator for sorting hits according to computed relevance (document score).
Represents sorting by computed relevance.
A remote searchable implementation.
Constructs and exports a remote searcher.
Removes field with the specified name from the document.
Removes all fields with the given name from the document.
Renames an existing file in the directory.
Renames an existing file in the directory.
Renames an existing file in the directory.
If true, documents which do not match this sub-query will not match the
boolean query.
Resets this to an empty buffer.
Expert: called to re-write queries into primitive queries.
Rewrites the wrapped query.
Expert: called to re-write queries into primitive queries.
Expert: called to re-write queries into primitive queries.
Expert: called to re-write queries into primitive queries.
FIXME: Describe rewrite method here.
Expert: called to re-write queries into primitive queries.
Expert: called to re-write queries into primitive queries.
Analyzer for the Russian language.
Builds an analyzer with the given stop words.
Builds an analyzer with the given stop words.
The RussianCharsets class contains encoding schemes (charsets) and a
toLowerCase() method implementation for Russian characters in Unicode, KOI8,
and CP1252.
A RussianLetterTokenizer is a tokenizer that extends LetterTokenizer by
additionally looking up letters in a given "Russian charset".
Normalizes token text to lower case, analyzing the given ("Russian") charset.
A filter that stems Russian words.
Expert: The score of this document for the query.
Sort by document score (relevancy).
Returns the score of the current document.
Returns the score for the nth document in this set.
Scores all documents and passes them to a collector.
Expert: Returned by low-level search implementations.
Expert: Constructs a ScoreDoc.
Expert: Compares two ScoreDoc objects for sorting.
Expert: The top hits for the query.
Expert: Implements scoring for a class of queries.
Constructs a scorer for this.
Returns the documents matching query.
Returns the documents matching query and filter.
A search implementation which spawns a new thread for each Searchable, waits
for each search to complete, and merges the results back together.
Expert: Low-level search implementation.
Expert: Low-level search implementation.
A search implementation allowing sorting which spawns a new thread for each
Searchable, waits for each search to complete, and merges the results back
together.
Expert: Low-level search implementation with arbitrary sorting.
Expert: Low-level search implementation with arbitrary sorting.
Returns documents matching query and filter, sorted by sort.
Returns documents matching query, sorted by sort.
The interface for search implementations.
An abstract base class for search implementations.
Sets current position in this file, where the next read will occur.
Sets current position in this file, where the next write will occur.
Sets current position in this file, where the next write will occur.
Describe seek method here.
Sets this to the data for a term.
Sets this to the data for the current term in a TermEnum.
Sets this to the data for the current term in a TermEnum.
Sets the value of bit to one.
Sets a boost factor for hits on any field of this document.
Sets the boost factor for hits on this field.
Sets the boost for this query clause to b.
Set the default Similarity implementation used by indexing and search
code.
Sets the description of this explanation node.
Set an alternative exclusion list for this filter.
Set an alternative exclusion list for this filter.
Set the default minimum similarity for fuzzy queries.
Set locale used by date range parsing.
Set the maximum number of clauses permitted.
Expert: Resets the normalization factor for the named field of the named
document.
Expert: Resets the normalization factor for the named field of the named
document.
Sets the boolean operator of the QueryParser.
Sets the default slop for phrases.
Set the position increment.
Expert: Set the Similarity implementation used by this IndexWriter.
Expert: Set the Similarity implementation used by this Searcher.
Sets the phrase slop for this query.
Sets the number of other words permitted between words in query phrase.
Sets the sort to the given criteria.
Sets the sort to the given criteria in succession.
Sets the sort to the terms in field, then by index order (document number).
Sets the sort to the terms in field, possibly in reverse, then by index order (document number).
Sets the sort to the terms in each field in succession.
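Tying the setSort variants above together, a small sketch; the class and
field name are illustrative:

    import org.apache.lucene.search.Sort;

    public class SortDemo {
      public static Sort reverseDateSort() {
        // Sort by the terms in "date", reversed, then by index order.
        Sort sort = new Sort();
        sort.setSort("date", true);  // true = reverse
        return sort;
      }
    }

The resulting Sort would then be passed to one of the sorted search methods
listed earlier in this index.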
Builds an exclusion list from the words contained in the given file.
Builds an exclusion list from a Hashtable.
Builds an exclusion list from an array of Strings.
Set an alternative/custom GermanStemmer for this filter.
Set an alternative/custom RussianStemmer for this filter.
Setting to turn on usage of a compound file.
Sets the value assigned to this explanation node.
An Analyzer that filters LetterTokenizer with LowerCaseFilter.
Returns the number of bits in this vector.
Returns the number of elements currently stored in the PriorityQueue.
Describe skipTo method here.
Skips to the first match beyond the current whose document number is
greater than or equal to target.
Skips to the first match beyond the current, whose document number is
greater than or equal to target.
Skips entries to the first beyond the current whose document number is
greater than or equal to target.
Skips terms to the first beyond the current whose value is greater than or
equal to target.
Implemented as 1 / (distance + 1).
Computes the amount of a sloppy phrase match, based on an edit distance.
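For illustration, the 1 / (distance + 1) default above as a Similarity
override (subclass name hypothetical):

    import org.apache.lucene.search.DefaultSimilarity;

    // Hypothetical subclass showing the documented sloppyFreq() default:
    // closer sloppy phrase matches (smaller edit distance) contribute
    // more to the score.
    public class SloppySimilarity extends DefaultSimilarity {
      public float sloppyFreq(int distance) {
        return 1.0f / (distance + 1);
      }
    }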
Sort - class org.apache.lucene.search.Sort. Encapsulates sort criteria for returned hits.
Sort() - constructor for class org.apache.lucene.search.Sort. Sorts by computed relevance.
Sorts by the criteria in the given SortField.
Sorts in succession by the criteria in each SortField.
Sorts by the terms in field, then by index order (document number).
Sorts possibly in reverse by the terms in field, then by index order (document number).
Sorts in succession by the terms in each field.
Abstract base class for sorting hits returned by a Query.
Expert: returns a comparator for sorting ScoreDocs.
Stores information about how to sort documents by terms in an individual
field.
Creates a sort by terms in the given field where the type of term value is determined dynamically (AUTO).
Creates a sort, possibly in reverse, by terms in the given field where the type of term value is determined dynamically (AUTO).
Creates a sort by terms in the given field with the type of term
values explicitly given.
Creates a sort, possibly in reverse, by terms in the given field with the
type of term values explicitly given.
Creates a sort by terms in the given field sorted
according to the given locale.
Creates a sort, possibly in reverse, by terms in the given field sorted
according to the given locale.
Creates a sort with a custom comparison function.
Creates a sort, possibly in reverse, with a custom comparison function.
Returns the type of sort.
Returns the value used to sort the given document.
Matches spans near the beginning of a field.
Construct a SpanFirstQuery matching spans in match whose end position is less than or equal to end.
Matches spans which are near one another.
Construct a SpanNearQuery.
Removes matches which overlap with another SpanQuery.
Construct a SpanNotQuery matching spans from include which have no overlap with spans from exclude.
Matches the union of its clauses.
Construct a SpanOrQuery merging the provided clauses.
Base class for span-based queries.
Spans - interface org.apache.lucene.search.spans.Spans. Expert: an enumeration of span matches.
Matches spans containing a term.
Construct a SpanTermQuery matching the named term's spans.
This variable determines which constructor was used to create
this object and thereby affects the semantics of the
"getMessage" method (see below).
This variable determines which constructor was used to create
this object and thereby affects the semantics of the
"getMessage" method (see below).
This field is used to access special tokens that occur prior to this
token, but after the immediately preceding regular (non-special) token.
This field is used to access special tokens that occur prior to this
token, but after the immediately preceding regular (non-special) token.
Builds an analyzer with the given stop words.
A grammar-based tokenizer constructed with JavaCC.
Constructs a tokenizer for this Reader.
start() - method in class org.apache.lucene.search.spans.Spans. Returns the start position of the current match.
Returns this Token's starting offset, the position of the first character
corresponding to this token in the source text.
Stems the given term to a unique discriminator.
An array containing some common English words that are usually not
useful for searching.
Filters LetterTokenizer with LowerCaseFilter and StopFilter.
Builds an analyzer which removes words in ENGLISH_STOP_WORDS.
Builds an analyzer which removes words in the provided array.
Removes stop words from a token stream.
Constructs a filter which removes words from the input
TokenStream that are named in the Hashtable.
Constructs a filter which removes words from the input
TokenStream that are named in the Set.
Constructs a filter which removes words from the input
TokenStream that are named in the array of words.
Sort using term values as Strings.
Indicator for StringIndex values in the cache.
Compares two strings, character by character, and returns the
first position where the two strings differ from one another.
Methods for manipulating strings.
Expert: Stores term text values and document ordering data.
Creates one of these objects.
Converts a string-encoded date into a Date object.
Converts a string-encoded date into a millisecond time.
The value of the field as a String, or null.
Returns the document number of document n within its sub-index.
Returns the index of the searcher for document n in the array used to construct this searcher.
The sum of squared weights of contained query clauses.
True iff running on SunOS.
Term - class org.apache.lucene.index.Term. A Term represents a word from text.
Returns the current Term in the enumeration.
Returns the current Term in the enumeration.
Constructs a Term with the given field and text.
Equality comparison on the term.
The termCompare method in FuzzyTermEnum uses Levenshtein distance to
calculate the distance between the given term and the comparing term.
TermDocs provides an interface for enumerating <document, frequency>
pairs for a term.
Returns an unpositioned TermDocs enumerator.
Returns an enumeration of all the documents which contain term.
Abstract class for enumerating terms.
Provides access to stored term vector of
a document field.
TermPositions provides an interface for enumerating the <document,
frequency, <position>* > tuples for a term.
Returns an enumeration of all the documents which contain term.
Extends TermFreqVector to provide additional information about positions in which each of the terms is found.
A Query that matches documents containing a term.
Constructs a query for the term t.
Returns an enumeration of all the terms in the index.
Returns an enumeration of all terms after a given term.
Returns the Token's term text.
text() - method in class org.apache.lucene.index.Term. Returns the text of this term.
Constructs a Reader-valued Field that is tokenized and indexed, but is
not stored in the index verbatim.
Constructs a Reader-valued Field that is tokenized and indexed, but is
not stored in the index verbatim.
Constructs a String-valued Field that is tokenized and indexed,
and is stored in the index, for return with hits.
Constructs a String-valued Field that is tokenized and indexed,
and is stored in the index, for return with hits.
Implemented as sqrt(freq).
Computes a score factor based on a term or phrase's frequency in a
document.
Computes a score factor based on a term or phrase's frequency in a
document.
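For illustration, the sqrt(freq) default above as a Similarity override
(subclass name hypothetical):

    import org.apache.lucene.search.DefaultSimilarity;

    // Hypothetical subclass showing the documented tf() default: repeated
    // occurrences raise the score, with diminishing returns.
    public class TfSimilarity extends DefaultSimilarity {
      public float tf(float freq) {
        return (float) Math.sqrt(freq);
      }
    }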
Converts a millisecond time to a string suitable for indexing.
Render an explanation as HTML.
A Token is an occurrence of a term from the text of a field.
Token - class org.apache.lucene.analysis.standard.Token. Describes the input token stream.
Describes the input token stream.
Constructs a Token with the given term text, and start & end offsets.
Constructs a Token with the given text, start and end offsets, & type.
A TokenFilter is a TokenStream whose input is another token stream.
Call TokenFilter(TokenStream) instead.
Construct a token stream filtering the given input.
This is a reference to the "tokenImage" array of the generated
parser within which the parse error occurred.
This is a reference to the "tokenImage" array of the generated
parser within which the parse error occurred.
A Tokenizer is a TokenStream whose input is a Reader.
Construct a tokenizer with null input.
Construct a token stream processing the given input.
A TokenStream enumerates the sequence of tokens, either from
fields of a document or from query text.
Creates a TokenStream which tokenizes all the text in the provided
Reader.
Creates a TokenStream which tokenizes all the text in the provided
Reader.
Creates a TokenStream which tokenizes all the text in the provided Reader.
Creates a TokenStream which tokenizes all the text in the provided Reader.
Creates a TokenStream which tokenizes all the text in the provided
Reader.
Filters LowerCaseTokenizer with StopFilter.
Thrown when an attempt is made to add more than getMaxClauseCount()
clauses.
Returns the least element of the PriorityQueue in constant time.
Expert: Returned by low-level search implementations.
Expert: Returned by low-level sorted search implementations.
Prints the fields of a document for human consumption.
Render an explanation as text.
Prints a Field for human consumption.
Prints a query to a string.
Prints a user-readable version of this query.
Prints a user-readable version of this query.
Prints a user-readable version of this query.
Prints a user-readable version of this query.
Prints a user-readable version of this query.
Prints a user-readable version of this query.
Prints a user-readable version of this query.
Prints a query to a string, with field as the default field for terms.
Prints a user-readable version of this query.
Prints a query to a string, with field as the default field for terms.
Prints a query to a string, with field as the default field for terms.
Prints a query to a string, with field as the default field for terms.
Prints a query to a string, with field as the default field for terms.
Prints a query to a string, with field as the default field for terms.
Prints a user-readable version of this query.
Expert: The total number of hits for the query.
Set the modified time of an existing file to now.
Set the modified time of an existing file to now.
Set the modified time of an existing file to now.
type() - method in class org.apache.lucene.analysis.Token. Returns this Token's lexical type.