
Performing a search

Now that we have a Query object, we are ready to execute a search. We will leverage IndexSearcher from two recipes ago to perform a search.

Note that, by default, Lucene sorts results based on relevance. It has a scoring mechanism that assigns a score to every matching document, and this score determines the sort order of the search results. A score can be affected by the rules defined in the query string (for example, a must-match clause, an AND operation, and so on). It can also be altered programmatically. We have set aside a chapter to explore the concept of scoring and how we can leverage it to customize a search engine.
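
To make this concrete, every hit returned by a search carries its score, and IndexSearcher can explain how that score was computed. Here is a minimal sketch, assuming the indexSearcher, query, and docs variables from the listing in the next section (Explanation lives in org.apache.lucene.search):

// Inspect the relevance score of each hit and ask Lucene to explain it
for (ScoreDoc hit : docs.scoreDocs) {
    System.out.println("DocId " + hit.doc + " scored " + hit.score);
    Explanation explanation = indexSearcher.explain(query, hit.doc);
    System.out.println(explanation.toString());
}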

How to do it...

Here is what we have learned so far, put together into an executable program:

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class LuceneTest {
    public static void main(String[] args) throws IOException, ParseException {
        // Index a single document into an in-memory index
        Analyzer analyzer = new StandardAnalyzer();
        Directory directory = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig(Version.LATEST, analyzer);
        IndexWriter indexWriter = new IndexWriter(directory, config);
        Document doc = new Document();
        String text = "Lucene is an Information Retrieval library written in Java";
        doc.add(new TextField("Content", text, Field.Store.YES));
        indexWriter.addDocument(doc);
        indexWriter.close();

        // Open a searcher on the index and parse the search string into a Query
        IndexReader indexReader = DirectoryReader.open(directory);
        IndexSearcher indexSearcher = new IndexSearcher(indexReader);
        QueryParser parser = new QueryParser("Content", analyzer);
        Query query = parser.parse("Lucene");

        // Execute the search and display the matching documents
        int hitsPerPage = 10;
        TopDocs docs = indexSearcher.search(query, hitsPerPage);
        ScoreDoc[] hits = docs.scoreDocs;
        int end = Math.min(docs.totalHits, hitsPerPage);
        System.out.println("Total Hits: " + docs.totalHits);
        System.out.println("Results: ");
        for (int i = 0; i < end; i++) {
            Document d = indexSearcher.doc(hits[i].doc);
            System.out.println("Content: " + d.get("Content"));
        }
    }
}

How it works…

The preceding code sets up a StandardAnalyzer to analyze text, uses a RAMDirectory as the index store, configures an IndexWriter to put a piece of content into the index, and uses a QueryParser to generate a Query object in order to perform a search. It also includes sample code that shows how to retrieve search results from TopDocs by displaying the total hits and showing the matching documents by DocId.

Here is a diagram showing how the search flows between the components:

A search string enters QueryParser.parse(String). QueryParser then uses an analyzer to process the search string and produce a set of tokens. The tokens are mapped into a Query object, which is sent to IndexSearcher to execute a search. The result returned by IndexSearcher is a TopDocs object that contains the match statistics (the total number of hits) and the DocIds of the matching documents.
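
The following condensed sketch maps these steps to code, assuming an existing indexSearcher and the same analyzer instance that was used at indexing time:

QueryParser parser = new QueryParser("Content", analyzer); // analyzer tokenizes the search string
Query query = parser.parse("Lucene");                      // tokens are mapped into a Query object
TopDocs docs = indexSearcher.search(query, 10);            // IndexSearcher executes the search
System.out.println("Total matches: " + docs.totalHits);    // match statistics
for (ScoreDoc hit : docs.scoreDocs) {                      // DocIds of the matching documents
    System.out.println("DocId: " + hit.doc);
}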

Note that it is preferable to use the same analyzer for both indexing and searching to get the best results.
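
For example, StandardAnalyzer lowercases terms, so the document above is indexed under the term "lucene". Parsing the query with the same analyzer produces the same token and the document matches; a hypothetical switch to WhitespaceAnalyzer (which does not lowercase) at search time would look for "Lucene" and find nothing:

// Same analyzer as indexing: "Lucene" is lowercased to "lucene" and matches the indexed term
Query matches = new QueryParser("Content", new StandardAnalyzer()).parse("Lucene");
// Different analyzer: WhitespaceAnalyzer keeps "Lucene" as-is, so the term does not match
Query misses = new QueryParser("Content", new WhitespaceAnalyzer()).parse("Lucene");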
