using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

// Optimize() and MaxFieldLength belong to the Lucene.Net 3.x API (Optimize was
// replaced by ForceMerge in 4.8), so the analyzer uses the matching 3.x version constant.
IndexWriter writer = new IndexWriter(
    new SimpleFSDirectory(new DirectoryInfo(indexPath)),
    new StandardAnalyzer(Version.LUCENE_30),
    IndexWriter.MaxFieldLength.UNLIMITED);

writer.Optimize(); // Optimize the index (merge segments)
writer.Dispose();  // Close the writer and release the index lock
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

IndexWriter writer = new IndexWriter(
    new SimpleFSDirectory(new DirectoryInfo(indexPath)),
    new StandardAnalyzer(Version.LUCENE_30),
    IndexWriter.MaxFieldLength.UNLIMITED);

// Add some documents to the index here (see the sketch below)

writer.Commit();   // Commit the changes
writer.Optimize(); // Optimize the index
writer.Dispose();  // Close the writer

In this example, documents are added to the index before calling Commit(). Optimize() is then called after the changes have been committed, and finally the writer is closed.

Package/Library: Lucene.Net.Index
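The placeholder comment above could be filled in along these lines. This is a minimal sketch against the Lucene.Net 3.x Document/Field API, continuing the writer from the example; the field names ("id", "title") and values are made up for illustration.

using Lucene.Net.Documents;

Document doc = new Document();
doc.Add(new Field("id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("title", "Hello Lucene.Net", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc); // Buffered in the writer; becomes visible to readers after Commit()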
It is recommended that this method be called upon completion of indexing. In environments with frequent updates, optimize is best done during low volume times, if at all.
See http://www.gossamer-threads.com/lists/lucene/java-dev/47895 for more discussion.
Note that optimize requires 2X the index size in free space in your Directory (3X if you're using compound file format). For example, if your index size is 10 MB, then you need 20 MB free for optimize to complete (30 MB if you're using compound file format).
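As a rough illustration of that rule of thumb, a pre-flight check could look like the following. HasRoomToOptimize is a hypothetical helper, not part of Lucene.Net: it sums the sizes of the files currently in the index directory and compares the 2X/3X estimate against the free space on the drive holding the index.

using System.IO;
using System.Linq;

static bool HasRoomToOptimize(string indexPath, bool compoundFileFormat)
{
    // Current on-disk size of the index (all files in the index directory).
    long indexSize = new DirectoryInfo(indexPath).EnumerateFiles().Sum(f => f.Length);

    // 2X free space needed normally, 3X with the compound file format.
    long required = indexSize * (compoundFileFormat ? 3L : 2L);

    long free = new DriveInfo(Path.GetPathRoot(Path.GetFullPath(indexPath))).AvailableFreeSpace;
    return free >= required;
}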
If some but not all readers re-open while an optimize is underway, this will cause > 2X temporary space to be consumed as those new readers will then hold open the partially optimized segments at that time. It is best not to re-open readers while optimize is running.
The actual temporary usage could be much less than these figures (it depends on many factors).
In general, once the optimize completes, the total size of the index will be less than the size of the starting index. It could be quite a bit smaller (if there were many pending deletes) or just slightly smaller.
If an Exception is hit during optimize(), for example due to disk full, the index will not be corrupt and no documents will have been lost. However, it may have been partially optimized (some segments were merged but not all), and it's possible that one of the segments in the index will be in non-compound format even when using compound file format. This will occur when the Exception is hit during conversion of the segment into compound format.
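A minimal sketch of guarding the call accordingly; TryOptimize is a hypothetical wrapper, and it assumes a disk-full condition surfaces as an IOException.

using System;
using System.IO;
using Lucene.Net.Index;

static void TryOptimize(IndexWriter writer)
{
    try
    {
        writer.Optimize();
    }
    catch (IOException ex)
    {
        // Per the note above, the index is not corrupt at this point; it may
        // simply be partially optimized (some segments merged, some not).
        Console.Error.WriteLine("Optimize did not complete (disk full?): " + ex.Message);
    }
}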
This call will optimize those segments present in the index when the call started. If other threads are still adding documents and flushing segments, those newly created segments will not be optimized unless you call optimize again.
NOTE: if this method hits an OutOfMemoryException, you should immediately close the writer. See the IndexWriter class documentation for details.
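A hedged sketch of that advice; OptimizeGuarded is a hypothetical wrapper around a writer such as the one created in the examples above.

using System;
using Lucene.Net.Index;

static void OptimizeGuarded(IndexWriter writer)
{
    try
    {
        writer.Optimize();
    }
    catch (OutOfMemoryException)
    {
        // As noted above: close the writer immediately rather than continuing
        // to use it, so the on-disk index stays in its last committed state.
        writer.Dispose();
        throw;
    }
}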
Signature: public Optimize() : void
Return: void