It is also useful for doing things like entity extraction or proper noun analysis as part of the analysis workflow and saving off those tokens for use in another field.
TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(version, reader1));
TeeSinkTokenFilter.SinkTokenStream sink1 = source1.newSinkTokenStream();
TeeSinkTokenFilter.SinkTokenStream sink2 = source1.newSinkTokenStream();
TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(version, reader2));
source2.addSinkTokenStream(sink1);
source2.addSinkTokenStream(sink2);
TokenStream final1 = new LowerCaseFilter(version, source1);
TokenStream final2 = source2;
TokenStream final3 = new EntityDetect(sink1);
TokenStream final4 = new URLDetect(sink2);
d.add(new TextField("f1", final1, Field.Store.NO));
d.add(new TextField("f2", final2, Field.Store.NO));
d.add(new TextField("f3", final3, Field.Store.NO));
d.add(new TextField("f4", final4, Field.Store.NO));

In this example, sink1 and sink2 will both receive tokens from both reader1 and reader2 after whitespace tokenization. Any of these streams can be wrapped in further analysis, and more "sources" can be inserted if desired. It is important that the tees are consumed before the sinks: in the example above, the tee field names ("f1", "f2") must sort before the sink field names ("f3", "f4"), because fields are processed in lexicographic order of their names. If you are not sure which stream is consumed first, you can simply add another sink for each tee and then push all tokens to the sinks at once using consumeAllTokens(); a tee is exhausted after that call. To do so, change the example above to:

...
TokenStream final1 = new LowerCaseFilter(version, source1.newSinkTokenStream());
TokenStream final2 = source2.newSinkTokenStream();
source1.consumeAllTokens();
source2.consumeAllTokens();
...

In this case, the fields can be added in any order, because the sources are no longer consumed directly and every sink is already filled. Note that consumeAllTokens() is a method of TeeSinkTokenFilter, so it is called on the tees, source1 and source2.
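Put together, the fully reordered pipeline might look like the following sketch. As before, version, reader1, reader2, and the document d are assumed to exist, and EntityDetect and URLDetect are the hypothetical filters discussed below.

TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(version, reader1));
TeeSinkTokenFilter.SinkTokenStream sink1 = source1.newSinkTokenStream();
TeeSinkTokenFilter.SinkTokenStream sink2 = source1.newSinkTokenStream();
TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(version, reader2));
source2.addSinkTokenStream(sink1);
source2.addSinkTokenStream(sink2);
// Every field now consumes a sink; nothing reads the tees directly.
TokenStream final1 = new LowerCaseFilter(version, source1.newSinkTokenStream());
TokenStream final2 = source2.newSinkTokenStream();
TokenStream final3 = new EntityDetect(sink1);
TokenStream final4 = new URLDetect(sink2);
// Drain both tees so that every sink is filled before any field is indexed.
source1.consumeAllTokens();
source2.consumeAllTokens();
// The field order no longer matters.
d.add(new TextField("f1", final1, Field.Store.NO));
d.add(new TextField("f2", final2, Field.Store.NO));
d.add(new TextField("f3", final3, Field.Store.NO));
d.add(new TextField("f4", final4, Field.Store.NO));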
Note that the EntityDetect and URLDetect TokenStreams are illustrative only and do not currently exist in Lucene.
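To make the example self-contained, here is one way such a filter could be written. This is a hypothetical sketch, not a Lucene API: the class name EntityDetect and its keep-capitalized-tokens heuristic are invented for illustration.

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical stand-in for the EntityDetect filter used above: it passes
// through only tokens that begin with an upper-case letter, as a crude
// proper-noun heuristic.
public final class EntityDetect extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public EntityDetect(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    while (input.incrementToken()) {
      if (termAtt.length() > 0 && Character.isUpperCase(termAtt.charAt(0))) {
        return true; // keep tokens that look like proper nouns
      }
    }
    return false; // input exhausted
  }
}

A production-quality filter would also adjust the position increments of kept tokens when dropping others, as Lucene's FilteringTokenFilter does; that detail is omitted here for brevity.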