protected override bool IsIndexStale(IndexStats indexesStat, IStorageActionsAccessor actions, bool isIdle, Reference<bool> onlyFoundIdleWork)
{
    var isStale = actions.Staleness.IsMapStale(indexesStat.Id);
    var indexingPriority = indexesStat.Priority;
    if (isStale == false)
        return false;

    if (indexingPriority == IndexingPriority.None)
        return true;

    if ((indexingPriority & IndexingPriority.Normal) == IndexingPriority.Normal)
    {
        onlyFoundIdleWork.Value = false;
        return true;
    }

    if ((indexingPriority & (IndexingPriority.Disabled | IndexingPriority.Error)) != IndexingPriority.None)
        return false;

    if (isIdle == false)
        return false; // everything else is only valid on idle runs

    if ((indexingPriority & IndexingPriority.Idle) == IndexingPriority.Idle)
        return true;

    if ((indexingPriority & IndexingPriority.Abandoned) == IndexingPriority.Abandoned)
    {
        var timeSinceLastIndexing = (SystemTime.UtcNow - indexesStat.LastIndexingTime);
        return (timeSinceLastIndexing > context.Configuration.TimeToWaitBeforeRunningAbandonedIndexes);
    }

    throw new InvalidOperationException("Unknown indexing priority for index " + indexesStat.Id + ": " + indexesStat.Priority);
}
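// The method above gates stale-index work on priority flags using the bitwise pattern
// (priority & flag) == flag, including a combined any-of test for Disabled | Error.
// Below is a minimal, self-contained sketch of that pattern; the flag values are
// illustrative assumptions, not necessarily RavenDB's exact enum definition.
using System;

[Flags]
enum IndexingPriority // assumed values, for illustration only
{
    None = 0,
    Normal = 1,
    Disabled = 2,
    Idle = 4,
    Abandoned = 8,
    Error = 16
}

static class PriorityFlagsDemo
{
    static void Main()
    {
        var priority = IndexingPriority.Idle | IndexingPriority.Error;

        // Single-flag test, equivalent to priority.HasFlag(IndexingPriority.Error)
        // but without the enum boxing HasFlag incurred on older runtimes.
        bool isError = (priority & IndexingPriority.Error) == IndexingPriority.Error;

        // Any-of test: true when at least one of the combined flags is set,
        // which is how the Disabled | Error short-circuit above works.
        bool disabledOrError =
            (priority & (IndexingPriority.Disabled | IndexingPriority.Error)) != IndexingPriority.None;

        Console.WriteLine("isError: {0}, disabledOrError: {1}", isError, disabledOrError);
        // prints: isError: True, disabledOrError: True
    }
}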
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IEnumerable<dynamic> documents, WorkContext context, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    actions.Indexing.SetCurrentIndexStatsTo(name);
    var count = 0;
    Func<object, object> documentIdFetcher = null;
    var reduceKeys = new HashSet<string>(StringComparer.InvariantCultureIgnoreCase);
    var documentsWrapped = documents.Select(doc =>
    {
        var documentId = doc.__document_id;
        foreach (var reduceKey in actions.MappedResults.DeleteMappedResultsForDocumentId((string)documentId, name))
        {
            reduceKeys.Add(reduceKey);
        }
        return doc;
    });
    foreach (var doc in RobustEnumeration(documentsWrapped, viewGenerator.MapDefinition, actions, context))
    {
        count++;
        documentIdFetcher = CreateDocumentIdFetcherIfNeeded(documentIdFetcher, doc);
        var docIdValue = documentIdFetcher(doc);
        if (docIdValue == null)
            throw new InvalidOperationException("Could not find document id for this document");

        var reduceValue = viewGenerator.GroupByExtraction(doc);
        if (reduceValue == null)
        {
            logIndexing.DebugFormat("Field {0} is used as the reduce key and cannot be null, skipping document {1}", viewGenerator.GroupByExtraction, docIdValue);
            continue;
        }
        var reduceKey = ReduceKeyToString(reduceValue);
        var docId = docIdValue.ToString();

        reduceKeys.Add(reduceKey);

        var data = GetMapedData(doc);
        logIndexing.DebugFormat("Mapped result for '{0}': '{1}'", name, data);

        var hash = ComputeHash(name, reduceKey);
        actions.MappedResults.PutMappedResult(name, docId, reduceKey, data, hash);

        actions.Indexing.IncrementSuccessIndexing();
    }

    actions.Tasks.AddTask(new ReduceTask
    {
        Index = name,
        ReduceKeys = reduceKeys.ToArray()
    }, minimumTimestamp);

    logIndexing.DebugFormat("Mapped {0} documents for {1}", count, name);
}
public long GetNextIdentityValueWithoutOverwritingOnExistingDocuments(string key, IStorageActionsAccessor actions, TransactionInformation transactionInformation)
{
    int tries;
    return GetNextIdentityValueWithoutOverwritingOnExistingDocuments(key, actions, transactionInformation, out tries);
}
public void Delete(string fileName, IStorageActionsAccessor actionsAccessor = null)
{
    RavenJObject metadata = null;

    Action<IStorageActionsAccessor> delete = accessor =>
    {
        accessor.DeleteConfig(RavenFileNameHelper.ConflictConfigNameForFile(fileName));

        metadata = accessor.GetFile(fileName, 0, 0).Metadata;
        metadata.Remove(SynchronizationConstants.RavenSynchronizationConflict);
        metadata.Remove(SynchronizationConstants.RavenSynchronizationConflictResolution);

        accessor.UpdateFileMetadata(fileName, metadata);
    };

    if (actionsAccessor != null)
    {
        delete(actionsAccessor);
    }
    else
    {
        storage.Batch(delete);
    }

    if (metadata != null)
    {
        index.Index(fileName, metadata);
    }
}
protected override DatabaseTask GetApplicableTask(IStorageActionsAccessor actions)
{
    var removeFromIndexTasks = (DatabaseTask)actions.Tasks.GetMergedTask<RemoveFromIndexTask>();
    var touchReferenceDocumentIfChangedTask = removeFromIndexTasks ?? actions.Tasks.GetMergedTask<TouchReferenceDocumentIfChangedTask>();
    return touchReferenceDocumentIfChangedTask;
}
private TimeSpan SynchronizationTimeout(IStorageActionsAccessor accessor)
{
    TimeSpan configuredTimeout; // local declaration was missing in the original snippet
    var timeoutConfigExists = accessor.TryGetConfigurationValue(
        SynchronizationConstants.RavenSynchronizationLockTimeout, out configuredTimeout);

    return timeoutConfigExists ? configuredTimeout : defaultTimeout;
}
public IndexingStorageActions(TableStorage tableStorage, IUuidGenerator generator, Reference<SnapshotReader> snapshot, Reference<WriteBatch> writeBatch, IStorageActionsAccessor storageActionsAccessor, IBufferPool bufferPool)
    : base(snapshot, bufferPool)
{
    this.tableStorage = tableStorage;
    this.generator = generator;
    this.writeBatch = writeBatch;
    this.currentStorageActionsAccessor = storageActionsAccessor;
}
public MappedResultsStorageActions(TableStorage tableStorage, IUuidGenerator generator, OrderedPartCollection<AbstractDocumentCodec> documentCodecs, Reference<SnapshotReader> snapshot, Reference<WriteBatch> writeBatch, IBufferPool bufferPool, IStorageActionsAccessor storageActionsAccessor)
    : base(snapshot, bufferPool)
{
    this.tableStorage = tableStorage;
    this.generator = generator;
    this.documentCodecs = documentCodecs;
    this.writeBatch = writeBatch;
    this.storageActionsAccessor = storageActionsAccessor;
}
private void ReplicateDocument(IStorageActionsAccessor actions, string id, RavenJObject metadata, RavenJObject document, string src)
{
    var existingDoc = actions.Documents.DocumentByKey(id, null);
    if (existingDoc == null)
    {
        log.DebugFormat("New document {0} replicated successfully from {1}", id, src);
        actions.Documents.AddDocument(id, Guid.Empty, document, metadata);
        return;
    }

    var existingDocumentIsInConflict = existingDoc.Metadata[ReplicationConstants.RavenReplicationConflict] != null;
    if (existingDocumentIsInConflict == false && // if the current document is not in conflict, we can continue without having to keep conflict semantics
        (IsDirectChildOfCurrentDocument(existingDoc, metadata))) // this update is direct child of the existing doc, so we are fine with overwriting this
    {
        log.DebugFormat("Existing document {0} replicated successfully from {1}", id, src);
        actions.Documents.AddDocument(id, null, document, metadata);
        return;
    }

    var newDocumentConflictId = id + "/conflicts/" + metadata.Value<string>(ReplicationConstants.RavenReplicationSource) + "/" + metadata.Value<string>("@etag");
    metadata.Add(ReplicationConstants.RavenReplicationConflict, RavenJToken.FromObject(true));
    actions.Documents.AddDocument(newDocumentConflictId, null, document, metadata);

    if (existingDocumentIsInConflict) // the existing document is in conflict
    {
        log.DebugFormat("Conflicted document {0} has a new version from {1}, adding to conflicted documents", id, src);
        // just update the current doc with the new conflict document
        existingDoc.DataAsJson.Value<RavenJArray>("Conflicts").Add(RavenJToken.FromObject(newDocumentConflictId));
        actions.Documents.AddDocument(id, existingDoc.Etag, existingDoc.DataAsJson, existingDoc.Metadata);
        return;
    }

    log.DebugFormat("Existing document {0} is in conflict with replicated version from {1}, marking document as conflicted", id, src);
    // we have a new conflict
    // move the existing doc to a conflict and create a conflict document
    var existingDocumentConflictId = id + "/conflicts/" + Database.TransactionalStorage.Id + "/" + existingDoc.Etag;
    existingDoc.Metadata.Add(ReplicationConstants.RavenReplicationConflict, RavenJToken.FromObject(true));
    actions.Documents.AddDocument(existingDocumentConflictId, null, existingDoc.DataAsJson, existingDoc.Metadata);
    actions.Documents.AddDocument(id, null,
        new RavenJObject
        {
            { "Conflicts", new RavenJArray(existingDocumentConflictId, newDocumentConflictId) }
        },
        new RavenJObject
        {
            { ReplicationConstants.RavenReplicationConflict, true },
            { "@Http-Status-Code", 409 },
            { "@Http-Status-Description", "Conflict" }
        });
}
private TimeSpan SynchronizationTimeout(IStorageActionsAccessor accessor)
{
    string timeoutConfigKey = string.Empty;
    accessor.TryGetConfigurationValue<string>(SynchronizationConstants.RavenSynchronizationLockTimeout, out timeoutConfigKey);

    TimeSpan timeoutConfiguration;
    if (TimeSpan.TryParse(timeoutConfigKey, out timeoutConfiguration))
        return timeoutConfiguration;

    return defaultTimeout;
}
public MappedResultsStorageActions(TableStorage tableStorage, IUuidGenerator generator, OrderedPartCollection<AbstractDocumentCodec> documentCodecs, Reference<SnapshotReader> snapshot, Reference<WriteBatch> writeBatch, IBufferPool bufferPool, IStorageActionsAccessor storageActionsAccessor, ConcurrentDictionary<int, RemainingReductionPerLevel> ScheduledReductionsPerViewAndLevel)
    : base(snapshot, bufferPool)
{
    this.tableStorage = tableStorage;
    this.generator = generator;
    this.documentCodecs = documentCodecs;
    this.writeBatch = writeBatch;
    this.storageActionsAccessor = storageActionsAccessor;
    this.scheduledReductionsPerViewAndLevel = ScheduledReductionsPerViewAndLevel;
}
public void LockByCreatingSyncConfiguration(string fileName, FileSystemInfo sourceFileSystem, IStorageActionsAccessor accessor)
{
    var syncLock = new SynchronizationLock
    {
        SourceFileSystem = sourceFileSystem,
        FileLockedAt = DateTime.UtcNow
    };

    accessor.SetConfig(RavenFileNameHelper.SyncLockNameForFile(fileName), JsonExtensions.ToJObject(syncLock));

    log.Debug("File '{0}' was locked", fileName);
}
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IEnumerable<object> documents, WorkContext context, IStorageActionsAccessor actions)
{
    actions.Indexing.SetCurrentIndexStatsTo(name);
    var count = 0;
    Write(indexWriter =>
    {
        bool madeChanges = false;
        PropertyDescriptorCollection properties = null;
        var processedKeys = new HashSet<string>();
        var documentsWrapped = documents.Select((dynamic doc) =>
        {
            var documentId = doc.__document_id.ToString();
            if (processedKeys.Add(documentId) == false)
                return doc;
            madeChanges = true;
            context.IndexUpdateTriggers.Apply(trigger => trigger.OnIndexEntryDeleted(name, documentId));
            indexWriter.DeleteDocuments(new Term("__document_id", documentId));
            return doc;
        });
        foreach (var doc in RobustEnumeration(documentsWrapped, viewGenerator.MapDefinition, actions, context))
        {
            count++;
            string newDocId;
            IEnumerable<AbstractField> fields;
            if (doc is DynamicJsonObject)
                fields = ExtractIndexDataFromDocument((DynamicJsonObject)doc, out newDocId);
            else
                fields = ExtractIndexDataFromDocument(properties, doc, out newDocId);

            if (newDocId != null)
            {
                var luceneDoc = new Document();
                luceneDoc.Add(new Field("__document_id", newDocId, Field.Store.YES, Field.Index.NOT_ANALYZED));

                madeChanges = true;
                CopyFieldsToDocument(luceneDoc, fields);
                context.IndexUpdateTriggers.Apply(trigger => trigger.OnIndexEntryCreated(name, newDocId, luceneDoc));
                log.DebugFormat("Index '{0}' resulted in: {1}", name, luceneDoc);
                indexWriter.AddDocument(luceneDoc);
            }

            actions.Indexing.IncrementSuccessIndexing();
        }
        return madeChanges;
    });
    log.DebugFormat("Indexed {0} documents for {1}", count, name);
}
public void LockByCreatingSyncConfiguration(string fileName, ServerInfo sourceServer, IStorageActionsAccessor accessor)
{
    var syncLock = new SynchronizationLock
    {
        SourceServer = sourceServer,
        FileLockedAt = DateTime.UtcNow
    };

    accessor.SetConfig(RavenFileNameHelper.SyncLockNameForFile(fileName), syncLock.AsConfig());

    log.Debug("File '{0}' was locked", fileName);
}
private void ReplicateAttachment(IStorageActionsAccessor actions, string id, JObject metadata, byte[] data, Guid lastEtag, string src)
{
    var existingAttachment = actions.Attachments.GetAttachment(id);
    if (existingAttachment == null)
    {
        log.DebugFormat("New attachment {0} replicated successfully from {1}", id, src);
        actions.Attachments.AddAttachment(id, Guid.Empty, data, metadata);
        return;
    }

    var existingDocumentIsInConflict = existingAttachment.Metadata[ReplicationConstants.RavenReplicationConflict] != null;
    if (existingDocumentIsInConflict == false && // if the current document is not in conflict, we can continue without having to keep conflict semantics
        (IsDirectChildOfCurrentAttachment(existingAttachment, metadata))) // this update is direct child of the existing doc, so we are fine with overwriting this
    {
        log.DebugFormat("Existing document {0} replicated successfully from {1}", id, src);
        actions.Attachments.AddAttachment(id, null, data, metadata);
        return;
    }

    var newDocumentConflictId = id + "/conflicts/" + metadata.Value<string>(ReplicationConstants.RavenReplicationSource) + "/" + lastEtag;
    metadata.Add(ReplicationConstants.RavenReplicationConflict, JToken.FromObject(true));
    actions.Attachments.AddAttachment(newDocumentConflictId, null, data, metadata);

    if (existingDocumentIsInConflict) // the existing document is in conflict
    {
        log.DebugFormat("Conflicted document {0} has a new version from {1}, adding to conflicted documents", id, src);
        // just update the current doc with the new conflict document
        existingAttachment.Metadata.Value<JArray>("Conflicts").Add(JToken.FromObject(newDocumentConflictId));
        actions.Attachments.AddAttachment(id, existingAttachment.Etag, existingAttachment.Data, existingAttachment.Metadata);
        return;
    }

    log.DebugFormat("Existing document {0} is in conflict with replicated version from {1}, marking document as conflicted", id, src);
    // we have a new conflict
    // move the existing doc to a conflict and create a conflict document
    var existingDocumentConflictId = id + "/conflicts/" + Database.TransactionalStorage.Id + "/" + existingAttachment.Etag;
    existingAttachment.Metadata.Add(ReplicationConstants.RavenReplicationConflict, JToken.FromObject(true));
    actions.Attachments.AddAttachment(existingDocumentConflictId, null, existingAttachment.Data, existingAttachment.Metadata);
    actions.Attachments.AddAttachment(id, null,
        new JObject(
            new JProperty("Conflicts", new JArray(existingDocumentConflictId, newDocumentConflictId))).ToBytes(),
        new JObject(
            new JProperty(ReplicationConstants.RavenReplicationConflict, true),
            new JProperty("@Http-Status-Code", 409),
            new JProperty("@Http-Status-Description", "Conflict")));
}
public static SynchronizationConfig GetOrDefault(IStorageActionsAccessor accessor)
{
    try
    {
        if (accessor.ConfigExists(SynchronizationConstants.RavenSynchronizationConfig) == false)
            return new SynchronizationConfig(); // return a default one

        return accessor.GetConfig(SynchronizationConstants.RavenSynchronizationConfig).JsonDeserialization<SynchronizationConfig>();
    }
    catch (Exception e)
    {
        Log.Warn("Could not deserialize a synchronization configuration", e);
        return new SynchronizationConfig(); // return a default one
    }
}
public bool TimeoutExceeded(string fileName, IStorageActionsAccessor accessor)
{
    SynchronizationLock syncLock;

    try
    {
        syncLock = accessor.GetConfig(RavenFileNameHelper.SyncLockNameForFile(fileName)).JsonDeserialization<SynchronizationLock>();
    }
    catch (FileNotFoundException)
    {
        return true;
    }

    return (DateTime.UtcNow - syncLock.FileLockedAt).TotalMilliseconds > SynchronizationConfigAccessor.GetOrDefault(accessor).SynchronizationLockTimeoutMiliseconds;
}
public bool TimeoutExceeded(string fileName, IStorageActionsAccessor accessor)
{
    SynchronizationLock syncLock;

    try
    {
        syncLock = accessor.GetConfig(RavenFileNameHelper.SyncLockNameForFile(fileName)).AsObject<SynchronizationLock>();
    }
    catch (FileNotFoundException)
    {
        return true;
    }

    return DateTime.UtcNow - syncLock.FileLockedAt > SynchronizationTimeout(accessor);
}
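// Both TimeoutExceeded variants above follow the same shape: load the per-file lock
// config, treat a missing config as an expired lock, and otherwise compare the lock's
// age against a timeout. A minimal sketch of that decision with the storage accessor
// replaced by plain inputs; DefaultTimeout and the ten-minute value are assumptions
// for illustration, not RavenFS defaults.
using System;

static class LockExpiryDemo
{
    static readonly TimeSpan DefaultTimeout = TimeSpan.FromMinutes(10); // assumed default

    // Mirrors the TryParse fallback in the SynchronizationTimeout variant above.
    static TimeSpan ResolveTimeout(string configuredValue)
    {
        TimeSpan configured;
        return TimeSpan.TryParse(configuredValue, out configured) ? configured : DefaultTimeout;
    }

    // Mirrors TimeoutExceeded: a lock older than the timeout no longer blocks anyone.
    static bool TimeoutExceeded(DateTime? fileLockedAtUtc, TimeSpan timeout)
    {
        if (fileLockedAtUtc == null)
            return true; // no lock config at all behaves like the FileNotFoundException path
        return DateTime.UtcNow - fileLockedAtUtc.Value > timeout;
    }

    static void Main()
    {
        var timeout = ResolveTimeout("00:10:00");
        var lockedFifteenMinutesAgo = DateTime.UtcNow.AddMinutes(-15);

        Console.WriteLine(TimeoutExceeded(lockedFifteenMinutesAgo, timeout)); // True
        Console.WriteLine(TimeoutExceeded(DateTime.UtcNow, timeout));         // False
        Console.WriteLine(TimeoutExceeded(null, timeout));                    // True
    }
}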
private static bool TryGetDeserializedConfig(IStorageActionsAccessor accessor, string configurationName, out FileVersioningConfiguration fileVersioningConfiguration)
{
    if (accessor.ConfigExists(configurationName) == false)
    {
        fileVersioningConfiguration = null;
        return false;
    }

    var configuration = accessor.GetConfig(configurationName);
    if (configuration == null)
    {
        fileVersioningConfiguration = null;
        return false;
    }

    fileVersioningConfiguration = configuration.JsonDeserialization<FileVersioningConfiguration>();
    return true;
}
/// <summary>
/// We need to NOT remove documents that have been removed and then added back.
/// We DO remove documents that would be filtered out because their entity name changed, though.
/// </summary>
private bool FilterDocuments(WorkContext context, IStorageActionsAccessor accessor, string key)
{
    var documentMetadataByKey = accessor.Documents.DocumentMetadataByKey(key, null);
    if (documentMetadataByKey == null)
        return true;

    var generator = context.IndexDefinitionStorage.GetViewGenerator(Index);
    if (generator == null)
        return false;

    if (generator.ForEntityNames.Count == 0)
        return false; // there is a new document and this index applies to it

    var entityName = documentMetadataByKey.Metadata.Value<string>(Constants.RavenEntityName);
    if (entityName == null)
        return true; // this document doesn't belong to this index any longer, need to remove it

    return generator.ForEntityNames.Contains(entityName) == false;
}
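// FilterDocuments above decides whether a queued index-entry removal should still run
// after the document may have been re-added. A standalone sketch of the same decision
// with the storage accessor and view generator replaced by plain inputs; the names
// here are illustrative, not RavenDB API.
using System;
using System.Collections.Generic;

static class RemoveFromIndexFilterDemo
{
    // Returns true when the queued removal should still happen.
    static bool ShouldStillRemove(bool documentExists, bool indexExists,
                                  ISet<string> forEntityNames, string entityName)
    {
        if (!documentExists) return true;            // document is gone: remove its entry
        if (!indexExists) return false;              // index is gone: nothing to do
        if (forEntityNames.Count == 0) return false; // index covers everything and the doc was re-added
        if (entityName == null) return true;         // doc no longer carries a collection this index wants
        return !forEntityNames.Contains(entityName); // remove only if the doc left the collection
    }

    static void Main()
    {
        var users = new HashSet<string> { "Users" };
        Console.WriteLine(ShouldStillRemove(true, true, users, "Orders")); // True: moved collections
        Console.WriteLine(ShouldStillRemove(true, true, users, "Users"));  // False: re-added, keep it
    }
}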
public bool IndexDocuments(IStorageActionsAccessor actions, string index, Guid etagToIndexFrom)
{
    log.DebugFormat("Indexing documents for {0}, etag to index from: {1}", index, etagToIndexFrom);

    var viewGenerator = context.IndexDefinitionStorage.GetViewGenerator(index);
    if (viewGenerator == null)
        return false; // index was deleted, probably

    var jsonDocs = actions.Documents.GetDocumentsAfter(etagToIndexFrom)
        .Where(x => x != null)
        .Take(10000) // ensure that we won't go overboard with reading and blow up with OOM
        .ToArray();

    if (jsonDocs.Length == 0)
        return false;

    var dateTime = jsonDocs.Select(x => x.LastModified).Min();

    var documentRetriever = new DocumentRetriever(null, context.ReadTriggers);
    try
    {
        log.DebugFormat("Indexing {0} documents for index: {1}", jsonDocs.Length, index);
        context.IndexStorage.Index(index, viewGenerator,
            jsonDocs
                .Select(doc => documentRetriever.ProcessReadVetoes(doc, null, ReadOperation.Index))
                .Where(doc => doc != null)
                .Select(x => JsonToExpando.Convert(x.ToJson())),
            context, actions, dateTime);
        return true;
    }
    catch (Exception e)
    {
        log.WarnFormat(e, "Failed to index documents for index: {0}", index);
        return false;
    }
    finally
    {
        // whether we succeeded in indexing or not, we have to update this
        // because otherwise we keep trying to re-index failed documents
        var last = jsonDocs.Last();
        actions.Indexing.UpdateLastIndexed(index, last.Etag, last.LastModified);
    }
}
public long GetNextIdentityValueWithoutOverwritingOnExistingDocuments(string key, IStorageActionsAccessor actions, TransactionInformation transactionInformation, out int tries)
{
    long nextIdentityValue = actions.General.GetNextIdentityValue(key);

    if (actions.Documents.DocumentMetadataByKey(key + nextIdentityValue, transactionInformation) == null)
    {
        tries = 1;
        return nextIdentityValue;
    }
    tries = 1;
    // there is already a document with this id, this means that we probably need to search
    // for an opening in a potentially large data set.
    var lastKnownBusy = nextIdentityValue;
    var maybeFree = nextIdentityValue * 2;
    var lastKnownFree = long.MaxValue;
    while (true)
    {
        tries++;
        if (actions.Documents.DocumentMetadataByKey(key + maybeFree, transactionInformation) == null)
        {
            if (lastKnownBusy + 1 == maybeFree)
            {
                actions.General.SetIdentityValue(key, maybeFree);
                return maybeFree;
            }
            lastKnownFree = maybeFree;
            maybeFree = Math.Max(maybeFree - (maybeFree - lastKnownBusy) / 2, lastKnownBusy + 1);
        }
        else
        {
            lastKnownBusy = maybeFree;
            maybeFree = Math.Min(lastKnownFree, maybeFree * 2);
        }
    }
}
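// The probe above first doubles maybeFree until it finds an unused id (exponential
// phase), then halves the gap back toward lastKnownBusy (binary-search phase), so a
// run of N occupied ids is skipped in O(log N) probes rather than N. A standalone
// sketch with the storage lookups replaced by a HashSet; all names here are
// illustrative.
using System;
using System.Collections.Generic;

static class IdentitySearchDemo
{
    // 'occupied' stands in for the DocumentMetadataByKey checks above.
    static long FindNextFree(long nextIdentityValue, HashSet<long> occupied, out int tries)
    {
        if (!occupied.Contains(nextIdentityValue))
        {
            tries = 1;
            return nextIdentityValue;
        }

        tries = 1;
        var lastKnownBusy = nextIdentityValue;
        var maybeFree = nextIdentityValue * 2; // exponential probe upward
        var lastKnownFree = long.MaxValue;
        while (true)
        {
            tries++;
            if (!occupied.Contains(maybeFree))
            {
                if (lastKnownBusy + 1 == maybeFree)
                    return maybeFree; // narrowed to the first free slot (the real code also persists it)
                lastKnownFree = maybeFree;
                // binary search down toward the busy boundary
                maybeFree = Math.Max(maybeFree - (maybeFree - lastKnownBusy) / 2, lastKnownBusy + 1);
            }
            else
            {
                lastKnownBusy = maybeFree;
                maybeFree = Math.Min(lastKnownFree, maybeFree * 2);
            }
        }
    }

    static void Main()
    {
        var occupied = new HashSet<long> { 5, 6, 7, 8, 9, 10, 11 };
        int tries;
        long free = FindNextFree(5, occupied, out tries);
        Console.WriteLine("first free: {0} after {1} probes", free, tries); // first free: 12 after 8 probes
    }
}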
public abstract void IndexDocuments(AbstractViewGenerator viewGenerator, IEnumerable<object> documents, WorkContext context, IStorageActionsAccessor actions, DateTime minimumTimestamp);
protected abstract bool IsIndexStale(IndexStats indexesStat, IStorageActionsAccessor actions);
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IndexingBatch batch, WorkContext context, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    var count = 0;
    var sourceCount = 0;
    var sw = Stopwatch.StartNew();
    var start = SystemTime.UtcNow;
    var changed = new HashSet<ReduceKeyAndBucket>();

    var documentsWrapped = batch.Docs.Select(doc =>
    {
        sourceCount++;
        var documentId = doc.__document_id;
        actions.MapReduce.DeleteMappedResultsForDocumentId((string)documentId, name, changed);
        return doc;
    })
    .Where(x => x is FilteredDocument == false);

    var stats = new IndexingWorkStats();
    foreach (var mappedResultFromDocument in GroupByDocumentId(context, RobustEnumerationIndex(documentsWrapped.GetEnumerator(), viewGenerator.MapDefinitions, actions, stats)))
    {
        var dynamicResults = mappedResultFromDocument.Select(x => (object)new DynamicJsonObject(RavenJObject.FromObject(x, jsonSerializer))).ToList();
        foreach (var doc in RobustEnumerationReduceDuringMapPhase(dynamicResults.GetEnumerator(), viewGenerator.ReduceDefinition, actions, context))
        {
            count++;

            var reduceValue = viewGenerator.GroupByExtraction(doc);
            if (reduceValue == null)
            {
                logIndexing.Debug("Field {0} is used as the reduce key and cannot be null, skipping document {1}", viewGenerator.GroupByExtraction, mappedResultFromDocument.Key);
                continue;
            }
            var reduceKey = ReduceKeyToString(reduceValue);
            var docId = mappedResultFromDocument.Key.ToString();

            var data = GetMappedData(doc);

            logIndexing.Debug("Mapped result for index '{0}' doc '{1}': '{2}'", name, docId, data);

            actions.MapReduce.PutMappedResult(name, docId, reduceKey, data);
            changed.Add(new ReduceKeyAndBucket(IndexingUtil.MapBucket(docId), reduceKey));
        }
    }
    UpdateIndexingStats(context, stats);
    actions.MapReduce.ScheduleReductions(name, 0, changed);
    AddindexingPerformanceStat(new IndexingPerformanceStats
    {
        OutputCount = count,
        InputCount = sourceCount,
        Operation = "Map",
        Duration = sw.Elapsed,
        Started = start
    });
    logIndexing.Debug("Mapped {0} documents for {1}", count, name);
}
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IEnumerable<object> documents, WorkContext context, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    var count = 0;
    var sourceCount = 0;
    var sw = Stopwatch.StartNew();
    Write(context, (indexWriter, analyzer, stats) =>
    {
        var processedKeys = new HashSet<string>();
        var batchers = context.IndexUpdateTriggers.Select(x => x.CreateBatcher(name))
            .Where(x => x != null)
            .ToList();
        try
        {
            var documentsWrapped = documents.Select((dynamic doc) =>
            {
                sourceCount++;
                if (doc.__document_id == null)
                    throw new ArgumentException(string.Format("Cannot index something which doesn't have a document id, but got: '{0}'", doc));

                string documentId = doc.__document_id.ToString();
                if (processedKeys.Add(documentId) == false)
                    return doc;
                batchers.ApplyAndIgnoreAllErrors(
                    exception =>
                    {
                        logIndexing.WarnException(
                            string.Format("Error when executed OnIndexEntryDeleted trigger for index '{0}', key: '{1}'", name, documentId),
                            exception);
                        context.AddError(name, documentId, exception.Message);
                    },
                    trigger => trigger.OnIndexEntryDeleted(documentId));
                indexWriter.DeleteDocuments(new Term(Constants.DocumentIdFieldName, documentId.ToLowerInvariant()));
                return doc;
            })
            .Where(x => x is FilteredDocument == false);

            var anonymousObjectToLuceneDocumentConverter = new AnonymousObjectToLuceneDocumentConverter(indexDefinition);
            var luceneDoc = new Document();
            var documentIdField = new Field(Constants.DocumentIdFieldName, "dummy", Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS);
            foreach (var doc in RobustEnumerationIndex(documentsWrapped, viewGenerator.MapDefinitions, actions, context, stats))
            {
                float boost;
                var indexingResult = GetIndexingResult(doc, anonymousObjectToLuceneDocumentConverter, out boost);

                if (indexingResult.NewDocId != null && indexingResult.ShouldSkip == false)
                {
                    count += 1;
                    luceneDoc.GetFields().Clear();
                    luceneDoc.Boost = boost;
                    documentIdField.SetValue(indexingResult.NewDocId.ToLowerInvariant());
                    luceneDoc.Add(documentIdField);
                    foreach (var field in indexingResult.Fields)
                    {
                        luceneDoc.Add(field);
                    }
                    batchers.ApplyAndIgnoreAllErrors(
                        exception =>
                        {
                            logIndexing.WarnException(
                                string.Format("Error when executed OnIndexEntryCreated trigger for index '{0}', key: '{1}'", name, indexingResult.NewDocId),
                                exception);
                            context.AddError(name, indexingResult.NewDocId, exception.Message);
                        },
                        trigger => trigger.OnIndexEntryCreated(indexingResult.NewDocId, luceneDoc));
                    LogIndexedDocument(indexingResult.NewDocId, luceneDoc);
                    AddDocumentToIndex(indexWriter, luceneDoc, analyzer);
                }

                stats.IndexingSuccesses++;
            }
        }
        catch (Exception e)
        {
            batchers.ApplyAndIgnoreAllErrors(
                ex =>
                {
                    logIndexing.WarnException("Failed to notify index update trigger batcher about an error", ex);
                    context.AddError(name, null, ex.Message);
                },
                x => x.AnErrorOccured(e));
            throw;
        }
        finally
        {
            batchers.ApplyAndIgnoreAllErrors(
                e =>
                {
                    logIndexing.WarnException("Failed to dispose on index update trigger", e);
                    context.AddError(name, null, e.Message);
                },
                x => x.Dispose());
        }
        return sourceCount;
    });
    AddindexingPerformanceStat(new IndexingPerformanceStats
    {
        OutputCount = count,
        InputCount = sourceCount,
        Duration = sw.Elapsed,
        Operation = "Index"
    });
    logIndexing.Debug("Indexed {0} documents for {1}", count, name);
}
private void SaveSynchronizationReport(string fileName, IStorageActionsAccessor accessor, SynchronizationReport report)
{
    var name = RavenFileNameHelper.SyncResultNameForFile(fileName);
    accessor.SetConfig(name, JsonExtensions.ToJObject(report));
}
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IndexingBatch batch, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    var count = 0;
    var sourceCount = 0;
    var sw = Stopwatch.StartNew();
    var start = SystemTime.UtcNow;
    var deleted = new Dictionary<ReduceKeyAndBucket, int>();
    RecordCurrentBatch("Current Map", batch.Docs.Count);
    var documentsWrapped = batch.Docs.Select(doc =>
    {
        sourceCount++;
        var documentId = doc.__document_id;
        actions.MapReduce.DeleteMappedResultsForDocumentId((string)documentId, name, deleted);
        return doc;
    })
    .Where(x => x is FilteredDocument == false)
    .ToList();

    var allReferencedDocs = new ConcurrentQueue<IDictionary<string, HashSet<string>>>();

    if (documentsWrapped.Count > 0)
        actions.MapReduce.UpdateRemovedMapReduceStats(name, deleted);

    var allState = new ConcurrentQueue<Tuple<HashSet<ReduceKeyAndBucket>, IndexingWorkStats, Dictionary<string, int>>>();
    BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, documentsWrapped, partition =>
    {
        var localStats = new IndexingWorkStats();
        var localChanges = new HashSet<ReduceKeyAndBucket>();
        var statsPerKey = new Dictionary<string, int>();
        allState.Enqueue(Tuple.Create(localChanges, localStats, statsPerKey));

        using (CurrentIndexingScope.Current = new CurrentIndexingScope(LoadDocument, allReferencedDocs.Enqueue))
        {
            // we are writing to the transactional store from multiple threads here, and in a streaming fashion
            // should result in less memory and better perf
            context.TransactionalStorage.Batch(accessor =>
            {
                var mapResults = RobustEnumerationIndex(partition, viewGenerator.MapDefinitions, localStats);
                var currentDocumentResults = new List<object>();
                string currentKey = null;
                foreach (var currentDoc in mapResults)
                {
                    var documentId = GetDocumentId(currentDoc);
                    if (documentId != currentKey)
                    {
                        count += ProcessBatch(viewGenerator, currentDocumentResults, currentKey, localChanges, accessor, statsPerKey);
                        currentDocumentResults.Clear();
                        currentKey = documentId;
                    }
                    currentDocumentResults.Add(new DynamicJsonObject(RavenJObject.FromObject(currentDoc, jsonSerializer)));
                    Interlocked.Increment(ref localStats.IndexingSuccesses);
                }
                count += ProcessBatch(viewGenerator, currentDocumentResults, currentKey, localChanges, accessor, statsPerKey);
            });
        }
    });

    IDictionary<string, HashSet<string>> result;
    while (allReferencedDocs.TryDequeue(out result))
    {
        foreach (var referencedDocument in result)
        {
            actions.Indexing.UpdateDocumentReferences(name, referencedDocument.Key, referencedDocument.Value);
            actions.General.MaybePulseTransaction();
        }
    }

    var changed = allState.SelectMany(x => x.Item1).Concat(deleted.Keys)
        .Distinct()
        .ToList();

    var stats = new IndexingWorkStats(allState.Select(x => x.Item2));
    var reduceKeyStats = allState.SelectMany(x => x.Item3)
        .GroupBy(x => x.Key)
        .Select(g => new { g.Key, Count = g.Sum(x => x.Value) })
        .ToList();

    BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, reduceKeyStats, enumerator => context.TransactionalStorage.Batch(accessor =>
    {
        while (enumerator.MoveNext())
        {
            var reduceKeyStat = enumerator.Current;
            accessor.MapReduce.IncrementReduceKeyCounter(name, reduceKeyStat.Key, reduceKeyStat.Count);
        }
    }));

    BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, changed, enumerator => context.TransactionalStorage.Batch(accessor =>
    {
        while (enumerator.MoveNext())
        {
            accessor.MapReduce.ScheduleReductions(name, 0, enumerator.Current);
        }
    }));

    UpdateIndexingStats(context, stats);
    AddindexingPerformanceStat(new IndexingPerformanceStats
    {
        OutputCount = count,
        ItemsCount = sourceCount,
        InputCount = documentsWrapped.Count,
        Operation = "Map",
        Duration = sw.Elapsed,
        Started = start
    });
    BatchCompleted("Current Map");
    logIndexing.Debug("Mapped {0} documents for {1}", count, name);
}
private void ReplicateDocument(IStorageActionsAccessor actions, string id, RavenJObject metadata, RavenJObject document, string src)
{
    var existingDoc = actions.Documents.DocumentByKey(id, null);
    if (existingDoc == null)
    {
        log.Debug("New document {0} replicated successfully from {1}", id, src);
        actions.Documents.AddDocument(id, Guid.Empty, document, metadata);
        return;
    }

    var existingDocumentIsInConflict = existingDoc.Metadata[ReplicationConstants.RavenReplicationConflict] != null;
    if (existingDocumentIsInConflict == false && // if the current document is not in conflict, we can continue without having to keep conflict semantics
        (IsDirectChildOfCurrentDocument(existingDoc, metadata))) // this update is direct child of the existing doc, so we are fine with overwriting this
    {
        log.Debug("Existing document {0} replicated successfully from {1}", id, src);
        actions.Documents.AddDocument(id, null, document, metadata);
        return;
    }

    if (ReplicationConflictResolvers.Any(replicationConflictResolver => replicationConflictResolver.TryResolve(id, metadata, document, existingDoc)))
    {
        actions.Documents.AddDocument(id, null, document, metadata);
        return;
    }

    var newDocumentConflictId = id + "/conflicts/" + metadata.Value<string>(ReplicationConstants.RavenReplicationSource) + "/" + metadata.Value<string>("@etag");
    metadata.Add(ReplicationConstants.RavenReplicationConflict, RavenJToken.FromObject(true));
    actions.Documents.AddDocument(newDocumentConflictId, null, document, metadata);

    if (existingDocumentIsInConflict) // the existing document is in conflict
    {
        log.Debug("Conflicted document {0} has a new version from {1}, adding to conflicted documents", id, src);
        // just update the current doc with the new conflict document
        existingDoc.DataAsJson.Value<RavenJArray>("Conflicts").Add(RavenJToken.FromObject(newDocumentConflictId));
        actions.Documents.AddDocument(id, existingDoc.Etag, existingDoc.DataAsJson, existingDoc.Metadata);
        return;
    }

    log.Debug("Existing document {0} is in conflict with replicated version from {1}, marking document as conflicted", id, src);
    // we have a new conflict
    // move the existing doc to a conflict and create a conflict document
    var existingDocumentConflictId = id + "/conflicts/" + Database.TransactionalStorage.Id + "/" + existingDoc.Etag;
    existingDoc.Metadata.Add(ReplicationConstants.RavenReplicationConflict, RavenJToken.FromObject(true));
    actions.Documents.AddDocument(existingDocumentConflictId, null, existingDoc.DataAsJson, existingDoc.Metadata);
    actions.Documents.AddDocument(id, null,
        new RavenJObject
        {
            { "Conflicts", new RavenJArray(existingDocumentConflictId, newDocumentConflictId) }
        },
        new RavenJObject
        {
            { ReplicationConstants.RavenReplicationConflict, true },
            { "@Http-Status-Code", 409 },
            { "@Http-Status-Description", "Conflict" }
        });
}
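// When neither the fast path nor a conflict resolver settles the incoming version,
// both versions are preserved under deterministic conflict ids and the original key
// is overwritten with a marker listing them. A sketch of the resulting layout, using
// Newtonsoft.Json.Linq as a stand-in for Raven.Json.Linq; every value below is
// invented purely for illustration.
using System;
using Newtonsoft.Json.Linq;

static class ConflictLayoutDemo
{
    static void Main()
    {
        var id = "users/1";
        var replicationSource = "source-server-a";  // metadata[RavenReplicationSource]
        var incomingEtag = "00000000-0000-0100-0000-000000000042";
        var localStorageId = "local-storage-b";     // Database.TransactionalStorage.Id
        var existingEtag = "00000000-0000-0100-0000-000000000017";

        // Each version survives under a deterministic conflict id.
        var newDocumentConflictId = id + "/conflicts/" + replicationSource + "/" + incomingEtag;
        var existingDocumentConflictId = id + "/conflicts/" + localStorageId + "/" + existingEtag;

        // The document at the original key becomes a marker; its metadata carries a
        // 409 status so clients know to resolve by picking one of the listed versions.
        var conflictMarker = new JObject
        {
            ["Conflicts"] = new JArray(existingDocumentConflictId, newDocumentConflictId)
        };

        Console.WriteLine(conflictMarker.ToString());
    }
}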
protected override Task GetApplicableTask(IStorageActionsAccessor actions)
{
    return (Task)actions.Tasks.GetMergedTask<RemoveFromIndexTask>() ??
           actions.Tasks.GetMergedTask<TouchMissingReferenceDocumentTask>();
}
internal void CheckReferenceBecauseOfDocumentUpdate(string key, IStorageActionsAccessor actions)
{
    TouchedDocumentInfo touch;
    RecentTouches.TryRemove(key, out touch);

    foreach (var referencing in actions.Indexing.GetDocumentsReferencing(key))
    {
        Etag preTouchEtag = null;
        Etag afterTouchEtag = null;
        try
        {
            actions.Documents.TouchDocument(referencing, out preTouchEtag, out afterTouchEtag);

            var docMetadata = actions.Documents.DocumentMetadataByKey(referencing);
            if (docMetadata != null)
            {
                var entityName = docMetadata.Metadata.Value<string>(Constants.RavenEntityName);
                if (string.IsNullOrEmpty(entityName) == false)
                    Database.LastCollectionEtags.Update(entityName, afterTouchEtag);
            }
        }
        catch (ConcurrencyException)
        {
        }

        if (preTouchEtag == null || afterTouchEtag == null)
            continue;

        actions.General.MaybePulseTransaction();

        RecentTouches.Set(referencing, new TouchedDocumentInfo
        {
            PreTouchEtag = preTouchEtag,
            TouchedEtag = afterTouchEtag
        });
    }
}
protected override Task GetApplicableTask(IStorageActionsAccessor actions) { return actions.Tasks.GetMergedTask<ReduceTask>(); }
private void HandleIdleIndex(double age, double lastQuery, UnusedIndexState thisItem, IStorageActionsAccessor accessor)
{
    // relatively young index that hasn't been queried for a while already
    // can be safely removed, probably
    if (age < 90 && lastQuery > 30)
    {
        accessor.Indexing.DeleteIndex(thisItem.Index.indexId, documentDatabase.WorkContext.CancellationToken);
        return;
    }

    if (lastQuery < configuration.TimeToWaitBeforeMarkingIdleIndexAsAbandoned.TotalMinutes)
        return;

    // old enough, and hasn't been queried for a while, so mark it as abandoned
    accessor.Indexing.SetIndexPriority(thisItem.Index.indexId, IndexingPriority.Abandoned);
    thisItem.Index.Priority = IndexingPriority.Abandoned;

    documentDatabase.Notifications.RaiseNotifications(new IndexChangeNotification
    {
        Name = thisItem.Name,
        Type = IndexChangeTypes.IndexDemotedToAbandoned
    });
}
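// The thresholds above split unused indexes into two fates: a young index that has
// gone unqueried is deleted outright, while an older one is only demoted to Abandoned
// after TimeToWaitBeforeMarkingIdleIndexAsAbandoned elapses. A compact sketch of that
// decision; treating age and lastQuery as minutes is an assumption inferred from the
// .TotalMinutes comparison, not confirmed by the source.
using System;

static class IdleIndexDecisionDemo
{
    // Returns what HandleIdleIndex would do, with storage and notifications stripped out.
    static string Decide(double ageMinutes, double minutesSinceLastQuery, double abandonThresholdMinutes)
    {
        if (ageMinutes < 90 && minutesSinceLastQuery > 30)
            return "delete";          // young index nobody queries: safe to remove

        if (minutesSinceLastQuery < abandonThresholdMinutes)
            return "keep as idle";    // still being queried often enough

        return "mark abandoned";      // old and unqueried: demote, don't delete
    }

    static void Main()
    {
        Console.WriteLine(Decide(45, 40, 60));   // delete
        Console.WriteLine(Decide(200, 40, 60));  // keep as idle
        Console.WriteLine(Decide(200, 90, 60));  // mark abandoned
    }
}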
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IEnumerable<dynamic> documents, WorkContext context, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    var count = 0;

    // we mark the reduce keys to delete when we delete the mapped results, then we remove
    // any reduce key that is actually being used to generate new mapped results
    // this way, only reduces that removed data will force us to use the tasks approach
    var reduceKeysToDelete = new HashSet<string>(StringComparer.InvariantCultureIgnoreCase);
    var documentsWrapped = documents.Select(doc =>
    {
        var documentId = doc.__document_id;
        foreach (var reduceKey in actions.MappedResults.DeleteMappedResultsForDocumentId((string)documentId, name))
        {
            reduceKeysToDelete.Add(reduceKey);
        }
        return doc;
    });

    var stats = new IndexingWorkStats();
    foreach (var mappedResultFromDocument in GroupByDocumentId(context, RobustEnumerationIndex(documentsWrapped, viewGenerator.MapDefinitions, actions, context, stats)))
    {
        foreach (var doc in RobustEnumerationReduceDuringMapPhase(mappedResultFromDocument, viewGenerator.ReduceDefinition, actions, context))
        {
            count++;

            var reduceValue = viewGenerator.GroupByExtraction(doc);
            if (reduceValue == null)
            {
                logIndexing.Debug("Field {0} is used as the reduce key and cannot be null, skipping document {1}", viewGenerator.GroupByExtraction, mappedResultFromDocument.Key);
                continue;
            }
            var reduceKey = ReduceKeyToString(reduceValue);
            var docId = mappedResultFromDocument.Key.ToString();

            reduceKeysToDelete.Remove((string)reduceKey);

            var data = GetMappedData(doc);

            logIndexing.Debug("Mapped result for index '{0}' doc '{1}': '{2}'", name, docId, data);

            var hash = ComputeHash(name, reduceKey);

            actions.MappedResults.PutMappedResult(name, docId, reduceKey, data, hash);
        }
    }
    UpdateIndexingStats(context, stats);

    if (reduceKeysToDelete.Count > 0)
    {
        actions.Tasks.AddTask(new ReduceTask
        {
            Index = name,
            ReduceKeys = reduceKeysToDelete.ToArray()
        }, minimumTimestamp);
    }

    logIndexing.Debug("Mapped {0} documents for {1}", count, name);
}
// This method may be called concurrently, by both the ReduceTask (for removal)
// and by the ReducingExecuter (for add/modify). This is okay with us, since the
// Write() call is already handling locking properly
public void ReduceDocuments(AbstractViewGenerator viewGenerator, IEnumerable<object> mappedResults, WorkContext context, IStorageActionsAccessor actions, string[] reduceKeys)
{
    var count = 0;
    Write(context, (indexWriter, analyzer, stats) =>
    {
        stats.Operation = IndexingWorkStats.Status.Reduce;
        var batchers = context.IndexUpdateTriggers.Select(x => x.CreateBatcher(name))
            .Where(x => x != null)
            .ToList();
        foreach (var reduceKey in reduceKeys)
        {
            var entryKey = reduceKey;
            indexWriter.DeleteDocuments(new Term(Constants.ReduceKeyFieldName, entryKey.ToLowerInvariant()));
            batchers.ApplyAndIgnoreAllErrors(
                exception =>
                {
                    logIndexing.WarnException(
                        string.Format("Error when executed OnIndexEntryDeleted trigger for index '{0}', key: '{1}'", name, entryKey),
                        exception);
                    context.AddError(name, entryKey, exception.Message);
                },
                trigger => trigger.OnIndexEntryDeleted(entryKey));
        }
        PropertyDescriptorCollection properties = null;
        var anonymousObjectToLuceneDocumentConverter = new AnonymousObjectToLuceneDocumentConverter(indexDefinition);
        var luceneDoc = new Document();
        var reduceKeyField = new Field(Constants.ReduceKeyFieldName, "dummy", Field.Store.NO, Field.Index.NOT_ANALYZED_NO_NORMS);
        foreach (var doc in RobustEnumerationReduce(mappedResults, viewGenerator.ReduceDefinition, actions, context, stats))
        {
            count++;
            float boost;
            var fields = GetFields(anonymousObjectToLuceneDocumentConverter, doc, ref properties, out boost).ToList();

            string reduceKeyAsString = ExtractReduceKey(viewGenerator, doc);
            reduceKeyField.SetValue(reduceKeyAsString.ToLowerInvariant());

            luceneDoc.GetFields().Clear();
            luceneDoc.SetBoost(boost);
            luceneDoc.Add(reduceKeyField);
            foreach (var field in fields)
            {
                luceneDoc.Add(field);
            }

            batchers.ApplyAndIgnoreAllErrors(
                exception =>
                {
                    logIndexing.WarnException(
                        string.Format("Error when executed OnIndexEntryCreated trigger for index '{0}', key: '{1}'", name, reduceKeyAsString),
                        exception);
                    context.AddError(name, reduceKeyAsString, exception.Message);
                },
                trigger => trigger.OnIndexEntryCreated(reduceKeyAsString, luceneDoc));
            LogIndexedDocument(reduceKeyAsString, luceneDoc);
            AddDocumentToIndex(indexWriter, luceneDoc, analyzer);
            stats.ReduceSuccesses++;
        }
        batchers.ApplyAndIgnoreAllErrors(
            e =>
            {
                logIndexing.WarnException("Failed to dispose on index update trigger", e);
                context.AddError(name, null, e.Message);
            },
            x => x.Dispose());
        return count + reduceKeys.Length;
    });
    logIndexing.Debug(() => string.Format("Reduce resulted in {0} entries for {1} for reduce keys: {2}", count, name, string.Join(", ", reduceKeys)));
}
protected override bool IsIndexStale(IndexStats indexesStat, Etag synchronizationEtag, IStorageActionsAccessor actions, bool isIdle, Reference<bool> onlyFoundIdleWork)
{
    if (indexesStat.LastIndexedEtag.CompareTo(synchronizationEtag) > 0)
        return true;

    var isStale = actions.Staleness.IsMapStale(indexesStat.Name);
    var indexingPriority = indexesStat.Priority;
    if (isStale == false)
        return false;

    if (indexingPriority == IndexingPriority.None)
        return true;

    if (indexingPriority.HasFlag(IndexingPriority.Normal))
    {
        onlyFoundIdleWork.Value = false;
        return true;
    }

    if (indexingPriority.HasFlag(IndexingPriority.Disabled))
        return false;

    if (isIdle == false)
        return false; // everything else is only valid on idle runs

    if (indexingPriority.HasFlag(IndexingPriority.Idle))
        return true;

    if (indexingPriority.HasFlag(IndexingPriority.Abandoned))
    {
        var timeSinceLastIndexing = (SystemTime.UtcNow - indexesStat.LastIndexingTime);
        return (timeSinceLastIndexing > context.Configuration.TimeToWaitBeforeRunningAbandonedIndexes);
    }

    throw new InvalidOperationException("Unknown indexing priority for index " + indexesStat.Name + ": " + indexesStat.Priority);
}
protected abstract Task GetApplicableTask(IStorageActionsAccessor actions);
public static bool IsVersioningActive(this IStorageActionsAccessor accessor, string filePath)
{
    var versioningConfiguration = GetVersioningConfiguration(accessor, filePath);
    return versioningConfiguration != null && versioningConfiguration.Exclude == false;
}
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IEnumerable<object> documents, WorkContext context, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    actions.Indexing.SetCurrentIndexStatsTo(name);
    var count = 0;
    Write(context, (indexWriter, analyzer) =>
    {
        bool madeChanges = false;
        PropertyDescriptorCollection properties = null;
        var processedKeys = new HashSet<string>();
        var batchers = context.IndexUpdateTriggers.Select(x => x.CreateBatcher(name))
            .Where(x => x != null)
            .ToList();
        var documentsWrapped = documents.Select((dynamic doc) =>
        {
            if (doc.__document_id == null)
                throw new ArgumentException("Cannot index something which doesn't have a document id, but got: " + doc);

            string documentId = doc.__document_id.ToString();
            if (processedKeys.Add(documentId) == false)
                return doc;
            madeChanges = true;
            batchers.ApplyAndIgnoreAllErrors(
                exception =>
                {
                    logIndexing.WarnException(
                        string.Format("Error when executed OnIndexEntryDeleted trigger for index '{0}', key: '{1}'", name, documentId),
                        exception);
                    context.AddError(name, documentId, exception.Message);
                },
                trigger => trigger.OnIndexEntryDeleted(documentId));
            indexWriter.DeleteDocuments(new Term(Constants.DocumentIdFieldName, documentId.ToLowerInvariant()));
            return doc;
        });
        var anonymousObjectToLuceneDocumentConverter = new AnonymousObjectToLuceneDocumentConverter(indexDefinition);
        var luceneDoc = new Document();
        var documentIdField = new Field(Constants.DocumentIdFieldName, "dummy", Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS);
        foreach (var doc in RobustEnumerationIndex(documentsWrapped, viewGenerator.MapDefinition, actions, context))
        {
            count++;

            IndexingResult indexingResult;
            if (doc is DynamicJsonObject)
                indexingResult = ExtractIndexDataFromDocument(anonymousObjectToLuceneDocumentConverter, (DynamicJsonObject)doc);
            else
                indexingResult = ExtractIndexDataFromDocument(anonymousObjectToLuceneDocumentConverter, properties, doc);

            if (indexingResult.NewDocId != null && indexingResult.ShouldSkip == false)
            {
                madeChanges = true;
                luceneDoc.GetFields().Clear();
                documentIdField.SetValue(indexingResult.NewDocId.ToLowerInvariant());
                luceneDoc.Add(documentIdField);
                foreach (var field in indexingResult.Fields)
                {
                    luceneDoc.Add(field);
                }
                batchers.ApplyAndIgnoreAllErrors(
                    exception =>
                    {
                        logIndexing.WarnException(
                            string.Format("Error when executed OnIndexEntryCreated trigger for index '{0}', key: '{1}'", name, indexingResult.NewDocId),
                            exception);
                        context.AddError(name, indexingResult.NewDocId, exception.Message);
                    },
                    trigger => trigger.OnIndexEntryCreated(indexingResult.NewDocId, luceneDoc));
                logIndexing.Debug("Index '{0}' resulted in: {1}", name, luceneDoc);
                AddDocumentToIndex(indexWriter, luceneDoc, analyzer);
            }

            actions.Indexing.IncrementSuccessIndexing();
        }
        batchers.ApplyAndIgnoreAllErrors(
            e =>
            {
                logIndexing.WarnException("Failed to dispose on index update trigger", e);
                context.AddError(name, null, e.Message);
            },
            x => x.Dispose());
        return madeChanges;
    });
    logIndexing.Debug("Indexed {0} documents for {1}", count, name);
}
internal void DeleteDocumentFromIndexesForCollection(string key, string collection, IStorageActionsAccessor actions)
{
    foreach (var indexName in IndexDefinitionStorage.IndexNames)
    {
        AbstractViewGenerator abstractViewGenerator = IndexDefinitionStorage.GetViewGenerator(indexName);
        if (abstractViewGenerator == null)
            continue;

        if (collection != null && // the document has an entity name
            abstractViewGenerator.ForEntityNames.Count > 0) // the index operates on specific entities
        {
            if (abstractViewGenerator.ForEntityNames.Contains(collection) == false)
                continue;
        }

        var instance = IndexDefinitionStorage.GetIndexDefinition(indexName);
        var task = actions.GetTask(x => x.Index == instance.IndexId, new RemoveFromIndexTask
        {
            Index = instance.IndexId
        });
        task.Keys.Add(key);
    }
}
internal void CheckReferenceBecauseOfDocumentUpdate(string key, IStorageActionsAccessor actions, string[] participatingIds = null)
{
    TouchedDocumentInfo touch;
    RecentTouches.TryRemove(key, out touch);
    Stopwatch sp = null;
    int count = 0;

    using (Database.TransactionalStorage.DisableBatchNesting())
    {
        // in an external transaction, the number of references will be >= the references in the current transaction
        Database.TransactionalStorage.Batch(externalActions =>
        {
            var referencingKeys = externalActions.Indexing.GetDocumentsReferencing(key);
            if (participatingIds != null)
                referencingKeys = referencingKeys.Except(participatingIds);

            foreach (var referencing in referencingKeys)
            {
                Etag preTouchEtag = null;
                Etag afterTouchEtag = null;
                try
                {
                    count++;
                    actions.Documents.TouchDocument(referencing, out preTouchEtag, out afterTouchEtag);

                    if (afterTouchEtag != null)
                    {
                        var docMetadata = actions.Documents.DocumentMetadataByKey(referencing);
                        if (docMetadata != null)
                        {
                            var entityName = docMetadata.Metadata.Value<string>(Constants.RavenEntityName);
                            if (string.IsNullOrEmpty(entityName) == false)
                                Database.LastCollectionEtags.Update(entityName, afterTouchEtag);
                        }
                    }
                }
                catch (ConcurrencyException)
                {
                }

                if (preTouchEtag == null || afterTouchEtag == null)
                    continue;

                if (actions.General.MaybePulseTransaction())
                {
                    if (sp == null)
                        sp = Stopwatch.StartNew();
                    if (sp.Elapsed >= TimeSpan.FromSeconds(30))
                    {
                        throw new TimeoutException("Early failure when checking references for document '" + key + "', we waited over 30 seconds to touch all of the documents referenced by this document.\r\n" +
                                                   "The operation (and transaction) has been aborted, since trying longer (we already touched " + count + " documents) risks a thread abort.\r\n" +
                                                   "Consider restructuring your indexes to avoid LoadDocument on such a popular document.");
                    }
                }

                RecentTouches.Set(referencing, new TouchedDocumentInfo
                {
                    PreTouchEtag = preTouchEtag,
                    TouchedEtag = afterTouchEtag
                });
            }
        });
    }
}
private void MarkIndexes(IndexToWorkOn indexToWorkOn, ComparableByteArray lastIndexedEtag, IStorageActionsAccessor actions, Guid lastEtag, DateTime lastModified)
{
    if (new ComparableByteArray(indexToWorkOn.LastIndexedEtag.ToByteArray()).CompareTo(lastIndexedEtag) > 0)
        return;

    actions.Indexing.UpdateLastIndexed(indexToWorkOn.IndexName, lastEtag, lastModified);
}
public ReduceDocuments(MapReduceIndex parent, AbstractViewGenerator viewGenerator, IEnumerable<IGrouping<int, object>> mappedResultsByBucket, int level, WorkContext context, IStorageActionsAccessor actions, HashSet<string> reduceKeys, int inputCount)
{
    this.parent = parent;
    this.inputCount = inputCount;
    indexId = this.parent.indexId;
    ViewGenerator = viewGenerator;
    MappedResultsByBucket = mappedResultsByBucket;
    Level = level;
    Context = context;
    Actions = actions;
    ReduceKeys = reduceKeys;

    anonymousObjectToLuceneDocumentConverter = new AnonymousObjectToLuceneDocumentConverter(this.parent.context.Database, this.parent.indexDefinition, ViewGenerator, logIndexing);

    if (Level == 2)
    {
        batchers = Context.IndexUpdateTriggers.Select(x => x.CreateBatcher(indexId))
            .Where(x => x != null)
            .ToList();
    }
}
protected abstract bool IsIndexStale(IndexStats indexesStat, Etag synchronizationEtag, IStorageActionsAccessor actions, bool isIdle, Reference<bool> onlyFoundIdleWork);
public override IndexingPerformanceStats IndexDocuments(AbstractViewGenerator viewGenerator, IndexingBatch batch, IStorageActionsAccessor actions, DateTime minimumTimestamp, CancellationToken token)
{
    token.ThrowIfCancellationRequested();

    var count = 0;
    var sourceCount = batch.Docs.Count;
    var deleted = new Dictionary<ReduceKeyAndBucket, int>();
    var performance = RecordCurrentBatch("Current Map", "Map", batch.Docs.Count);
    var performanceStats = new List<BasePerformanceStats>();

    var usedStorageAccessors = new ConcurrentSet<IStorageActionsAccessor>();

    if (usedStorageAccessors.TryAdd(actions))
    {
        var storageCommitDuration = new Stopwatch();

        actions.BeforeStorageCommit += storageCommitDuration.Start;

        actions.AfterStorageCommit += () =>
        {
            storageCommitDuration.Stop();
            performanceStats.Add(PerformanceStats.From(IndexingOperation.StorageCommit, storageCommitDuration.ElapsedMilliseconds));
        };
    }

    List<dynamic> documentsWrapped;
    if (actions.MapReduce.HasMappedResultsForIndex(indexId) == false)
    {
        // new index
        documentsWrapped = batch.Docs.Where(x => x is FilteredDocument == false).ToList();
    }
    else
    {
        var deleteMappedResultsDuration = new Stopwatch();
        documentsWrapped = batch.Docs.Select(doc =>
        {
            token.ThrowIfCancellationRequested();

            var documentId = doc.__document_id;

            using (StopwatchScope.For(deleteMappedResultsDuration))
            {
                actions.MapReduce.DeleteMappedResultsForDocumentId((string)documentId, indexId, deleted);
            }

            return doc;
        })
        .Where(x => x is FilteredDocument == false)
        .ToList();

        performanceStats.Add(new PerformanceStats
        {
            Name = IndexingOperation.Map_DeleteMappedResults,
            DurationMs = deleteMappedResultsDuration.ElapsedMilliseconds,
        });
    }

    var allReferencedDocs = new ConcurrentQueue<IDictionary<string, HashSet<string>>>();
    var allReferenceEtags = new ConcurrentQueue<IDictionary<string, Etag>>();
    var allState = new ConcurrentQueue<Tuple<HashSet<ReduceKeyAndBucket>, IndexingWorkStats, Dictionary<string, int>>>();

    var parallelOperations = new ConcurrentQueue<ParallelBatchStats>();

    var parallelProcessingStart = SystemTime.UtcNow;

    BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, documentsWrapped, partition =>
    {
        token.ThrowIfCancellationRequested();
        var parallelStats = new ParallelBatchStats
        {
            StartDelay = (long)(SystemTime.UtcNow - parallelProcessingStart).TotalMilliseconds
        };

        var localStats = new IndexingWorkStats();
        var localChanges = new HashSet<ReduceKeyAndBucket>();
        var statsPerKey = new Dictionary<string, int>();

        var linqExecutionDuration = new Stopwatch();
        var reduceInMapLinqExecutionDuration = new Stopwatch();
        var putMappedResultsDuration = new Stopwatch();
        var convertToRavenJObjectDuration = new Stopwatch();

        allState.Enqueue(Tuple.Create(localChanges, localStats, statsPerKey));

        using (CurrentIndexingScope.Current = new CurrentIndexingScope(context.Database, PublicName))
        {
            // we are writing to the transactional store from multiple threads here, and in a streaming fashion
            // should result in less memory and better perf
            context.TransactionalStorage.Batch(accessor =>
            {
                if (usedStorageAccessors.TryAdd(accessor))
                {
                    var storageCommitDuration = new Stopwatch();

                    accessor.BeforeStorageCommit += storageCommitDuration.Start;

                    accessor.AfterStorageCommit += () =>
                    {
                        storageCommitDuration.Stop();
                        parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.StorageCommit, storageCommitDuration.ElapsedMilliseconds));
                    };
                }

                var mapResults = RobustEnumerationIndex(partition, viewGenerator.MapDefinitions, localStats, linqExecutionDuration);
                var currentDocumentResults = new List<object>();
                string currentKey = null;
                bool skipDocument = false;

                foreach (var currentDoc in mapResults)
                {
                    token.ThrowIfCancellationRequested();

                    var documentId = GetDocumentId(currentDoc);
                    if (documentId != currentKey)
                    {
                        count += ProcessBatch(viewGenerator, currentDocumentResults, currentKey, localChanges, accessor, statsPerKey, reduceInMapLinqExecutionDuration, putMappedResultsDuration, convertToRavenJObjectDuration);

                        currentDocumentResults.Clear();
                        currentKey = documentId;
                    }
                    else if (skipDocument)
                    {
                        continue;
                    }

                    RavenJObject currentDocJObject;
                    using (StopwatchScope.For(convertToRavenJObjectDuration))
                    {
                        currentDocJObject = RavenJObject.FromObject(currentDoc, jsonSerializer);
                    }

                    currentDocumentResults.Add(new DynamicJsonObject(currentDocJObject));

                    if (EnsureValidNumberOfOutputsForDocument(documentId, currentDocumentResults.Count) == false)
                    {
                        skipDocument = true;
                        currentDocumentResults.Clear();
                        continue;
                    }

                    Interlocked.Increment(ref localStats.IndexingSuccesses);
                }
                count += ProcessBatch(viewGenerator, currentDocumentResults, currentKey, localChanges, accessor, statsPerKey, reduceInMapLinqExecutionDuration, putMappedResultsDuration, convertToRavenJObjectDuration);

                parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.LoadDocument, CurrentIndexingScope.Current.LoadDocumentDuration.ElapsedMilliseconds));
                parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.Linq_MapExecution, linqExecutionDuration.ElapsedMilliseconds));
                parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.Linq_ReduceLinqExecution, reduceInMapLinqExecutionDuration.ElapsedMilliseconds));
                parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.Map_PutMappedResults, putMappedResultsDuration.ElapsedMilliseconds));
                parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.Map_ConvertToRavenJObject, convertToRavenJObjectDuration.ElapsedMilliseconds));

                parallelOperations.Enqueue(parallelStats);
            });

            allReferenceEtags.Enqueue(CurrentIndexingScope.Current.ReferencesEtags);
            allReferencedDocs.Enqueue(CurrentIndexingScope.Current.ReferencedDocuments);
        }
    });

    performanceStats.Add(new ParallelPerformanceStats
    {
        NumberOfThreads = parallelOperations.Count,
        DurationMs = (long)(SystemTime.UtcNow - parallelProcessingStart).TotalMilliseconds,
        BatchedOperations = parallelOperations.ToList()
    });

    var updateDocumentReferencesDuration = new Stopwatch();
    using (StopwatchScope.For(updateDocumentReferencesDuration))
    {
        UpdateDocumentReferences(actions, allReferencedDocs, allReferenceEtags);
    }
    performanceStats.Add(PerformanceStats.From(IndexingOperation.UpdateDocumentReferences, updateDocumentReferencesDuration.ElapsedMilliseconds));

    var changed = allState.SelectMany(x => x.Item1).Concat(deleted.Keys)
        .Distinct()
        .ToList();

    var stats = new IndexingWorkStats(allState.Select(x => x.Item2));
    var reduceKeyStats = allState.SelectMany(x => x.Item3)
        .GroupBy(x => x.Key)
        .Select(g => new { g.Key, Count = g.Sum(x => x.Value) })
        .ToList();

    var reduceKeyToCount = new ConcurrentDictionary<string, int>();
    foreach (var singleDeleted in deleted)
    {
        var reduceKey = singleDeleted.Key.ReduceKey;
        reduceKeyToCount[reduceKey] = reduceKeyToCount.GetOrDefault(reduceKey) + singleDeleted.Value;
    }

    BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, reduceKeyStats, enumerator => context.TransactionalStorage.Batch(accessor =>
    {
        while (enumerator.MoveNext())
        {
            var reduceKeyStat = enumerator.Current;
            var value = 0;
            reduceKeyToCount.TryRemove(reduceKeyStat.Key, out value);

            var changeValue = reduceKeyStat.Count - value;
            if (changeValue == 0)
            {
                // nothing to change
                continue;
            }

            accessor.MapReduce.IncrementReduceKeyCounter(indexId, reduceKeyStat.Key, changeValue);
        }
    }));

    foreach (var keyValuePair in reduceKeyToCount)
    {
        // those are the remaining keys that weren't used,
        // reduce keys that were replaced
        actions.MapReduce.IncrementReduceKeyCounter(indexId, keyValuePair.Key, -keyValuePair.Value);
    }

    actions.General.MaybePulseTransaction();

    var parallelReductionOperations = new ConcurrentQueue<ParallelBatchStats>();
    var parallelReductionStart = SystemTime.UtcNow;

    BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, changed, enumerator => context.TransactionalStorage.Batch(accessor =>
    {
        var parallelStats = new ParallelBatchStats
        {
            StartDelay = (long)(SystemTime.UtcNow - parallelReductionStart).TotalMilliseconds
        };

        var scheduleReductionsDuration = new Stopwatch();

        using (StopwatchScope.For(scheduleReductionsDuration))
        {
            while (enumerator.MoveNext())
            {
                accessor.MapReduce.ScheduleReductions(indexId, 0, enumerator.Current);
                accessor.General.MaybePulseTransaction();
            }
        }

        parallelStats.Operations.Add(PerformanceStats.From(IndexingOperation.Map_ScheduleReductions, scheduleReductionsDuration.ElapsedMilliseconds));
        parallelReductionOperations.Enqueue(parallelStats);
    }));

    performanceStats.Add(new ParallelPerformanceStats
    {
        NumberOfThreads = parallelReductionOperations.Count,
        DurationMs = (long)(SystemTime.UtcNow - parallelReductionStart).TotalMilliseconds,
        BatchedOperations = parallelReductionOperations.ToList()
    });

    UpdateIndexingStats(context, stats);

    performance.OnCompleted = () => BatchCompleted("Current Map", "Map", sourceCount, count, performanceStats);

    logIndexing.Debug("Mapped {0} documents for {1}", count, PublicName);

    return performance;
}
public DatabaseQueryOperation(DocumentDatabase database, string indexName, IndexQuery query, IStorageActionsAccessor actions, CancellationTokenSource cancellationTokenSource)
{
    this.database = database;
    this.indexName = indexName != null ? indexName.Trim() : null;
    this.query = query;
    this.actions = actions;
    cancellationToken = cancellationTokenSource.Token;
    queryStat = database.Queries.AddToCurrentlyRunningQueryList(indexName, query, cancellationTokenSource);

    if (query.ShowTimings == false)
        return;

    executionTimes[QueryTimings.Lucene] = 0;
    executionTimes[QueryTimings.LoadDocuments] = 0;
    executionTimes[QueryTimings.TransformResults] = 0;
}
protected override bool IsIndexStale(IndexStats indexesStat, IStorageActionsAccessor actions, bool isIdle, Reference<bool> onlyFoundIdleWork)
{
    onlyFoundIdleWork.Value = false;
    return actions.Staleness.IsReduceStale(indexesStat.Name);
}
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IndexingBatch batch, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    var count = 0;
    var sourceCount = 0;
    var sw = Stopwatch.StartNew();
    var start = SystemTime.UtcNow;

    Write((indexWriter, analyzer, stats) =>
    {
        var processedKeys = new HashSet<string>();
        var batchers = context.IndexUpdateTriggers.Select(x => x.CreateBatcher(name))
            .Where(x => x != null)
            .ToList();
        try
        {
            RecordCurrentBatch("Current", batch.Docs.Count);

            var docIdTerm = new Term(Constants.DocumentIdFieldName);
            var documentsWrapped = batch.Docs.Select((doc, i) =>
            {
                Interlocked.Increment(ref sourceCount);
                if (doc.__document_id == null)
                {
                    throw new ArgumentException(
                        string.Format("Cannot index something which doesn't have a document id, but got: '{0}'", doc));
                }

                string documentId = doc.__document_id.ToString();
                if (processedKeys.Add(documentId) == false)
                    return doc;

                batchers.ApplyAndIgnoreAllErrors(
                    exception =>
                    {
                        logIndexing.WarnException(
                            string.Format("Error when executed OnIndexEntryDeleted trigger for index '{0}', key: '{1}'",
                                          name, documentId),
                            exception);
                        context.AddError(name, documentId, exception.Message, "OnIndexEntryDeleted Trigger");
                    },
                    trigger => trigger.OnIndexEntryDeleted(documentId));

                if (batch.SkipDeleteFromIndex[i] == false ||
                    context.ShouldRemoveFromIndex(documentId)) // maybe it is recently deleted?
                {
                    indexWriter.DeleteDocuments(docIdTerm.CreateTerm(documentId.ToLowerInvariant()));
                }

                return doc;
            })
            .Where(x => x is FilteredDocument == false)
            .ToList();

            var allReferencedDocs = new ConcurrentQueue<IDictionary<string, HashSet<string>>>();

            BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, documentsWrapped, (partition) =>
            {
                var anonymousObjectToLuceneDocumentConverter = new AnonymousObjectToLuceneDocumentConverter(indexDefinition, viewGenerator);
                var luceneDoc = new Document();
                var documentIdField = new Field(Constants.DocumentIdFieldName, "dummy", Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS);

                using (CurrentIndexingScope.Current = new CurrentIndexingScope(LoadDocument, allReferencedDocs.Enqueue))
                {
                    foreach (var doc in RobustEnumerationIndex(partition, viewGenerator.MapDefinitions, stats))
                    {
                        float boost;
                        var indexingResult = GetIndexingResult(doc, anonymousObjectToLuceneDocumentConverter, out boost);

                        if (indexingResult.NewDocId != null && indexingResult.ShouldSkip == false)
                        {
                            Interlocked.Increment(ref count);
                            luceneDoc.GetFields().Clear();
                            luceneDoc.Boost = boost;
                            documentIdField.SetValue(indexingResult.NewDocId.ToLowerInvariant());
                            luceneDoc.Add(documentIdField);
                            foreach (var field in indexingResult.Fields)
                            {
                                luceneDoc.Add(field);
                            }
                            batchers.ApplyAndIgnoreAllErrors(
                                exception =>
                                {
                                    logIndexing.WarnException(
                                        string.Format("Error when executed OnIndexEntryCreated trigger for index '{0}', key: '{1}'",
                                                      name, indexingResult.NewDocId),
                                        exception);
                                    context.AddError(name, indexingResult.NewDocId, exception.Message, "OnIndexEntryCreated Trigger");
                                },
                                trigger => trigger.OnIndexEntryCreated(indexingResult.NewDocId, luceneDoc));
                            LogIndexedDocument(indexingResult.NewDocId, luceneDoc);
                            AddDocumentToIndex(indexWriter, luceneDoc, analyzer);
                        }

                        Interlocked.Increment(ref stats.IndexingSuccesses);
                    }
                }
            });

            IDictionary<string, HashSet<string>> result;
            while (allReferencedDocs.TryDequeue(out result))
            {
                foreach (var referencedDocument in result)
                {
                    actions.Indexing.UpdateDocumentReferences(name, referencedDocument.Key, referencedDocument.Value);
                }
            }
        }
        catch (Exception e)
        {
            batchers.ApplyAndIgnoreAllErrors(
                ex =>
                {
                    logIndexing.WarnException("Failed to notify index update trigger batcher about an error", ex);
                    context.AddError(name, null, ex.Message, "AnErrorOccured Trigger");
                },
                x => x.AnErrorOccured(e));
            throw;
        }
        finally
        {
            batchers.ApplyAndIgnoreAllErrors(
                e =>
                {
                    logIndexing.WarnException("Failed to dispose on index update trigger", e);
                    context.AddError(name, null, e.Message, "Dispose Trigger");
                },
                x => x.Dispose());
            BatchCompleted("Current");
        }

        return new IndexedItemsInfo
        {
            ChangedDocs = sourceCount,
            HighestETag = batch.HighestEtagInBatch
        };
    });

    AddindexingPerformanceStat(new IndexingPerformanceStats
    {
        OutputCount = count,
        ItemsCount = sourceCount,
        InputCount = batch.Docs.Count,
        Duration = sw.Elapsed,
        Operation = "Index",
        Started = start
    });

    logIndexing.Debug("Indexed {0} documents for {1}", count, name);
}
protected override bool IsIndexStale(IndexStats indexesStat, IStorageActionsAccessor actions)
{
    return actions.Staleness.IsMapStale(indexesStat.Name);
}
public void UnlockByDeletingSyncConfiguration(string fileName, IStorageActionsAccessor accessor)
{
    accessor.DeleteConfig(RavenFileNameHelper.SyncLockNameForFile(fileName));

    log.Debug("File '{0}' was unlocked", fileName);
}
private void HandleActiveIndex(UnusedIndexState thisItem, double age, double lastQuery, IStorageActionsAccessor accessor, double timeToWaitForIdle)
{
    if (age < (timeToWaitForIdle * 2.5) && lastQuery < (1.5 * timeToWaitForIdle))
        return;

    if (age < (timeToWaitForIdle * 6) && lastQuery < (2.5 * timeToWaitForIdle))
        return;

    accessor.Indexing.SetIndexPriority(thisItem.Index.indexId, IndexingPriority.Idle);
    thisItem.Index.Priority = IndexingPriority.Idle;
    documentDatabase.Notifications.RaiseNotifications(new IndexChangeNotification
    {
        Name = thisItem.Name,
        Type = IndexChangeTypes.IndexDemotedToIdle
    });
}
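// The two early returns above encode a sliding threshold: a young index keeps
// its Normal priority while it is queried often, and an aging index gets more
// slack before query recency matters. Rewritten as a single predicate for
// readability (a sketch; age and lastQuery are elapsed times in the same unit
// as timeToWaitForIdle, and ShouldDemoteToIdle is a hypothetical name):
static bool ShouldDemoteToIdle(double age, double lastQuery, double timeToWaitForIdle)
{
    // a young index that was queried recently stays active
    var youngAndRecentlyQueried = age < timeToWaitForIdle * 2.5 && lastQuery < timeToWaitForIdle * 1.5;

    // an older index tolerates a longer gap since its last query
    var agingButStillQueried = age < timeToWaitForIdle * 6 && lastQuery < timeToWaitForIdle * 2.5;

    return youngAndRecentlyQueried == false && agingButStillQueried == false;
}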
protected override Task GetApplicableTask(IStorageActionsAccessor actions)
{
    return actions.Tasks.GetMergedTask<RemoveFromIndexTask>();
}
protected override bool IsIndexStale(IndexStats indexesStat, IStorageActionsAccessor actions)
{
    return actions.Staleness.IsReduceStale(indexesStat.Name);
}
public DocumentRetriever(IStorageActionsAccessor actions, OrderedPartCollection<AbstractReadTrigger> triggers)
{
    this.actions = actions;
    this.triggers = triggers;
}
public override void IndexDocuments(AbstractViewGenerator viewGenerator, IndexingBatch batch, IStorageActionsAccessor actions, DateTime minimumTimestamp)
{
    var count = 0;
    var sourceCount = 0;
    var sw = Stopwatch.StartNew();
    var start = SystemTime.UtcNow;
    int loadDocumentCount = 0;
    long loadDocumentDuration = 0;

    Write((indexWriter, analyzer, stats) =>
    {
        var processedKeys = new HashSet<string>();
        var batchers = context.IndexUpdateTriggers.Select(x => x.CreateBatcher(indexId))
            .Where(x => x != null)
            .ToList();
        try
        {
            var indexingPerfStats = RecordCurrentBatch("Current", batch.Docs.Count);
            batch.SetIndexingPerformance(indexingPerfStats);

            var docIdTerm = new Term(Constants.DocumentIdFieldName);
            var documentsWrapped = batch.Docs.Select((doc, i) =>
            {
                Interlocked.Increment(ref sourceCount);
                if (doc.__document_id == null)
                    throw new ArgumentException(
                        string.Format("Cannot index something which doesn't have a document id, but got: '{0}'", doc));

                string documentId = doc.__document_id.ToString();
                if (processedKeys.Add(documentId) == false)
                    return doc;

                InvokeOnIndexEntryDeletedOnAllBatchers(batchers, docIdTerm.CreateTerm(documentId.ToLowerInvariant()));

                if (batch.SkipDeleteFromIndex[i] == false ||
                    context.ShouldRemoveFromIndex(documentId)) // maybe it is recently deleted?
                    indexWriter.DeleteDocuments(docIdTerm.CreateTerm(documentId.ToLowerInvariant()));

                return doc;
            })
            .Where(x => x is FilteredDocument == false)
            .ToList();

            var allReferencedDocs = new ConcurrentQueue<IDictionary<string, HashSet<string>>>();
            var allReferenceEtags = new ConcurrentQueue<IDictionary<string, Etag>>();

            BackgroundTaskExecuter.Instance.ExecuteAllBuffered(context, documentsWrapped, (partition) =>
            {
                var anonymousObjectToLuceneDocumentConverter = new AnonymousObjectToLuceneDocumentConverter(context.Database, indexDefinition, viewGenerator, logIndexing);
                var luceneDoc = new Document();
                var documentIdField = new Field(Constants.DocumentIdFieldName, "dummy", Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS);

                using (CurrentIndexingScope.Current = new CurrentIndexingScope(context.Database, PublicName))
                {
                    string currentDocId = null;
                    int outputPerDocId = 0;
                    Action<Exception, object> onErrorFunc;
                    bool skipDocument = false;

                    foreach (var doc in RobustEnumerationIndex(partition, viewGenerator.MapDefinitions, stats, out onErrorFunc))
                    {
                        float boost;
                        IndexingResult indexingResult;
                        try
                        {
                            indexingResult = GetIndexingResult(doc, anonymousObjectToLuceneDocumentConverter, out boost);
                        }
                        catch (Exception e)
                        {
                            onErrorFunc(e, doc);
                            continue;
                        }

                        // ReSharper disable once RedundantBoolCompare --> code clarity
                        if (indexingResult.NewDocId == null || indexingResult.ShouldSkip != false)
                        {
                            continue;
                        }

                        if (currentDocId != indexingResult.NewDocId)
                        {
                            currentDocId = indexingResult.NewDocId;
                            outputPerDocId = 0;
                            skipDocument = false;
                        }

                        if (skipDocument)
                            continue;

                        outputPerDocId++;

                        if (EnsureValidNumberOfOutputsForDocument(currentDocId, outputPerDocId) == false)
                        {
                            skipDocument = true;
                            continue;
                        }

                        Interlocked.Increment(ref count);
                        luceneDoc.GetFields().Clear();
                        luceneDoc.Boost = boost;
                        documentIdField.SetValue(indexingResult.NewDocId.ToLowerInvariant());
                        luceneDoc.Add(documentIdField);
                        foreach (var field in indexingResult.Fields)
                        {
                            luceneDoc.Add(field);
                        }
                        batchers.ApplyAndIgnoreAllErrors(
                            exception =>
                            {
                                logIndexing.WarnException(
                                    string.Format("Error when executed OnIndexEntryCreated trigger for index '{0}', key: '{1}'",
                                                  indexId, indexingResult.NewDocId),
                                    exception);
                                context.AddError(indexId, indexingResult.NewDocId, exception.Message, "OnIndexEntryCreated Trigger");
                            },
                            trigger => trigger.OnIndexEntryCreated(indexingResult.NewDocId, luceneDoc));
                        LogIndexedDocument(indexingResult.NewDocId, luceneDoc);
                        AddDocumentToIndex(indexWriter, luceneDoc, analyzer);
                        Interlocked.Increment(ref stats.IndexingSuccesses);
                    }

                    allReferenceEtags.Enqueue(CurrentIndexingScope.Current.ReferencesEtags);
                    allReferencedDocs.Enqueue(CurrentIndexingScope.Current.ReferencedDocuments);
                    Interlocked.Add(ref loadDocumentCount, CurrentIndexingScope.Current.LoadDocumentCount);
                    Interlocked.Add(ref loadDocumentDuration, CurrentIndexingScope.Current.LoadDocumentDuration.ElapsedMilliseconds);
                }
            });

            UpdateDocumentReferences(actions, allReferencedDocs, allReferenceEtags);
        }
        catch (Exception e)
        {
            batchers.ApplyAndIgnoreAllErrors(
                ex =>
                {
                    logIndexing.WarnException("Failed to notify index update trigger batcher about an error", ex);
                    context.AddError(indexId, null, ex.Message, "AnErrorOccured Trigger");
                },
                x => x.AnErrorOccured(e));
            throw;
        }
        finally
        {
            batchers.ApplyAndIgnoreAllErrors(
                e =>
                {
                    logIndexing.WarnException("Failed to dispose on index update trigger", e);
                    context.AddError(indexId, null, e.Message, "Dispose Trigger");
                },
                x => x.Dispose());
            BatchCompleted("Current");
        }

        return new IndexedItemsInfo(batch.HighestEtagBeforeFiltering)
        {
            ChangedDocs = sourceCount
        };
    });

    AddindexingPerformanceStat(new IndexingPerformanceStats
    {
        OutputCount = count,
        ItemsCount = sourceCount,
        InputCount = batch.Docs.Count,
        Duration = sw.Elapsed,
        Operation = "Index",
        Started = start,
        LoadDocumentCount = loadDocumentCount,
        LoadDocumentDurationMs = loadDocumentDuration
    });

    logIndexing.Debug("Indexed {0} documents for {1}", count, indexId);
}
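// The currentDocId/outputPerDocId/skipDocument bookkeeping above caps how many
// index entries a single source document may emit. The same idea in isolation
// (a sketch; CapOutputsPerDocument and its parameters are hypothetical):
static IEnumerable<TResult> CapOutputsPerDocument<TResult>(IEnumerable<TResult> results, Func<TResult, string> docIdOf, int maxOutputsPerDocument)
{
    string currentDocId = null;
    var outputPerDocId = 0;
    var skipDocument = false;

    foreach (var result in results)
    {
        var docId = docIdOf(result);
        if (docId != currentDocId)
        {
            currentDocId = docId;
            outputPerDocId = 0;
            skipDocument = false;
        }

        if (skipDocument)
            continue;

        outputPerDocId++;
        if (outputPerDocId > maxOutputsPerDocument)
        {
            // mirrors EnsureValidNumberOfOutputsForDocument returning false:
            // drop the rest of this document's outputs
            skipDocument = true;
            continue;
        }

        yield return result;
    }
}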
public StreamQueryContent(HttpRequestMessage req, QueryActions.DatabaseQueryOperation queryOp, IStorageActionsAccessor accessor, CancellationTimeout timeout, Action<string> contentTypeSetter)
{
    headers = CurrentOperationContext.Headers.Value;
    user = CurrentOperationContext.User.Value;
    this.req = req;
    this.queryOp = queryOp;
    this.accessor = accessor;
    _timeout = timeout;
    outputContentTypeSetter = contentTypeSetter;
}
protected override Task GetApplicableTask(IStorageActionsAccessor actions)
{
    return null;
}
private int ProcessBatch(AbstractViewGenerator viewGenerator, List<object> currentDocumentResults, string currentKey, HashSet<ReduceKeyAndBucket> changes,
                         IStorageActionsAccessor actions, IDictionary<string, int> statsPerKey, Stopwatch reduceDuringMapLinqExecution,
                         Stopwatch putMappedResultsDuration, Stopwatch convertToRavenJObjectDuration)
{
    if (currentKey == null || currentDocumentResults.Count == 0)
    {
        return 0;
    }

    var old = CurrentIndexingScope.Current;
    try
    {
        CurrentIndexingScope.Current = null;

        if (logIndexing.IsDebugEnabled)
        {
            var sb = new StringBuilder()
                .AppendFormat("Index {0} for document {1} resulted in:", PublicName, currentKey)
                .AppendLine();
            foreach (var currentDocumentResult in currentDocumentResults)
            {
                sb.AppendLine(JsonConvert.SerializeObject(currentDocumentResult));
            }
            logIndexing.Debug(sb.ToString());
        }

        int count = 0;

        var results = RobustEnumerationReduceDuringMapPhase(currentDocumentResults.GetEnumerator(), viewGenerator.ReduceDefinition, reduceDuringMapLinqExecution);
        foreach (var doc in results)
        {
            count++;

            var reduceValue = viewGenerator.GroupByExtraction(doc);
            if (reduceValue == null)
            {
                logIndexing.Debug("Field {0} is used as the reduce key and cannot be null, skipping document {1}",
                                  viewGenerator.GroupByExtraction, currentKey);
                continue;
            }
            string reduceKey = ReduceKeyToString(reduceValue);

            RavenJObject data;
            using (StopwatchScope.For(convertToRavenJObjectDuration))
            {
                data = GetMappedData(doc);
            }

            if (logIndexing.IsDebugEnabled)
            {
                logIndexing.Debug("Index {0} for document {1} resulted in ({2}): {3}", PublicName, currentKey, reduceKey, data);
            }

            using (StopwatchScope.For(putMappedResultsDuration))
            {
                actions.MapReduce.PutMappedResult(indexId, currentKey, reduceKey, data);
            }

            statsPerKey[reduceKey] = statsPerKey.GetOrDefault(reduceKey) + 1;
            actions.General.MaybePulseTransaction();
            changes.Add(new ReduceKeyAndBucket(IndexingUtil.MapBucket(currentKey), reduceKey));
        }

        return count;
    }
    finally
    {
        CurrentIndexingScope.Current = old;
    }
}
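// ProcessBatch is driven by the grouping loop near the top of this listing:
// map results arrive ordered by document id, a batch is flushed whenever the
// id changes, and one final flush handles the trailing group. The same control
// flow in miniature (a sketch; FlushPerKey and its parameters are hypothetical):
static int FlushPerKey<T>(IEnumerable<T> orderedResults, Func<T, string> keyOf, Func<string, List<T>, int> processBatch)
{
    var total = 0;
    var buffer = new List<T>();
    string currentKey = null;

    foreach (var item in orderedResults)
    {
        var key = keyOf(item);
        if (key != currentKey)
        {
            // like ProcessBatch, the callback must tolerate a null key and an
            // empty buffer on the very first iteration and return 0
            total += processBatch(currentKey, buffer);
            buffer.Clear();
            currentKey = key;
        }
        buffer.Add(item);
    }

    total += processBatch(currentKey, buffer); // flush the trailing group
    return total;
}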