private static byte[] GetBackupData(BackupDataLogKey logKey, int dataSize)
{
    if (_backupCache != null)
    {
        return _backupCache[logKey];
        #region Under study: bigger than Sop.DataBlock buffering!
        //byte[] r = (byte[])BackupCache[LogKey];
        //if (r == null)
        //{
        //    if (!BackupCache.CacheCollection.MovePrevious())
        //        BackupCache.CacheCollection.MoveFirst();
        //    for (int i = 0; i < 3; i++)
        //    {
        //        BackupDataLogKey lk = (BackupDataLogKey)BackupCache.CacheCollection.CurrentKey;
        //        if (Collections.OnDisk.Algorithm._region.FirstWithinSecond(Address1, Size1, Address2, Size2))
        //            return true;
        //        else if (Collections.OnDisk.Algorithm._region.Intersect(Address1, Size1, Address2, Size2)
        //    }
        //}
        //if (r != null)
        //{
        //    if (r.Length >= DataSize)
        //        return r;
        //}
        #endregion
    }
    return null;
}
public int Compare(T x, T y)
{
    BackupDataLogKey xKey = x;
    BackupDataLogKey yKey = y;
    int r = String.CompareOrdinal(xKey.SourceFilename, yKey.SourceFilename);
    // same source file: order by source data address.
    return r == 0 ? xKey.SourceDataAddress.CompareTo(yKey.SourceDataAddress) : r;
}
private static void SetBackupData(BackupDataLogKey logKey, byte[] data, bool forceCached)
{
    if (_backupCache != null && data.Length == (int)logKey.DataBlockSize &&
        (forceCached || _backupCache.Count < _backupCache.MinCapacity))
    {
        _backupCache[logKey] = data;
    }
}
/// <summary>
/// Backup data of a certain disk region onto the transaction log file.
/// </summary>
internal void BackupData(List<KeyValuePair<RecordKey, Region>> dataRegions,
    ConcurrentIOPoolManager readPool, ConcurrentIOPoolManager writePool)
{
    LogTracer.Verbose("BackupData: Start for Thread {0}.", Thread.CurrentThread.ManagedThreadId);
    foreach (KeyValuePair<RecordKey, Region> dataRegion in dataRegions)
    {
        RecordKey key = dataRegion.Key;
        Region region = dataRegion.Value;
        var f = (OnDisk.File.IFile)Server.GetFile(key.Filename);
        string fFilename = key.Filename;

        // for each disk area in the region, copy it to the transaction file.
        foreach (KeyValuePair<long, int> area in region)
        {
            // short circuit if an IO exception was detected.
            if (readPool.AsyncThreadException != null)
                throw readPool.AsyncThreadException;
            if (writePool.AsyncThreadException != null)
                throw writePool.AsyncThreadException;

            var logKey = new BackupDataLogKey();
            logKey.SourceFilename = f == null ? fFilename : f.Filename;
            logKey.SourceDataAddress = area.Key;

            IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs;
            long mergedBlockStartAddress, mergedBlockSize;

            // todo: optimize LogCollection locking!
            //LogCollection.Locker.Lock();
            LogTracer.Verbose("Transaction.BackupData: Thread {0}, Locking LogCollection, count {1}.",
                Thread.CurrentThread.ManagedThreadId, LogCollection.Count);

            bool isIntersectingLogs = GetIntersectingLogs(logKey, area.Value, out intersectingLogs,
                out mergedBlockStartAddress, out mergedBlockSize);
            if (isIntersectingLogs)
                BackupDataWithIntersection(intersectingLogs, logKey, area, f, fFilename, readPool, writePool, key);
            else
                BackupDataWithNoIntersection(intersectingLogs, logKey, area, f, fFilename, readPool, writePool, key);

            LogTracer.Verbose("Transaction.BackupData: Thread {0}, Unlocking LogCollection, count {1}.",
                Thread.CurrentThread.ManagedThreadId, LogCollection.Count);
            //LogCollection.Locker.Unlock();
        }
    }
}
private void BackupDataWithNoIntersection(
    IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs,
    BackupDataLogKey logKey, KeyValuePair<long, int> area, OnDisk.File.IFile f,
    string fFilename, ConcurrentIOPoolManager readPool, ConcurrentIOPoolManager writePool,
    RecordKey key)
{
    string systemBackupFilename = Server.Path + DataBackupFilename;
    int size = area.Value;
    key.Address = area.Key;

    // no intersecting nor mergeable logs; add a new log, then back up and log the data area.
    ConcurrentIOData reader = f != null
        ? readPool.GetInstance(f, size)
        : readPool.GetInstance(fFilename, null, size);
    ConcurrentIOData writer = writePool.GetInstance(systemBackupFilename, (TransactionRoot)Root);
    if (reader == null || writer == null)
        throw new SopException("This program has a bug! Didn't get reader or writer from Async IO Pool.");

    LogTracer.Verbose("BackupDataWithNoIntersection: Start for Thread {0}.", Thread.CurrentThread.ManagedThreadId);

    var logValue = new BackupDataLogValue();
    logValue.DataSize = size;
    logValue.TransactionId = Id;
    logValue.BackupFileHandle = GetLogBackupFileHandle(DataBackupFilename);

    // return the current backup file size and grow it to make room for the data to be backed up...
    logValue.BackupDataAddress = GrowBackupFile(size, writer.FileStream);

    // save a record of the backed up data..
    LogCollection.Add(logKey, logValue);

    // log only after the data was backed up!!
    Sop.VoidFunc logBackedupData = () =>
    {
        UpdateLogger.LogLine("{0}{1}:{2} to {3}:{4} Size={5}", BackupFromToken,
            f != null ? f.Filename : fFilename, area.Key,
            DataBackupFilename, logValue.BackupDataAddress, size);
    };

    writer.FileStream.Seek(logValue.BackupDataAddress, SeekOrigin.Begin, true);
    reader.FileStream.Seek(area.Key, SeekOrigin.Begin, true);
    reader.FileStream.BeginRead(reader.Buffer, 0, size, ReadCallback,
        new object[] { new[] { reader, writer }, true, logKey, logBackedupData });
}
private bool IsInUpdatedBlocks(CollectionOnDisk collection, long blockAddress, int blockSize)
{
    var logKey = new BackupDataLogKey();
    logKey.SourceFilename = collection.File.Filename;
    logKey.SourceDataAddress = blockAddress;

    IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs;
    long mergedBlockStartAddress, mergedBlockSize;

    // GetIntersectingLogs returns true with a null target when the block falls
    // entirely within an already backed up (updated) area.
    return GetIntersectingLogs(logKey, blockSize, out intersectingLogs,
               out mergedBlockStartAddress, out mergedBlockSize) && intersectingLogs == null;
}
internal static bool RegisterRecycle(
    Collections.Generic.ISortedDictionary<RecordKey, long> addStore,
    Collections.Generic.ISortedDictionary<RecordKey, long> recycleStore,
    CollectionOnDisk collection, long blockAddress, int blockSize)
{
    var key = CreateKey(collection, blockAddress);
    //if (InStore(key, blockSize, recycleStore))
    //    return false;

    var logKey = new BackupDataLogKey();
    logKey.SourceFilename = collection.File.Filename;
    logKey.SourceDataAddress = blockAddress;

    IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs;
    long mergedBlockStartAddress, mergedBlockSize;
    if (GetIntersectingLogs(logKey, blockSize, out intersectingLogs,
            out mergedBlockStartAddress, out mergedBlockSize))
    {
        if (intersectingLogs == null)
        {
            RegisterAdd(addStore, null, null, collection, blockAddress, blockSize, false);
            return true;
        }
        // get area(s) outside each intersecting segment and back them up...
        var newRegion = new Region(blockAddress, blockSize);
        bool wasIntersected = false;
        foreach (KeyValuePair<BackupDataLogKey, BackupDataLogValue> entry in intersectingLogs)
        {
            if (newRegion.Subtract(entry.Key.SourceDataAddress, entry.Value.DataSize))
                wasIntersected = true;
        }
        if (wasIntersected)
        {
            foreach (KeyValuePair<long, int> newArea in newRegion)
                RegisterAdd(addStore, null, null, collection, newArea.Key, newArea.Value, false);
            return true;
        }
    }
    RegisterAdd(addStore, null, null, collection, blockAddress, blockSize, false);
    return true;
}
/// <summary>
/// Read a block from the log backup file.
/// </summary>
/// <param name="collection"></param>
/// <param name="dataAddress"></param>
/// <param name="getForRemoval"></param>
/// <param name="readMetaInfoOnly"></param>
/// <returns></returns>
public static byte[] ReadBlockFromBackup(OnDisk.Algorithm.Collection.ICollectionOnDisk collection,
    long dataAddress, bool getForRemoval, bool readMetaInfoOnly)
{
    if (!IsGlobal && LogCollection != null)
    {
        var logKey = new BackupDataLogKey();
        logKey.SourceFilename = collection.File.Filename;
        logKey.SourceDataAddress = dataAddress;

        // NOTE: study(!) - optimize BackupCache use to be 1 instance per Transaction.
        byte[] blockBuffer = GetBackupData(logKey, (int)collection.DataBlockSize);
        if (blockBuffer == null)
        {
            var lv = LogCollection[logKey];
            if (lv != null)
            {
                // read the block from disk and cache it..
                int blockSize;
                string fname = GetLogBackupFilename(lv.BackupFileHandle);
                var fs = BackupStreams[fname];
                if (fs == null)
                {
                    string fname2 = GetLogBackupFilename(lv.BackupFileHandle);
                    if (collection.File.Server != null)
                        fname2 = collection.File.Server.NormalizePath(fname2);
                    fs = File.UnbufferedOpen(fname2, FileAccess.Read, (int)collection.DataBlockSize, out blockSize);
                    BackupStreams[fname] = fs;
                }
                blockBuffer = new byte[(int)collection.DataBlockSize];
                // seek to where this block's backup copy was written in the backup file
                // (BackupDataAddress, not the source data address).
                fs.Seek(lv.BackupDataAddress, SeekOrigin.Begin);
                if (fs.Read(blockBuffer, 0, blockBuffer.Length) <= 0)
                    throw new SopException("Read failed on Transaction.ReadBlockFromBackup.");
                SetBackupData(logKey, blockBuffer, true);
            }
        }
        return blockBuffer;
    }
    return null;
}
/// <summary>
/// Backup data of a certain disk region onto the transaction log file.
/// </summary>
internal void BackupData(List<KeyValuePair<RecordKey, Region>> dataRegions,
    ConcurrentIOPoolManager readPool, ConcurrentIOPoolManager writePool)
{
    foreach (KeyValuePair<RecordKey, Region> dataRegion in dataRegions)
    {
        RecordKey key = dataRegion.Key;
        Region region = dataRegion.Value;
        var f = (OnDisk.File.IFile)Server.GetFile(key.Filename);
        string fFilename = key.Filename;

        // for each disk area in the region, copy it to the transaction file.
        foreach (KeyValuePair<long, int> area in region)
        {
            var logKey = new BackupDataLogKey();
            logKey.SourceFilename = f == null ? fFilename : f.Filename;
            logKey.SourceDataAddress = area.Key;

            IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs;
            long mergedBlockStartAddress, mergedBlockSize;
            if (GetIntersectingLogs(logKey, area.Value, out intersectingLogs,
                    out mergedBlockStartAddress, out mergedBlockSize))
                BackupDataWithIntersection(intersectingLogs, logKey, area, f, fFilename, readPool, writePool, key);
            else
                BackupDataWithNoIntersection(intersectingLogs, logKey, area, f, fFilename, readPool, writePool, key);
        }
        // Detect and merge backed up blocks to minimize growth of items stored in LogCollection;
        // the impact of not merging is a slower rollback of an unfinished pending transaction from a previous run.
        //DetectAndMergeBlocks();
    }
}
internal static bool GetIntersectingLogs(BackupDataLogKey logKey, int logKeySize,
    out IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> target,
    out long startMergedBlockAddress, out long mergedBlockSize)
{
    target = null;
    startMergedBlockAddress = mergedBlockSize = 0;
    var l = new List<KeyValuePair<BackupDataLogKey, BackupDataLogValue>>();

    // position the cursor at the log entry at, or nearest before, logKey.
    if (!LogCollection.Search(logKey))
    {
        if (!LogCollection.MovePrevious())
        {
            if (!LogCollection.MoveFirst())
                return false;
        }
    }

    long address1 = logKey.SourceDataAddress;
    int size1 = logKeySize;
    startMergedBlockAddress = address1;
    mergedBlockSize = size1;
    bool intersected = false;

    // scan a small window of neighboring log entries; the window is reset
    // (i = 0) each time an intersecting entry is found.
    for (int i = 0; i < 3; i++)
    {
        var key = LogCollection.CurrentKey;
        var value = LogCollection.CurrentValue;
        if (logKey.SourceFilename == key.SourceFilename)
        {
            long address2 = key.SourceDataAddress;
            int size2 = value.DataSize;

            // block is fully within an already backed up area: report true with a null target.
            if (RegionLogic.FirstWithinSecond(address1, size1, address2, size2))
                return true;

            if (RegionLogic.Intersect(address1, size1, address2, size2))
            {
                l.Add(new KeyValuePair<BackupDataLogKey, BackupDataLogValue>(key, value));
                i = 0;
                intersected = true;

                // extend the merged block to cover this intersecting entry.
                if (address2 < startMergedBlockAddress)
                    startMergedBlockAddress = address2;
                if (startMergedBlockAddress + mergedBlockSize < address2 + size2)
                {
                    long l2 = address2 + size2 - startMergedBlockAddress;
                    if (l2 >= int.MaxValue)
                        break;
                    mergedBlockSize = l2;
                }
            }
            else if (intersected)
                break;
        }
        else
            break;
        if (!LogCollection.MoveNext())
            break;
    }
    if (l.Count > 0)
    {
        target = l;
        return true;
    }
    return false;
}
private void BackupDataWithNoIntersection(
    IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs,
    BackupDataLogKey logKey, KeyValuePair<long, int> area, OnDisk.File.IFile f,
    string fFilename, ConcurrentIOPoolManager readPool, ConcurrentIOPoolManager writePool,
    RecordKey key)
{
    string systemBackupFilename = Server.Path + DataBackupFilename;
    int size = area.Value;
    key.Address = area.Key;
    //if (RegisterAdd(_addStore, null, null, key, size, false))
    //{
    //    Logger.LogLine("Extending, skipping Backup...");
    //    return;
    //}

    // no intersecting nor mergeable logs; add a new log,
    // then back up and log the data area.
    ConcurrentIOData reader = f != null
        ? readPool.GetInstance(f, size)
        : readPool.GetInstance(fFilename, null, size);
    ConcurrentIOData writer = writePool.GetInstance(systemBackupFilename, (TransactionRoot)Root, size);
    if (reader == null || writer == null)
        return;

    var logValue = new BackupDataLogValue();
    logValue.DataSize = size;
    logValue.TransactionId = Id;

    // todo: can we remove this block?
    //long readerFileSize = reader.FileStream.Length;
    //if (area.Key + size > readerFileSize)
    //{
    //    int appendSize = (int)(area.Key + size - readerFileSize);
    //    key.Address = readerFileSize;
    //    RegisterAdd(_addStore, null, null, key, appendSize, false);
    //    size = (int)(readerFileSize - area.Key);
    //    logValue.DataSize = size;
    //    reader.Buffer = new byte[size];
    //}

    reader.FileStream.Seek(area.Key, SeekOrigin.Begin);
    logValue.BackupFileHandle = GetLogBackupFileHandle(DataBackupFilename);
    logValue.BackupDataAddress = writer.FileStream.Seek(0, SeekOrigin.End);
    UpdateLogger.LogLine("{0}{1}:{2} to {3}:{4} Size={5}", BackupFromToken,
        f != null ? f.Filename : fFilename, area.Key,
        DataBackupFilename, logValue.BackupDataAddress, size);

    // resize the target file to accommodate the data to be copied.
    writer.FileStream.Seek(size, SeekOrigin.End);
    writer.FileStream.Seek(logValue.BackupDataAddress, SeekOrigin.Begin);
    reader.FileStream.BeginRead(reader.Buffer, 0, size, ReadCallback,
        new object[] { new[] { reader, writer }, true, logKey });

    // save a record of the backed up data..
    LogCollection.Add(logKey, logValue);
}
private void BackupDataWithIntersection(
    IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs,
    BackupDataLogKey logKey, KeyValuePair<long, int> area, OnDisk.File.IFile f,
    string fFilename, ConcurrentIOPoolManager readPool, ConcurrentIOPoolManager writePool,
    RecordKey key)
{
    if (intersectingLogs == null)
    {
        // process conflicts with other transactions...
        ProcessTransactionConflicts(logKey, area.Value);
        // area is within an already backed up area (intersectingLogs == null); do nothing...
        return;
    }

    // get area(s) outside each intersecting segment and back them up...
    var newRegion = new Region(area.Key, area.Value);
    bool wasIntersected = false;
    foreach (KeyValuePair<BackupDataLogKey, BackupDataLogValue> entry in intersectingLogs)
    {
        // process conflicts with other transactions...
        ProcessTransactionConflicts(entry.Key, entry.Value.DataSize);
        if (newRegion.Subtract(entry.Key.SourceDataAddress, entry.Value.DataSize))
            wasIntersected = true;
    }

    // copy
    if (!wasIntersected)
        return;
    foreach (KeyValuePair<long, int> newArea in newRegion)
    {
        var logKey2 = new BackupDataLogKey();
        logKey2.SourceFilename = logKey.SourceFilename;
        logKey2.SourceDataAddress = newArea.Key;

        var logValue = new BackupDataLogValue();
        logValue.DataSize = newArea.Value;
        logValue.TransactionId = Id;

        int newSize = newArea.Value;
        key.Address = newArea.Key;
        //if (RegisterAdd(_addStore, null, null, key, newArea.Value, false))
        //    return;

        logValue.BackupFileHandle = GetLogBackupFileHandle(DataBackupFilename);
        ConcurrentIOData reader = f != null
            ? readPool.GetInstance(f, newArea.Value)
            : readPool.GetInstance(fFilename, null, newArea.Value);
        if (reader == null)
            throw new InvalidOperationException("Can't get ConcurrentIOData from ReadPool");

        string systemBackupFilename = Server.Path + DataBackupFilename;
        ConcurrentIOData writer = writePool.GetInstance(systemBackupFilename, (TransactionRoot)Root, newArea.Value);
        if (writer == null)
            throw new InvalidOperationException("Can't get ConcurrentIOData from WritePool");

        logValue.BackupDataAddress = writer.FileStream.Seek(0, SeekOrigin.End);

        // todo: can we remove this block?
        //long readerFileSize = reader.FileStream.Length;
        //if (newArea.Key + newArea.Value > readerFileSize)
        //{
        //    int appendSize = (int)(newArea.Key + newArea.Value - readerFileSize);
        //    key.Address = readerFileSize;
        //    RegisterAdd(_addStore, null, null, key, appendSize, false);
        //    newSize = (int)(readerFileSize - newArea.Key);
        //    logValue.DataSize = newSize;
        //    reader.Buffer = new byte[newSize];
        //}

        reader.FileStream.Seek(newArea.Key, SeekOrigin.Begin);
        UpdateLogger.LogLine("{0}{1}:{2} to {3}:{4} Size={5}", BackupFromToken,
            logKey2.SourceFilename, logKey2.SourceDataAddress,
            DataBackupFilename, logValue.BackupDataAddress, newSize);

        // resize the target file to accommodate the data to be copied.
        writer.FileStream.Seek(newSize, SeekOrigin.End);
        writer.FileStream.Seek(logValue.BackupDataAddress, SeekOrigin.Begin);
        reader.FileStream.BeginRead(reader.Buffer, 0, newSize, ReadCallback,
            new object[] { new[] { reader, writer }, true, logKey2 });

        // save a record of the backed up data..
        LogCollection.Add(logKey2, logValue);
    }
}
protected virtual void ProcessTransactionConflicts(BackupDataLogKey logKey, int logKeySize)
{
    // process conflicts with other transactions and register...
}
private void BackupDataWithIntersection(
    IEnumerable<KeyValuePair<BackupDataLogKey, BackupDataLogValue>> intersectingLogs,
    BackupDataLogKey logKey, KeyValuePair<long, int> area, OnDisk.File.IFile f,
    string fFilename, ConcurrentIOPoolManager readPool, ConcurrentIOPoolManager writePool,
    RecordKey key)
{
    if (intersectingLogs == null)
    {
        // process conflicts with other transactions...
        //ProcessTransactionConflicts(logKey, area.Value);
        // area is within an already backed up area (intersectingLogs == null); do nothing...
        return;
    }

    LogTracer.Verbose("BackupDataWithIntersection: Start for Thread {0}.", Thread.CurrentThread.ManagedThreadId);

    // get area(s) outside each intersecting segment and back them up...
    var newRegion = new Region(area.Key, area.Value);

    #region for future implementation... ?
    //bool wasIntersected = false;
    //foreach (KeyValuePair<BackupDataLogKey, BackupDataLogValue> entry in intersectingLogs)
    //{
    //    // process conflicts with other transactions...
    //    ProcessTransactionConflicts(entry.Key, entry.Value.DataSize);
    //    if (newRegion.Subtract(entry.Key.SourceDataAddress, entry.Value.DataSize))
    //        wasIntersected = true;
    //}
    //if (!wasIntersected) return;
    #endregion

    // copy modified blocks to the transaction backup file.
    foreach (KeyValuePair<long, int> newArea in newRegion)
    {
        // short circuit if an IO exception was detected.
        if (readPool.AsyncThreadException != null)
            throw readPool.AsyncThreadException;
        if (writePool.AsyncThreadException != null)
            throw writePool.AsyncThreadException;

        var logKey2 = new BackupDataLogKey();
        logKey2.SourceFilename = logKey.SourceFilename;
        logKey2.SourceDataAddress = newArea.Key;

        var logValue = new BackupDataLogValue();
        logValue.DataSize = newArea.Value;
        logValue.TransactionId = Id;

        int newSize = newArea.Value;
        key.Address = newArea.Key;
        //if (RegisterAdd(_addBlocksStore, null, null, key, newArea.Value, false))
        //    return;

        logValue.BackupFileHandle = GetLogBackupFileHandle(DataBackupFilename);
        ConcurrentIOData reader = f != null
            ? readPool.GetInstance(f, newArea.Value)
            : readPool.GetInstance(fFilename, null, newArea.Value);
        if (reader == null)
            throw new InvalidOperationException("Can't get ConcurrentIOData from ReadPool");

        string systemBackupFilename = Server.Path + DataBackupFilename;
        ConcurrentIOData writer = writePool.GetInstance(systemBackupFilename, (TransactionRoot)Root);
        if (writer == null)
            throw new InvalidOperationException("Can't get ConcurrentIOData from WritePool");

        // return the current backup file size and grow it to make room for the data to be backed up...
        logValue.BackupDataAddress = GrowBackupFile(newSize, writer.FileStream);

        // save a record of the backed up data..
        LogCollection.Add(logKey2, logValue);

        // prepare a lambda expression to log only after the data was backed up!!
        Sop.VoidFunc logBackedupData = () =>
        {
            UpdateLogger.LogLine("{0}{1}:{2} to {3}:{4} Size={5}", BackupFromToken,
                logKey2.SourceFilename, logKey2.SourceDataAddress,
                DataBackupFilename, logValue.BackupDataAddress, newSize);
        };

        writer.FileStream.Seek(logValue.BackupDataAddress, SeekOrigin.Begin, true);
        reader.FileStream.Seek(newArea.Key, SeekOrigin.Begin, true);
        reader.FileStream.BeginRead(reader.Buffer, 0, newSize, ReadCallback,
            new object[] { new[] { reader, writer }, true, logKey2, logBackedupData });
    }
}