public LogItemDetail ReadLogItemDetail([FromUri] string appenderName, [FromUri] string partitionKey, [FromUri] string rowKey)
{
    LogTableEntity logTableEntity = TableService.Instance.ReadLogTableEntity(appenderName, partitionKey, rowKey);

    if (logTableEntity != null)
    {
        return (LogItemDetail)logTableEntity;
    }

    return null;
}
public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
{
    var partitionKey = DateTime.UtcNow.ToString(_partitionKeyFormat);
    var logTableEntry = new LogTableEntity(partitionKey, Guid.NewGuid().ToString(), formatter.Invoke(state, exception), exception, _categoryName, logLevel.ToString());

    try
    {
        _logEventQueue.Add(logTableEntry);
    }
    catch (ObjectDisposedException)
    {
        // can happen if the provider is disposed; no need to do anything, since the system is probably going down anyway
    }
}
/// <summary>
/// Persists log4net logging events into Azure table storage
/// </summary>
/// <param name="appenderName">name of the log4net appender</param>
/// <param name="loggingEvents">collection of log4net logging events to persist into Azure table storage</param>
internal void CreateLogTableEntities(string appenderName, LoggingEvent[] loggingEvents)
{
    CloudTable cloudTable = this.GetCloudTable(appenderName);

    if (cloudTable != null)
    {
        // all loggingEvents converted to LogTableEntity objects (required for the indexing method - avoids duplicate construction)
        List<LogTableEntity> logTableEntities = new List<LogTableEntity>();

        // group by logging event date & hour - each group equates to an Azure table partition key
        // (all items in the same Azure table batch operation must use the same partition key)
        foreach (IGrouping<DateTime, LoggingEvent> groupedLoggingEvents in loggingEvents.GroupBy(x => x.TimeStamp.Date.AddHours(x.TimeStamp.Hour)))
        {
            DateTime dateHour = groupedLoggingEvents.Key; // date & hour for the current grouping

            // set the partition key for this batch of inserts (reversed ticks, so the newest entries sort first)
            string partitionKey = string.Format("{0:D19}", DateTime.MaxValue.Ticks - dateHour.Ticks + 1);

            // ensure no more than 100 items are inserted per Azure table batch insert operation
            foreach (IEnumerable<LoggingEvent> batchLoggingEvents in groupedLoggingEvents.Batch(100))
            {
                TableBatchOperation tableBatchOperation = new TableBatchOperation();

                foreach (LoggingEvent loggingEvent in batchLoggingEvents)
                {
                    // logic in the constructor also parses dictionary items out into properties
                    LogTableEntity logTableEntity = new LogTableEntity(partitionKey, loggingEvent);

                    // add to collection for indexing later
                    logTableEntities.Add(logTableEntity);

                    tableBatchOperation.Insert(logTableEntity);
                }

                cloudTable.ExecuteBatch(tableBatchOperation);
            }
        }

        IndexService.Instance.Process(appenderName, logTableEntities);
    }
}
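The partition key above subtracts the grouped hour's ticks from `DateTime.MaxValue.Ticks`, so a later hour produces a lexicographically smaller key — and since Azure table storage returns rows in ascending partition-key order, the newest log entries come back first. A minimal sketch of that arithmetic (in Python, for illustration only):

```python
# DateTime.MaxValue.Ticks in .NET (31 December 9999, 23:59:59.9999999)
MAX_TICKS = 3155378975999999999

def partition_key(date_hour_ticks: int) -> str:
    """Reversed-ticks partition key, zero-padded to 19 digits
    (the equivalent of string.Format("{0:D19}", ...) in the C# above)."""
    return f"{MAX_TICKS - date_hour_ticks + 1:019d}"

# a later hour (larger tick count) yields a lexicographically smaller key,
# so newer entries sort first in an ascending partition-key scan
earlier = partition_key(1_000)
later = partition_key(2_000)
```

Zero-padding to a fixed 19 digits is what makes the string comparison agree with the numeric one.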
public Task Start()
{
    return Task.Run(async () =>
    {
        var initialized = false;

        // Now - reference = duration since the last flush
        DateTime? referenceTimestamp = null;
        var bufferCount = 0;
        var operations = new List<TableOperation>();
        var takeTimeout = TimeSpan.FromMilliseconds(_bufferTimeout.TotalMilliseconds / 10);

        while (!_logEventQueue.IsCompleted)
        {
            var eventTaken = false;
            LogTableEntity logEvent = null;

            try
            {
                eventTaken = _logEventQueue.TryTake(out logEvent, takeTimeout);
            }
            catch (ObjectDisposedException)
            {
                // can happen if the provider is disposed; no need to do anything, since the system is probably going down anyway
                break;
            }

            if (eventTaken)
            {
                referenceTimestamp = DateTime.UtcNow;
                bufferCount++;
                operations.Add(TableOperation.Insert(logEvent));
            }

            if (DateTime.UtcNow - referenceTimestamp < _bufferTimeout && bufferCount < _bufferSize)
            {
                continue;
            }

            if (!initialized)
            {
                await _cloudTable.CreateIfNotExistsAsync();
                initialized = true;
            }

            // all operations in a batch must have the same partition key
            var groupedByPartitionKey = operations.GroupBy(operation => operation.Entity.PartitionKey);

            foreach (var tableOperations in groupedByPartitionKey)
            {
                var batchOperation = new TableBatchOperation();

                foreach (var operation in tableOperations)
                {
                    batchOperation.Add(operation);
                }

                await _cloudTable.ExecuteBatchAsync(batchOperation);
            }

            referenceTimestamp = null;
            bufferCount = 0;
            operations.Clear();
        }
    });
}
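The flush above groups buffered operations by partition key before executing each batch; Azure table batches also cap out at 100 operations, which the earlier `Batch(100)` path handles explicitly. A helper respecting both constraints might look like this (a Python sketch with a hypothetical `key` accessor, not the library's own code):

```python
from collections import defaultdict

def batches(entities, key, max_batch=100):
    """Group entities by partition key, then yield chunks of at most
    max_batch items - mirroring the Azure table batch constraints:
    one partition key per batch, no more than 100 operations."""
    groups = defaultdict(list)
    for entity in entities:
        groups[key(entity)].append(entity)
    for pk, items in groups.items():
        # slice each partition's items into fixed-size chunks
        for i in range(0, len(items), max_batch):
            yield pk, items[i:i + max_batch]
```

Each `(pk, chunk)` pair would then map to one `TableBatchOperation`.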
/// <summary>
/// Queries Azure table storage until results are returned or a timeout is thrown
/// </summary>
/// <param name="appenderName">name of the log4net appender</param>
/// <param name="partitionKey">a partition key to begin the search from (can be null)</param>
/// <param name="rowKey">a row key to begin the search from (can be null)</param>
/// <param name="hostName">host name to filter on</param>
/// <param name="loggerName">logger name to filter on</param>
/// <param name="minLevel">minimum logger level to filter on</param>
/// <param name="message">message text to filter on</param>
/// <param name="sessionId">session id to filter on</param>
/// <returns>the first batch of matching log entities, or an empty array</returns>
internal LogTableEntity[] ReadLogTableEntities(string appenderName, string partitionKey, string rowKey, string hostName, string loggerName, Level minLevel, string message, string sessionId)
{
    LogTableEntity[] logTableEntities = new LogTableEntity[] { }; // default return value

    CloudTable cloudTable = this.GetCloudTable(appenderName);

    if (cloudTable == null)
    {
        return logTableEntities;
    }

    int take = 50; // default take for the Azure query

    bool hostNameWildcardFiltering = !string.IsNullOrWhiteSpace(hostName) && !IndexService.Instance.GetMachineNames(appenderName).Any(x => x == hostName);
    bool loggerNameWildcardFiltering = !string.IsNullOrWhiteSpace(loggerName) && !IndexService.Instance.GetLoggerNames(appenderName).Any(x => x == loggerName);

    // local filtering function applied to the returned Azure table results
    Func<LogTableEntity, bool> customFiltering = (x) => true; // default (no custom filtering performed)

    // check whether custom filtering (in C#) is required in addition to the Azure query
    if (hostNameWildcardFiltering || loggerNameWildcardFiltering || !string.IsNullOrWhiteSpace(message)) // message filtering is always done in C#
    {
        customFiltering = (x) =>
            (string.IsNullOrWhiteSpace(hostName) || (x.log4net_HostName != null && x.log4net_HostName.IndexOf(hostName, StringComparison.InvariantCultureIgnoreCase) > -1))
            && (string.IsNullOrWhiteSpace(loggerName) || (x.LoggerName != null && x.LoggerName.IndexOf(loggerName, StringComparison.InvariantCultureIgnoreCase) > -1))
            && (string.IsNullOrWhiteSpace(message) || (x.Message != null && x.Message.IndexOf(message, StringComparison.InvariantCultureIgnoreCase) > -1));

        // increase the take, to account for customFiltering further reducing the dataset
        take = 1000;
    }

    // build the Azure table query
    TableQuery<LogTableEntity> tableQuery = new TableQuery<LogTableEntity>()
        .Select(new string[] { // reduce the data fields returned from Azure
            "Level",
            "LoggerName",
            "Message",
            "EventTimeStamp",
            "log4net_HostName"
        });

    if (!string.IsNullOrWhiteSpace(partitionKey))
    {
        tableQuery.AndWhere(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.GreaterThanOrEqual, partitionKey));
    }

    if (!string.IsNullOrWhiteSpace(rowKey))
    {
        tableQuery.AndWhere(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan, rowKey));
    }

    if (minLevel != Level.DEBUG)
    {
        // a numeric comparison would be better, but log4net levels and the enum levels don't match
        switch (minLevel)
        {
            case Level.INFO: // show all except debug
                tableQuery.AndWhere(TableQuery.GenerateFilterCondition("Level", QueryComparisons.NotEqual, Level.DEBUG.ToString()));
                break;

            case Level.WARN: // show all except debug and info
                tableQuery.AndWhere(TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("Level", QueryComparisons.NotEqual, Level.DEBUG.ToString()),
                    TableOperators.And,
                    TableQuery.GenerateFilterCondition("Level", QueryComparisons.NotEqual, Level.INFO.ToString())));
                break;

            case Level.ERROR: // show only error or fatal
                tableQuery.AndWhere(TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("Level", QueryComparisons.Equal, Level.ERROR.ToString()),
                    TableOperators.Or,
                    TableQuery.GenerateFilterCondition("Level", QueryComparisons.Equal, Level.FATAL.ToString())));
                break;

            case Level.FATAL: // show fatal only
                tableQuery.AndWhere(TableQuery.GenerateFilterCondition("Level", QueryComparisons.Equal, Level.FATAL.ToString()));
                break;
        }
    }

    if (!loggerNameWildcardFiltering && !string.IsNullOrWhiteSpace(loggerName))
    {
        tableQuery.AndWhere(TableQuery.GenerateFilterCondition("LoggerName", QueryComparisons.Equal, loggerName));
    }
    else
    {
        // HACK: ensure index entities are not returned
        tableQuery.AndWhere(TableQuery.GenerateFilterCondition("LoggerName", QueryComparisons.NotEqual, string.Empty));
    }

    if (!hostNameWildcardFiltering && !string.IsNullOrWhiteSpace(hostName))
    {
        tableQuery.AndWhere(TableQuery.GenerateFilterCondition("log4net_HostName", QueryComparisons.Equal, hostName));
    }

    if (!string.IsNullOrWhiteSpace(sessionId))
    {
        tableQuery.AndWhere(TableQuery.GenerateFilterCondition("sessionId", QueryComparisons.Equal, sessionId));
    }

    tableQuery.Take(take);

    TableContinuationToken tableContinuationToken = null;
    TableQuerySegment<LogTableEntity> response;

    do
    {
        // single Azure table storage request (blocking)
        response = cloudTable.ExecuteQuerySegmented(tableQuery, tableContinuationToken);

        logTableEntities = response.Results.Where(x => customFiltering(x)).ToArray();

        tableContinuationToken = response.ContinuationToken;
    }
    while (!logTableEntities.Any() && tableContinuationToken != null);

    return logTableEntities;
}
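The `do...while` loop in `ReadLogTableEntities` pages through query segments, applying the client-side filter to each page, and stops at the first page that still has results (or when the continuation token runs out). Stripped of the Azure specifics, the control flow reduces to this pattern (a Python sketch over a generic paged source):

```python
def first_matching_page(pages, predicate):
    """Fetch pages one at a time; return the first page that still has
    results after client-side filtering, or [] when pages are exhausted."""
    for page in pages:
        hits = [item for item in page if predicate(item)]
        if hits:
            return hits
    return []
```

Note that, like the C# loop, this returns only the first non-empty filtered page rather than accumulating matches across all pages — later pages are left for the caller to request via the partition/row key cursor.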