Input Data/Processing Flow
The flow of data through the batch loader is illustrated below. The boxes in this diagram represent the processing stages a record from an activity input passes through, and each box maps to a class in nxproclib.

Processing Description
- The raw record read from the input is passed to the DataLayoutProcessor. This processor is responsible for parsing the record according to the data layout specification and resolving all derived fields defined in the layout. Note that derived fields are resolved on first reference, so fields that are unused by a particular mapping add no overhead to the processing.
- The DataMappingProcessor assembles an input set by requesting field values from the DataLayoutProcessor as specified in the data mapping. An input set consists of the groups and their text values, the statically mapped metrics and their values, the discovered metrics with both their values and metric names, and the list of values to be saved as detail data.
- The input set is passed to the BatchProcessor. This processor resolves the groups to their internal IDs, creating new groups as needed. It also maintains an accumulator for each unique group combination, accumulating metric values at both the daily and interval levels for the batch as a whole. As each record is processed, its detail data is handed off to the detail writer for saving to the MongoDB detail collection.
- Once the final record has been processed, the commit process is initiated. This process ensures all detail records have been written, then saves the accumulated metric data to the daily and interval collections in MongoDB. If the application is configured to aggregate on every batch, aggregation is then invoked to further aggregate the data to a smaller set of groups.
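The resolve-on-first-reference behavior of the DataLayoutProcessor can be sketched as a small cache of field values: a derived field is computed only the first time a mapping asks for it, so unreferenced fields cost nothing. This is a minimal Python sketch with assumed names (the actual nxproclib API may differ).

```python
# Hypothetical sketch of lazy derived-field resolution. Class and method
# names here are illustrative, not taken from nxproclib.

class DataLayoutProcessor:
    def __init__(self, raw_record, derived):
        self.raw = raw_record      # parsed raw fields, e.g. a dict per record
        self.derived = derived     # field name -> callable that derives the value
        self._cache = {}           # values resolved so far for this record

    def field(self, name):
        if name in self._cache:
            return self._cache[name]
        if name in self.raw:
            value = self.raw[name]             # plain field: read directly
        else:
            value = self.derived[name](self)   # derived: resolved on first reference
        self._cache[name] = value
        return value


# Example: a derived field built from two raw fields.
derived = {
    "full_name": lambda p: p.field("first") + " " + p.field("last"),
}
proc = DataLayoutProcessor({"first": "Ada", "last": "Lovelace"}, derived)
print(proc.field("full_name"))  # -> Ada Lovelace
```

A mapping that never requests "full_name" never triggers the lambda, which is the property the bullet above describes.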
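The BatchProcessor's accumulators can be pictured as totals keyed by the unique group combination, maintained at both the daily and the interval level until the commit writes them out. The sketch below is an assumption-level illustration (names and shapes are invented, not the nxproclib implementation).

```python
# Hypothetical sketch of per-batch metric accumulation: one bucket per
# unique group combination at the daily level, and one per combination
# plus interval at the interval level.
from collections import defaultdict

class BatchAccumulator:
    def __init__(self):
        # group_ids -> metric name -> running total
        self.daily = defaultdict(lambda: defaultdict(float))
        # (group_ids, interval) -> metric name -> running total
        self.interval = defaultdict(lambda: defaultdict(float))

    def add(self, group_ids, interval, metrics):
        day_bucket = self.daily[group_ids]
        iv_bucket = self.interval[(group_ids, interval)]
        for name, value in metrics.items():
            day_bucket[name] += value
            iv_bucket[name] += value


acc = BatchAccumulator()
acc.add(("us", "web"), 9, {"hits": 2})
acc.add(("us", "web"), 10, {"hits": 3})
print(acc.daily[("us", "web")]["hits"])           # -> 5.0 (daily total)
print(acc.interval[(("us", "web"), 10)]["hits"])  # -> 3.0 (one interval)
```

At commit time, each bucket would map to one document upsert in the corresponding daily or interval collection, which is why the whole batch can be written in a single pass at the end.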