Source to Fundamental Data

Every fundamental data object can be traced to its original source through a set of transactions (see Transactions). Each transaction produces preliminary data that is also stored in the database. The chain of transactions, together with the preliminary data produced by each transaction, gives a trace of how the original source data was transformed to produce the final fundamental data. Associated with this chain of transactions is a unique keyword, chosen by the user, that should indicate the data being read in. The preliminary data created by the initial transactions are united by this keyword within the database, which facilitates the search for prerequisite data. This keyword labels only the preliminary data; the final interpreted data is stored under the collection label of the fundamental data set of this data type. The fundamental data used for calculations can be the result of several chains of transactions.
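As an illustration, the following is a minimal sketch in Python of how such a chain of transactions and its associated keyword could be represented. All class and field names here are hypothetical and are not taken from the actual implementation.

```python
# Hypothetical sketch: a chain of transactions linking fundamental data to its source.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PreliminaryData:
    keyword: str       # user-chosen keyword uniting all preliminary data of one chain
    content: str       # raw or partially processed data stored in the database
    produced_by: str   # identifier of the transaction that produced it

@dataclass
class Transaction:
    name: str
    inputs: List[str]  # identifiers of the data objects read by this transaction
    outputs: List[str] # identifiers of the preliminary data it produced

@dataclass
class FundamentalData:
    collection_label: str                   # label of the fundamental data set of this type
    chain: List[Transaction] = field(default_factory=list)  # trace back to the source

def trace_to_source(data: FundamentalData) -> List[str]:
    """Walk the transaction chain from the fundamental data back to the original source."""
    return [t.name for t in reversed(data.chain)]
```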

Typically, the source of fundamental data starts with a formatted file created from literature values. JThermodynamicsCloud is based on earlier versions of the software (see Section Evolution) and their established data formats. Typically, each file contains an entire set of data introduced by a particular literature collection. For example, the Benson rules found in the tables of the appendix of Benson's book have been translated to such files. These original files originate, for example, from the THERGAS software implementation. However, improvements on these values, bringing either more accuracy or additional functional groups, have since been published and can be added to the fundamental data used in the calculation. The data from such publications is transcribed to the proper data format. The first transaction on the way to creating fundamental data reads these (text) files and stores them. Having the file in storage facilitates the ultimate connection between a piece of fundamental data and its original source. In addition to reading the file, the transaction can also include supplementary data, for example, bibliographic references (web sites or publications). This bibliographic record follows the data through to the final fundamental data.
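A minimal sketch, under assumed names, of what this first transaction could look like: the source file is read, stored verbatim, and tagged with the chain keyword and a bibliographic reference.

```python
# Hypothetical sketch of the first transaction: read a formatted source file,
# store it verbatim, and attach a bibliographic reference to the stored record.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class StoredSourceFile:
    keyword: str    # user-chosen keyword for this chain of transactions
    filename: str
    raw_text: str   # the file is kept verbatim to preserve the link to the source
    reference: str  # bibliographic reference (web site or publication)

def read_source_file(path: str, keyword: str, reference: str) -> StoredSourceFile:
    """Read the formatted text file and bundle it with its bibliographic record."""
    raw = Path(path).read_text(encoding="utf-8")
    return StoredSourceFile(keyword=keyword, filename=Path(path).name,
                            raw_text=raw, reference=reference)
```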

Since the text file contains more than one piece of fundamental data, the file is parsed into blocks, where each block represents one piece of fundamental data. This is treated as an extra transaction to separate the two processes of parsing and interpreting: if either one fails, troubleshooting is confined to that step.
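The parsing transaction could be sketched as follows; the block separator used here is an assumption, since the actual formats follow the established conventions of the earlier software.

```python
# Hypothetical sketch of the parsing transaction: split the stored file into
# blocks, one block per piece of fundamental data. The blank-line separator is
# an assumption for illustration only.
from typing import List

def parse_into_blocks(raw_text: str, separator: str = "\n\n") -> List[str]:
    """Split the raw file into blocks; each non-empty block is one fundamental data item."""
    return [block.strip() for block in raw_text.split(separator) if block.strip()]
```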

The final step interprets each fundamental data block and stores the result under the accumulated collection label of the fundamental data set of the given type. The new data can be added to a set that already contains data objects. However, if some of the data in the new set is the same as existing data, as recognized by the unique name, then the data to be replaced is moved to a unique directory and the new data is stored in the current fundamental set. Both the location of the moved replaced data and the new data are stored in the transaction object for reference. The general principle is that no data is deleted or lost. This procedure enhances accountability and traceability by keeping all data, and it can also be used to reverse the transaction if deemed necessary.
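A hedged sketch of this replacement policy follows. The directory layout and function names are assumptions, but the key point matches the text: existing data with the same unique name is archived in a uniquely named directory rather than deleted, and both locations are recorded in the transaction object.

```python
# Hypothetical sketch of the replacement policy: data with the same unique name
# is never deleted; it is moved to a uniquely named directory, and both the new
# location of the replaced data and the new data are recorded for the transaction.
import shutil
import uuid
from pathlib import Path

def store_with_archive(new_item: Path, collection_dir: Path, archive_root: Path) -> dict:
    target = collection_dir / new_item.name
    record = {"new_data": str(target), "replaced_data": None}
    if target.exists():                                  # same unique name already present
        archive_dir = archive_root / uuid.uuid4().hex    # unique directory for the old data
        archive_dir.mkdir(parents=True)
        shutil.move(str(target), str(archive_dir / target.name))
        record["replaced_data"] = str(archive_dir / target.name)
    shutil.copy2(str(new_item), str(target))             # store the new data in the current set
    return record                                        # kept in the transaction object
```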