Super Archeological Layering
In modern discussions about software systems and the analysis of those systems (much of this discussion is related to security problems), there is a conversation about the data (and all of the data in particular), how all this data fits together, and how it is used. Security depends on our understanding of all the relationships between the data entities of the system. To me the discussion is always rummaging through the territory of Formation Data Context without a formal methodology for doing so.
Formation Data Context is a concept describing how the data and the data relationships are formed in the system. These relationships need to be understood so that, in programming the computer, we can manipulate the data and its relationships, secure the data, and make decent predictions about what is happening and what the results will be.
In hermeneutics there is a process called form analysis, or analysis of form. The idea is to study the forms the textual information passed through as it was being used and developed. Was it saved as history? Was it used as some kind of liturgy, religious or secular (approaching the king or the judge)? Was it used in common meetings of people? Various texts could have grown through these forms. A particularly important consideration is the reason the text was put in written form and not just verbally passed down. In earlier times only a few could read and write, so text was special. It is significant to know when and why a text was put in written form.
Now we need to carry this analysis into the digital form, both for controlled security and for smooth transitions. Why is this data being digitized? Then, what forms will this data be part of in its digital use? And how do we secure the data for our purposes? Knowing these provides a better understanding of what users are using this data for, so the programmer can understand these uses from the beginning rather than tacking them on later with much extra work. The important difference here is that we are looking to define details of all phases of the process; the original form analysis was looking to define, in particular, the “first use” of the form. The digital process uses the concepts of language action developed by Habermas. Simply put, the data forms are developed and defined based on their usefulness. This is augmented by some archeological layering per Foucault.
In the real world, data or information tends to be related to real things: temperature, clouds, rain, miles traveled, dollars spent, and so on. There is a reality associated with the data. The data within the computer can be associated with such realities outside the computer, but inside the computer the relationship between the real and the data is more distant.
Data comes into a computer system through a user interface. This is the first form of the data. It arrives as some kind of stream of information from a user or a device. The original form is often quite different from the form needed in the application. The changes needed for the input may appear simple, but the process must consider all possible combinations that might appear in the stream.
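The point about considering all possible combinations in the stream can be sketched in a few lines. This is a minimal illustration, not taken from the text: the field (a temperature reading) and the accepted range are assumptions chosen only to show the shape of the first transformation of the data's form.

```python
def parse_reading(raw: str) -> int:
    """Convert one raw line from the input stream into the integer
    form the application needs.

    The stream may carry whitespace, empty lines, non-numeric junk,
    or out-of-range values, so each combination is checked explicitly
    before the value is accepted. (The range is a made-up assumption
    for illustration.)
    """
    text = raw.strip()
    if not text:
        raise ValueError("empty input")
    try:
        value = int(text)
    except ValueError:
        raise ValueError(f"not an integer: {raw!r}")
    if not -100 <= value <= 150:
        raise ValueError(f"out of expected range: {value}")
    return value

print(parse_reading(" 76 "))  # 76
```

The user's form (a padded text line) and the application's form (a validated integer) are different things, and the conversion between them is where every possible input combination must be handled.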
Within the computer the data has four basic characteristics: type, value, address, and function. The data has a type (which is extremely important in digital information, because the type defines how the data can be processed), say, here, an integer. The data has a value, say 76. The fact that this is degrees, or speed, or height is actually a separate data item.
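The observation that the unit is a separate data item from the value can be made concrete. This is a hypothetical sketch; the record name and unit string are assumptions, not from the text.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    value: int   # the value, e.g. 76; its type constrains processing
    unit: str    # "degrees" is a separate data item, not part of the value

m = Measurement(value=76, unit="degrees")
print(type(m.value).__name__)  # int: the type defines what processing applies
print(m.value, m.unit)         # 76 degrees
```

The integer 76 carries no knowledge of being degrees; only by pairing it with the separate unit item does the program know what the value means in the real world.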
In the computer an important characteristic of data is the address, where the data is “kept.” When we first worked with computer data, the importance of the “address” of the data was less well understood. But knowing the address is the only way of finding the data in the computer. Often the address is “held” in the name the programmer gives to the site or place of the data. This makes finding the data “easy.” But there are more complex ways to keep track of the address of the data that at first seem elusive, yet make finding and using large amounts of data easier for the computer.
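The idea that a name “holds” the address of the data can be shown directly. In Python the raw memory address is hidden, so `id()` (an object's identity) stands in for it in this sketch; the variable names are illustrative assumptions.

```python
data = [76, 80, 68]
alias = data            # a second name holding the same "address"

# Both names find the same data at the same place.
print(id(data) == id(alias))   # True

copy = list(data)       # same values, but kept at a different address
print(id(data) == id(copy))    # False
print(copy == data)            # True: equal values, distinct places
```

Two names that hold the same address reach the very same data; a copy has equal values but lives elsewhere, which is exactly the distinction between value and address.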
As we use a lot of data in a given process, it is advantageous to have that data together in some kind of form. The form can be a list, a table, a page, an index, or an object. Such a form allows us to move or reference the data that is needed together. Once it is referenced as part of a collection, we can deal with the individual items more quickly and smoothly in the process development. Such structures are necessary for workable and efficient data processes, and devising them is part of developing efficient and effective data processes. Analysis of form, from a hermeneutic approach, allows us to develop a more complete understanding of how the forms are associated and thus how they can be developed and used more effectively.
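A few of the forms named above can be sketched side by side. The city names and readings are invented for illustration only.

```python
# A list: an ordered collection that moves together as one form.
readings = [76, 80, 68]

# A table/index: individual items reached directly by a key.
by_city = {"Hilo": 76, "Kona": 80, "Lihue": 68}

# A record: related items of different kinds kept together.
record = {"city": "Hilo", "temp": 76, "unit": "degrees"}

print(max(readings))      # 80: operate on the whole form at once
print(by_city["Kona"])    # 80: reach one item through the index
print(record["city"])     # Hilo
```

Each form ties the data together in a different way, and choosing among them is exactly the design work the paragraph describes.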
Object-oriented design establishes a structure for data that is meant to be more in tune with real events and activities. The use of inheritance, encapsulation, overloading, and polymorphism creates an object that has personality. The functions of the structured design methodology could not be understood as things; they were only functions. An object can be understood as a thing. This allows a more native recognition of an object's capabilities and the ability to predict the outcome when the object is reused. This methodology allows a more hermeneutic approach rather than pure functional decomposition.
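A small sketch can show an object being understood as a thing rather than a bare function. The class names and the stub readings are assumptions made for illustration.

```python
class Sensor:
    """A thing with state and behavior bundled together, so its
    outcome on reuse is predictable."""

    def __init__(self, name: str):
        self._name = name          # encapsulated state

    def read(self) -> int:
        raise NotImplementedError  # each kind of sensor defines this


class Thermometer(Sensor):         # inheritance: a Thermometer is a Sensor
    def read(self) -> int:
        return 76                  # stub value for illustration


class Odometer(Sensor):
    def read(self) -> int:
        return 12000               # stub value for illustration


# Polymorphism: each object is treated as a thing with a known
# capability, not decomposed into loose functions.
for sensor in (Thermometer("temp"), Odometer("miles")):
    print(sensor.read())
```

The caller recognizes what a `Thermometer` or an `Odometer` can do from what kind of thing it is, which is the “native recognition” of capabilities the paragraph points to.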
As we develop our Formation Data Context analysis, we need to tie the data together through archeological layering. Though the system may not be a legacy system, there are legacies in the process being analyzed. And as the system is analyzed, the relationships between data will be placed more accurately using archeological layering than with more structured methodologies. The definition of data and data structures must also fit the concepts of Habermas, that is, usefulness (pragmatics). But the usefulness is not just inside the system; users of the system are part of the pragmatics.
Dr. Jerome Heath
Google Books: Hermeneutics in Agile System Development
Seven Covers