The type of data collected will largely dictate the type of analysis that is done – if there are lots of numbers you will want to crunch them in a spreadsheet or a heavyweight data analysis tool. If there is a lot of text you will try to read it all and then wonder how to make sense of it.
Having gone to the effort of identifying evaluation questions, appropriate indicators and then gathering data, it is very important to exercise some quality control over the collation, accuracy and security of the data. Remember to use as simple a technique as possible to collate and organise the data – use a spreadsheet rather than a database (unless you are, or have access to, a data analyst). Check the data for consistency and accuracy as it is entered, preferably by someone other than the person doing the data entry. Use checksums and other data validation techniques where relevant.
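For instance, a short Python sketch along the following lines can check consistency as data is collated (the file name, column names and valid values here are hypothetical, chosen only for illustration):

```python
import csv

# Illustrative validation rules for survey responses collated in a CSV file.
# The file name, column names and valid ranges are assumptions, not a standard.
VALID_GENDERS = {"female", "male", "other", "prefer not to say"}

def validate_row(row, line_no):
    """Return a list of problems found in one collated response."""
    problems = []
    try:
        age = int(row["age"])
        if not 16 <= age <= 100:
            problems.append(f"line {line_no}: age {age} outside expected range")
    except (KeyError, ValueError, TypeError):
        problems.append(f"line {line_no}: age missing or not a number")
    if row.get("gender", "").strip().lower() not in VALID_GENDERS:
        problems.append(f"line {line_no}: unrecognised gender value")
    return problems

with open("responses.csv", newline="") as f:
    for line_no, row in enumerate(csv.DictReader(f), start=2):  # line 1 is the header
        for problem in validate_row(row, line_no):
            print(problem)
```

Even a simple check like this, run each time new responses are added, catches transcription slips far earlier than a review at the end of data collection.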
With quantitative data it is better to wait until you have an appropriate sample size before trying to draw inferences or conclusions. Qualitative data, by contrast, benefits from a gather-and-code approach in which each response adds new insights to your analysis and understanding. But qualitative data is notoriously difficult to analyse, even with access to software such as NVivo or concept mapping tools. A coding frame will help you to structure the analysis and identify further areas for investigation.
A coding frame is a simple table of tags or labels and their definitions in context. These codes are used to assign meaning to the descriptive information gathered through the data collection. They are usually attached to different-sized ‘chunks’ of text such as words, phrases, sentences and paragraphs. They can also be applied to other media such as pictures, photographs, audio files and video clips.
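As a sketch of the idea, a coding frame can be held as a simple lookup of tags to definitions, with each coded ‘chunk’ recorded alongside the tag applied; the tags and responses below are invented for illustration rather than a recommended frame:

```python
from collections import Counter

# Illustrative coding frame: tag -> definition in context.
coding_frame = {
    "barrier": "an obstacle reported to adopting the tool",
    "benefit": "a perceived gain from using the tool",
    "training": "a reference to support or skills needs",
}

# Coded 'chunks' of qualitative data: (chunk of text, tag applied).
coded_chunks = [
    ("I never had time to learn the interface", "barrier"),
    ("Marking is much faster now", "benefit"),
    ("A workshop at the start would have helped", "training"),
    ("The menus were confusing at first", "barrier"),
]

# Tallying the codes highlights emerging themes and gaps in the frame.
tally = Counter(tag for _, tag in coded_chunks)
for tag, count in tally.most_common():
    print(f"{tag} ({coding_frame[tag]}): {count}")
```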
The analysis of evaluation data is not simply about reporting findings and letting the results speak for themselves. The key difference from conventional research is the value element – the results need to be interpreted in terms of context and the stakeholders’ views. This does not mean compromising findings by skewing results to accommodate vested interests, but it does require an integrated and balanced approach. When stakeholders agree that the conclusions drawn are justified, they will be more inclined to make use of the evaluation results. Differing views may complicate this, but involving stakeholders early in the evaluation and including their views should help a consensus to be reached.
Categorising data in terms of key topics of interest and emerging themes is an essential element of qualitative analysis. For quantitative findings, in addition to simple descriptive statistical techniques, stratification (for example by demographic variables of interest such as age, gender, subject area and institutional type) can help to contextualise the results.
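A minimal sketch of stratification, assuming the pandas library and an invented set of results (the column names and values are hypothetical):

```python
import pandas as pd

# Invented survey results; column names and values are for illustration only.
df = pd.DataFrame({
    "age_group":   ["18-24", "18-24", "25-34", "25-34", "35+", "35+"],
    "institution": ["college", "university", "college",
                    "university", "college", "university"],
    "score":       [3.2, 4.1, 3.8, 4.5, 2.9, 3.6],
})

# Simple descriptive statistics for the whole sample...
print(df["score"].describe())

# ...and the same measure stratified by a demographic variable of interest.
print(df.groupby("age_group")["score"].agg(["mean", "count"]))
```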
There may be various opportunities to compare findings, both within the project and in relation to other similar areas. For example, a new tool can be piloted with different cohorts to ascertain usage variables, and the results compared both with each other and with related (external) studies. You may have gathered baseline data earlier in the evaluation as an aid to identifying change following the implementation of a tool or technique; a comparison can then be made between the starting position and the new state following the activity.
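As a hedged sketch of such a baseline comparison, assuming paired before-and-after scores for the same participants (the figures are invented, and scipy’s paired t-test is just one of several suitable techniques):

```python
from scipy import stats

# Invented paired scores for the same participants before and after
# a new tool was introduced.
baseline  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.3, 2.9, 3.4]
follow_up = [3.6, 3.2, 3.9, 3.1, 3.0, 3.8, 3.3, 3.7]

# Mean change from the starting position to the new state.
changes = [after - before for before, after in zip(baseline, follow_up)]
print(f"mean change: {sum(changes) / len(changes):.2f}")

# A paired t-test suggests whether the change is unlikely to be chance
# (it assumes roughly normal differences; a real study needs a larger sample).
t_stat, p_value = stats.ttest_rel(follow_up, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```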
An obvious, but sometimes forgotten, comparison that is always worth making is between intended outcomes and actual outcomes. Did the project achieve what it set out to do, and what lessons have been learned? Did the resulting outputs achieve the intended outcomes, or did something else happen? In development projects it is as valuable to learn from what did not work as from what did, even though this may be regarded as failure – something has gone wrong, fallen short of expectations or produced undesirable consequences.