
Appendix G – Using ‘Compendium’

As described in Section 3.2, some of the significant elements of GTM include data analysis, coding (of different types), categorisation and memoing. There are several software applications designed to support these kinds of tasks, which come under the general heading of CAQDAS (computer-assisted qualitative data analysis software). Proprietary titles in this field include NVivo, ATLAS.ti, N6, Kwalitan and MAXqda, but these applications tend to be rather expensive and, whilst they may be freely available on campus, that isn't particularly convenient given the extended periods likely to be needed when undertaking data analysis of this nature. Whilst casting around for a free alternative, I came across Compendium, an application not specifically designed for GTM, but offering several of the features that the CAQDAS Networking Project advises a CAQDAS application should have:
  • Content searching tools
  • Linking tools
  • Coding tools
  • Query tools
  • Writing and annotation tools
  • Mapping or networking tools
Only a brief induction was needed to have the software up and running and structuring ideas. I used the application in two ways:
  1. To begin to plan and organise the structure of my dissertation, particularly during background reading which would ultimately contribute to the literature review. This phase also allowed me to develop greater facility with the application before I needed it for the more crucial analysis phase.
  2. During analysis as a data store within which incoming data could be coded, categorised, linked and annotated.
It is regarding this second phase that GTM began in earnest and the following observations refer to that period.


Here we see an example of the Compendium map summarising the interview with O. After the interview, comments made by participants/respondents were entered onto the map, although not verbatim (further details in Section 4.1). Images can be entered onto the map as nodes, so here we can see some of the images which were introduced to stimulate discussion and the comments which arose from them. Each yellow node on the map represents a comment made by the respondent; the substance of that comment can be entered into the body of the node, to be revealed later simply by hovering over the asterisk:


The metadata appended to a comment can take several forms. The node symbol itself can be ascribed meaning, though the default portfolio of symbols accompanying the package is quite narrow. The portfolio can be extended manually, though I chose not to do this, using symbols simply to distinguish between respondents' comments, my additional questions, and memos (analytical and procedural). The process of analysis and categorisation comes after data entry, so I didn't feel it appropriate to attempt to impose meaning at such an early stage. The node can also be named, which might lend itself to coding, though a comment from a respondent might be coded in several different ways and the naming option didn't easily permit that. I therefore used the name simply as a quick reference phrase which, when viewing the whole map, helped me identify potential areas of interest or interlinking. For coding I used the tags feature instead, which allows multiple tags – or in my case 'codes' – to be attached to each comment. Once again, these can be viewed by hovering, as shown here:
(I chose to indicate in vivo codes by starting them with lower-case characters – ‘learning new things’ in this example)
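Compendium is a graphical tool, but the coding scheme described above can be sketched as a small data model. This is purely illustrative – the class, field and method names below are my own inventions, not Compendium's internals – but it captures the two points made in the text: a comment can carry multiple codes at once, and in vivo codes are distinguished by a lower-case initial character:

```python
from dataclasses import dataclass, field

@dataclass
class CommentNode:
    """Hypothetical model of a Compendium comment node:
    a short reference label, the full comment body, and a set of tags ('codes')."""
    label: str
    body: str
    tags: set[str] = field(default_factory=set)

    def code(self, *codes: str) -> None:
        # Multiple codes can be attached to the same comment.
        self.tags.update(codes)

    def in_vivo_codes(self) -> set[str]:
        # Convention from the text: in vivo codes start with a lower-case character.
        return {t for t in self.tags if t[:1].islower()}

node = CommentNode(
    label="Learning",
    body="I enjoy learning new things on the course.",  # invented example comment
)
node.code("learning new things", "Motivation")  # one in vivo code, one analyst code
```

Here `node.in_vivo_codes()` would return only `{'learning new things'}`, while both codes remain attached to the comment.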

Nodes in any concept map are joined by links, which I found useful first to indicate a particular thread or train of thought in the discussion, and subsequently, more analytically, to indicate comments which might be conceptually linked. Each link can also be annotated with a brief comment, which can serve as a reminder of why the link was created or as a note on its significance.

Rather than transcribe all the comments before commencing analysis, whenever an observation sprang to mind I added a memo node, linking it to the data which precipitated the idea. Each memo can also be tagged with codes where appropriate. Data nodes were tagged/coded both at the point of creation and subsequently during analysis.

As the data began to accumulate, analysis proceeded and categories began to emerge, it became necessary to create links from one map to another in order to indicate themes shared with other respondents. Compendium provides the facility to do just that, using reference nodes which link one map with another. It is also possible to link out completely by making the reference links standard weblinks, so that, if necessary, data from respondents can be linked with external research.
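The two kinds of reference described above – a link across to another respondent's map, and a weblink out to external material – can be sketched in the same illustrative style as before. Again, the class names, map titles and the example URL are hypothetical, chosen only to show the idea of a single reference node resolving to either target:

```python
class Map:
    """Hypothetical stand-in for a Compendium map (one per respondent)."""
    def __init__(self, name: str):
        self.name = name
        self.nodes: list = []

class ReferenceNode:
    """A reference that points either to another map or to an external URL."""
    def __init__(self, target):
        self.target = target  # a Map, or a URL string for a standard weblink

    def resolve(self) -> str:
        return self.target.name if isinstance(self.target, Map) else self.target

map_o = Map("Interview with O")
map_p = Map("Interview with P")

# A theme in O's map linked across to P's map...
cross = ReferenceNode(map_p)
map_o.nodes.append(cross)

# ...or out to external research via a standard weblink (invented URL).
external = ReferenceNode("https://example.org/related-study")
map_o.nodes.append(external)
```

The design point is that the same node type serves both purposes, so cross-respondent themes and external literature sit side by side on the map.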
Compendium tags view

As the number of codes increases, they can be tracked using the tags view. However, when I began to need to examine the codes on a more abstract level, or to offer some up to become categories, this feature proved somewhat limiting. It is possible to create and test different structures using folders within the tags view, but shifting codes between folders to try out tentative categories or themes proved rather onerous, which is why at this point I switched to Lino.it, as described in Section 4.1.
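The sorting step that proved onerous in the tags view amounts to repeatedly reassigning codes between tentative categories. A minimal sketch of that operation follows; the category names and all codes other than 'learning new things' are invented for illustration:

```python
# Tentative categories as folders of codes (names invented for illustration,
# except 'learning new things', the in vivo code mentioned earlier).
categories: dict[str, set[str]] = {
    "Motivation": {"learning new things", "career progression"},
    "Barriers": {"time pressure"},
    "Uncategorised": {"peer support"},
}

def move_code(code: str, src: str, dst: str) -> None:
    """Shift a code from one tentative category to another."""
    categories[src].discard(code)
    categories.setdefault(dst, set()).add(code)

# Trying out a different grouping: is 'peer support' really a motivator?
move_code("peer support", "Uncategorised", "Motivation")
```

Each such move is trivial here, but in the tags-view interface every reassignment was a manual drag between folders, which is what made exploring alternative groupings laborious.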


To what extent then did I find Compendium supported the different elements of GTM mentioned in Section 3.2?
  • It facilitates simultaneous analysis of data as it is transcribed.
  • It allows constant comparison of data with codes and concepts which are beginning to emerge.
  • Navigation between different strands of data from different respondents is straightforward and codes are stored in a common location, so building theoretical sensitivity becomes easier.
  • Memoing which informs the methodology and more importantly the analytical process is easily embedded with the data which generated it.
Where Compendium didn’t work as well for me was when I needed to adopt a more abstract view of the data and wanted to see the bigger picture as I attempted to work more conceptually and build theory. This might have been because I chose to put each respondent’s data in a different map. Had they all been in the same map, that overview might have been possible, though given the amount of data, I suspect the resulting map would have been too complex and convoluted. At that point, then, I needed an additional tool to achieve that. But perhaps that would also have been the case with other CAQDAS tools?

For me, then, and the workflow I found efficient, Compendium delivered around 80% of what I needed. Unfortunately, the remaining 20% was a rather significant aspect.