The research was conducted in two parts: A and B.
Part A quantitatively compared the performance of learners from National Quintile (NQ) 1–3 schools in the Western Cape that followed different LoLT models, using data from the WCED Systemic Testing of Language and Mathematics. These schools were studied because it is within them that the LoLT models this research sought to understand (straight-for-English and early-exit transitional models) were typically found, and because their teachers and learners are isiXhosa L1 speakers.
Part B adopted a mixed-methods approach (McMillan & Schumacher, 2010, p. 11): a sample of schools from Part A, representing each of the different LoLT models, was identified and visited in order to investigate how LoLT model formulation and implementation affected learner performance.
While Part A aimed to indicate which LoLT model was associated with higher learner performance, Part B aimed to explore the extent to which Part A’s findings reflected LoLT model implementation, as well as how this implementation affected performance.
Part A data was sourced from WCED records of Grade 3 and Grade 6 learner performance during annual systemic evaluations of learner proficiency in language and Mathematics. It was collected by WCED officials in 2012 and 2015 via the administration of standardised tests written by all learners in public schools throughout the Western Cape. This data was used because it provided a valid and reliable assessment of learner performance within the schools of interest to the research.
a) Population
Part A data was sampled to represent a population of SA schools characterised by being:
· Public primary schools within socio-economically disadvantaged contexts;
· Staffed by teachers who taught in a L2; and
· Attended by learners who learnt in a L2.
This population of schools, ‘Population A’, was relevant, because it was within such schools that the research aimed to find answers to the research problem: ‘How does LoLT affect learner performance?’
Finding these answers was valuable considering the size and demographic of learners who attended the population of schools of interest to the research in SA:
· 89,7% of black SA Grade 4–6 learners were reported by DBE (2010, p. 2) to be taught in a L2 in 2007, a percentage that has since increased (Taylor & Coetzee, 2013).
· 70,5% of black SA children lived in low-income households in 2011 (Stats SA, 2013, p. 14).
· 75% of black SAs had not completed high school in 2011 (Stats SA, 2012, p. 39).
b) Sample
The sample of learner performance data analysed during Part A, ‘Sample A’, was determined by a non-probability purposive sampling strategy (McMillan & Schumacher, 2010, p. 138) in order to ensure that the data was from schools characteristically similar to the extent that valid and statistically significant comparisons were possible (Babbie & Mouton, 2005, p. 478).
Sample A selection criteria were determined by specifying required school characteristics to be representative of Population A, allowing research that responded to the research problem in a valid manner. Sample A schools met the following criteria:
1. Administration by the same provincial education department, in this case the WCED;
2. A primary school;
3. Categorisation as NQ 1, 2 or 3;
4. No-fee status;
5. Implementation of one of two LoLT models: either a straight-for-English model or an early-exit transitional model;
6. Attendance by learners who were predominantly (>80%) isiXhosa L1 speakers; and
7. Staffing by teachers who were predominantly isiXhosa L1 speakers.
The procedure followed to determine Sample A was to select schools that fulfilled the sampling criteria. In applying the criteria above:
· Criterion no. 1 was satisfied automatically, as all of the systemic testing data was from WCED schools.
· Information to apply criteria nos. 2 to 6 was readily available within WCED data.
· Only isiXhosa L1 learner performance data was sampled.
· Application of criterion no. 7 required schools to be evaluated individually.
Table 3.1: Sample sizes per learner performance data set
The final sample of data was limited by the availability of schools meeting the sampling criteria and varied between the data sets analysed. After sampling according to the criteria, data from schools identified as outliers was removed (McMillan & Schumacher, 2010, p. 165). The sizes of the resultant sample data sets are reflected in Table 3.1 above. The way in which outlying data was identified is specified later in this chapter. Furthermore, the sizes of the Grade 6 data sets were affected by the removal of schools that did not have corresponding 2012 and 2015 data sets available.
Sample A was representative of Population A because of the comparable nature of the demographics of learners attending schools included within the sample and population.
Instruments used to measure learner performance were tests of language and Mathematics written by learners.
Tests were composed of age-appropriate assessment items that measured learner proficiencies in language and Mathematics in a reliable and valid manner. Tests were written in the LoLT of the learner: either English or isiXhosa.
Tests were standardised in various respects in order to allow a variety of valid comparisons of learner performance using the data produced (McMillan & Schumacher, 2010, p. 189):
· International comparisons. Test items were benchmarked against international assessment standards.
· Comparisons within the Western Cape. Only one version of tests was used for every learner in the province.
· Comparisons over time. Tests administered were standardised from year to year.
· Comparisons between grades. Test items were age-benchmarked.
· Comparisons between LoLT contexts. Language used in test items was standardised.
Owing to the standardised and controlled nature of the instrumentation employed during the WCED Systemic Testing of learner performance, the resultant data was eligible for use because it provided a valid indication of learner performance per LoLT context. This allowed valid comparisons of performance between LoLT contexts.
Part A research design was quantitative and descriptive, and made use of a comparative research method (McMillan & Schumacher, 2010, p. 222). This method was used to investigate the relationship between LoLT and learner performance by examining whether performance differences existed between L1 and L2 LoLT contexts.
Language and Mathematics performance in schools where L1 LoLT was used during the Foundation Phase (switching to L2 LoLT from Grade 4 onwards) was compared to performance from schools where L2 LoLT was used during the Foundation Phase (and throughout subsequent grades) in two ways:
1. Cross-sectional comparison (Babbie & Mouton, 2005, p. 92). Sample A data was used to compare learner performance from different LoLT contexts at particular points in time.
2. Longitudinal comparison (Babbie & Mouton, 2005, p. 93). Sample A data was used to compare the learner performance of roughly the same cohort of learners from different LoLT contexts over a period of time.
a) Data collection
Data collection in Part A was facilitated by the WCED. The researcher’s role in ‘data collection’ was to obtain the required WCED Systemic Testing Language and Mathematics learner performance data for Sample A. This data was provided as an average of learner scores per school per learner performance data set.
WCED systemic tests were written by learners under circumstances controlled by examiners officially appointed by the WCED and external to the schools. Performance data was captured from completed tests marked by similarly appointed examiners, during a process subjected to moderation checks for quality assurance purposes.
b) Data analysis
Sample A performance data (averaged learner scores per school) was analysed in a comparative manner via a process of steps and calculations.
Cross-sectional comparisons were made in the following manner. For each data set:
1. Data was grouped by LoLT context of origin. Two groups resulted from this as only two LoLT contexts were included in the data:
a. Schools that used L1 LoLT during the Foundation Phase (switching to L2 LoLT from Grade 4 onwards) – hereafter referred to as ‘FP L1 LoLT schools’; and
b. Schools that used L2 LoLT during the Foundation Phase (and throughout subsequent grades) – hereafter referred to as ‘FP L2 LoLT schools’.
2. The average (mean) score for each group was calculated.
3. The standard deviation (SD) of scores and the standard error of the mean (SEM) were calculated for each group.
4. The standard score (z-score) for each school in each group was calculated.
5. Schools whose scores lay 3 or more standard deviations from their group mean (i.e. standard scores of |z| ≥ 3) were excluded, and all prior calculations were re-performed.
6. A measure of the degree to which the difference of scores from the two groups was statistically significant was calculated using an unpaired t-test (results expressed as p-values) (McMillan & Schumacher, 2010, p. 300).
7. The mean score of each group was compared to determine which group was associated with greater learner performance. The difference in learner performance between the groups was evaluated for statistical significance at the 95% confidence level.
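The calculation steps above can be sketched in Python as follows. This is a minimal illustration rather than the researcher's actual procedure: the function names are hypothetical, and the p-values in step 6 would in practice be read from the t distribution (with n₁ + n₂ − 2 degrees of freedom) rather than computed here.

```python
from statistics import mean, stdev

def summarise(scores):
    """Mean, SD and SEM for one LoLT group of per-school average scores (steps 2-3)."""
    m, sd = mean(scores), stdev(scores)
    return m, sd, sd / len(scores) ** 0.5

def drop_outliers(scores, threshold=3.0):
    """Exclude schools whose score lies 3 or more SDs from the group mean (steps 4-5)."""
    m, sd, _ = summarise(scores)
    return [s for s in scores if abs((s - m) / sd) < threshold]

def pooled_t_statistic(a, b):
    """Unpaired (two-sample, pooled-variance) t statistic for the group
    comparison in steps 6-7; the corresponding p-value would be obtained
    from the t distribution with len(a) + len(b) - 2 degrees of freedom."""
    ma, sa, _ = summarise(a)
    mb, sb, _ = summarise(b)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

In use, the per-school averages of the FP L1 LoLT and FP L2 LoLT groups would each pass through `drop_outliers` before their means were compared via `pooled_t_statistic`.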
Longitudinal comparisons were similarly made. However, instead of comparing the performance of FP L1 LoLT schools to FP L2 LoLT schools on isolated occasions, the change in these groups’ performance over time was compared using 2012 Grade 3 data and 2015 Grade 6 data – reflecting the performance of approximately the same learner cohort. For each performance data set:
1. 2012 Grade 3 scores were matched to 2015 Grade 6 scores school by school. Any school score that could not be matched because of the absence of a corresponding score in either 2012 or 2015 was excluded. School scores that were determined to be outliers during prior cross-sectional comparisons were also excluded.
2. Matched scores were grouped by LoLT context of origin. Again, two groups resulted, as only two LoLT contexts were in the data: FP L1 LoLT and FP L2 LoLT schools.
3. The performance change between 2012 and 2015 was calculated school by school by subtracting each school’s 2012 score from its 2015 score.
4. The mean performance change for each group was calculated.
5. The standard deviation (SD) of performance changes and the standard error of the mean (SEM) were calculated for each group.
6. The standard score (z-score) of performance change for each school in each group was calculated.
7. Schools with standard scores (of performance change) 3 or more standard deviations from the mean of their group were excluded, and all prior calculations were re-performed.
8. A measure of the degree to which the difference in performance changes between the two groups was statistically significant was calculated using an unpaired t-test (results expressed as p-values) (McMillan & Schumacher, 2010, p. 300).
9. The mean performance change of each group was compared in order to determine which group was associated with greater performance over time. The difference in performance over time between the groups was evaluated for statistical significance at the 95% confidence level.
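The matching and differencing in steps 1 and 3 can be sketched as follows; the school IDs and scores are hypothetical, and the subsequent steps reuse the same mean, SD/SEM, outlier-exclusion and t-test calculations as the cross-sectional comparison.

```python
def performance_changes(scores_2012, scores_2015):
    """Match 2012 Grade 3 scores to 2015 Grade 6 scores school by school
    and compute each matched school's performance change (2015 minus 2012).
    Schools lacking a score in either year are excluded, as in step 1.
    Inputs are dicts mapping a school ID to its averaged score."""
    matched = scores_2012.keys() & scores_2015.keys()
    return {school: scores_2015[school] - scores_2012[school]
            for school in sorted(matched)}
```

The resulting per-school changes would then be grouped by LoLT context and compared in the same manner as the cross-sectional scores.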
Findings from these analyses are presented in Chapter 4, making use of tables and figures.
Part B adopted a mixed-methods approach, aimed at investigating the ways in which LoLT model formulation and implementation affected the learner performance analysed during Part A.
Part B data was sourced during May and June 2016 by the researcher from among research participants (Grade 3 teachers, heads of departments [HODs] and principals working at sampled schools) using information forms, questionnaires, and interviews.
This data was sourced in order to develop an understanding of how LoLT policy was formulated and implemented by the role-players directly involved in doing so within the research context of interest. This understanding was developed to establish the degree to which, and in what way(s), LoLT policy formulation and implementation affected performance.
a) Population
The population within which Part B was conducted (‘Population B’) was chosen so as to sample schools that had previously been researched, since the research context of interest was located within this population.
Population B was composed of the schools included within Sample A of Part A. Population B, therefore, was equivalent to Sample A. The parameters of the schools within Population B were the same as for Population A; see Section 3.1.1(a).
Population B comprised approximately 17 000 socio-economically disadvantaged isiXhosa L1 learners in approximately 130 NQ 1–3 (no-fee) schools.
b) Sample
The sample of schools researched during Part B, ‘Sample B’, was sampled using a non-probability purposive sampling strategy (Babbie & Mouton, 2005, p. 166). The sampling criteria employed aimed at identifying a sample that held the potential to produce findings that shed light on:
· How LoLT policy was formulated and implemented; as well as
· What effects LoLT policy formulation and implementation had on performance.
In view of this, and of the need for schools in Sample B to be valid contexts for comparative research, the sampling criteria employed were geared to result in a sample of schools that varied by LoLT model followed as well as by performance produced. As such, a school from Population B was eligible for inclusion in Sample B if it:
· Followed one of the two LoLT models implemented within Population B, either FP L1 LoLT or FP L2 LoLT; and
· Produced learner performance among the best or the worst of the schools within Population B following the same LoLT policy.
In addition to applying these criteria, the procedure that produced Sample B was guided by the need to include at least two schools from each LoLT context. The steps taken to determine Sample B were, therefore, the following:
1. Schools within Population B were grouped by LoLT model followed, resulting in two groups since all of the schools included within Population B followed one of two LoLT models, as mentioned previously.
2. The performance of each school was defined by the quartile that it fell into within each group. Grade 3 performance data from 2015 WCED Systemic Testing of Language was used for this purpose.
3. Using the aforementioned data, a minimum of one school performing in the 1st quartile and a minimum of one school performing in the 3rd or 4th quartile were selected from each group. This was done to vary the type of school visited by taking into account factors affecting performance.
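The quartile grouping in steps 2 and 3 can be sketched as follows. This is an illustrative reading only, assuming that the 1st quartile denotes the best-performing schools; the function name, school IDs and scores are hypothetical.

```python
def split_by_quartile(group_scores):
    """Rank one LoLT group's schools by 2015 Grade 3 language score and
    return (1st-quartile schools, 3rd/4th-quartile schools), from which
    at least one school each would be selected (step 3).
    `group_scores` maps a school ID to its score."""
    ranked = sorted(group_scores, key=group_scores.get, reverse=True)
    n = len(ranked)

    def quartile(i):
        # 1 = top quartile, 4 = bottom, by rank position
        return i * 4 // n + 1

    top = [s for i, s in enumerate(ranked) if quartile(i) == 1]
    bottom = [s for i, s in enumerate(ranked) if quartile(i) >= 3]
    return top, bottom
```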
From the sample of schools identified, several were visited by the researcher in order to invite their principals to participate in the research. Of the principals who indicated that they wished their schools to participate, five schools were selected to make up Sample B:
· Three FP L1 LoLT schools; and
· Two FP L2 LoLT schools.
Sample B was representative of Population B, and also of Population A, owing to the equivalence of learner and teacher demographics, socio-economic context and LoLT model at the schools included.
The fact that Sample B was representative of both Population A and B allowed the potential for the findings produced during Part B to hold relevance to similar schools within the Western Cape and throughout SA.
The instruments used during Part B, included in Appendices A, facilitated the collection of data that contributed to the research’s response to the research problem.
a) School Information Form
A form was employed to capture data pertaining to the nature of participant schools’ geographic and socio-economic context, learner body and teaching staff composition, educational inputs and outputs. Data captured by this form was used primarily to confirm the extent to which participant schools were comparable in terms of their ability to perform.
b) School Management Team Member and Teacher Information Forms
A second type of form was employed to collect school management team (SMT) member and teacher research participant data of a demographic and professional nature. Data collected by these forms was used to evaluate the extent to which the staff compositions of schools were comparably similar in terms of the degree to which they were able to facilitate performance via LoLT policy formulation and/or implementation.
c) Teacher Survey Questionnaires
Questionnaires were administered to research participants who taught. These were made up of closed-ended items that provided respondents with a combination of dichotomous and comparative rating scale response options (McMillan & Schumacher, 2010, p. 198) to questions probing how the LoLT affected teaching and learning in their classrooms. Six possible responses were provided per questionnaire item so that respondents could not respond in a neutral manner. The effect of LoLT on the following factors was surveyed:
· Teacher comfort and ability to teach using the LoLT;
· The extent of teachers’ and learners’ use of the LoLT;
· Teachers’ theoretical knowledge of using the LoLT, as well as their classroom practices in this regard; and
· Learner comfort, participation, and ability to learn using the LoLT.
The questionnaire gathered this data in order to comparatively explore how LoLT affected teaching and learning within FP L1 LoLT versus FP L2 LoLT schools according to teachers.
d) School Management Team Member and Teacher Interview Schedules
Standardised open-ended interview schedules (McMillan & Schumacher, 2010, p. 355) were employed face-to-face (Babbie & Mouton, 2005, p. 249) in order to gather data from participants on the nature of their school’s LoLT policy and their opinions of it. Questions probed each school’s LoLT policy specification, the manner in which LoLT policy was formulated and implemented, as well as the participants’ opinion on the suitability of the LoLT for performance.
The purpose of the interviews was therefore to systematically collect data on how LoLT policy affected performance within FP L1 LoLT and FP L2 LoLT schools, according to interviewees.
Interviews were structured in three parts:
1. Opening. This part established rapport; dealt with matters of consent and confidentiality; provided information on how the interview would be conducted; outlined the interview’s purpose; indicated its duration; and invited participants to seek clarity by questioning any matter covered during the interview at any stage.
2. Body. This part questioned participants on the following topics pertaining to LoLT policy: context, formulation, implementation and reflection.
3. Conclusion. This part aimed to summarise the interview and maintain rapport, as well as indicate what action would follow. As such, the researcher:
a) Explained that the interview was concluding and asked the interviewees if there was anything else that they would like to discuss;
b) Thanked the interviewees for their participation and welcomed any further communication from them; and
c) Asked the interviewees if they would be interested in receiving a copy of the completed dissertation, taking note of the responses.
Part B’s instruments were relevant to the research as their use facilitated the collection of data that, when analysed, generated valid understandings of how LoLT affected learner performance.
Part B’s research design made use of a mixed-methods approach. While the information form and survey questionnaire instruments were quantitative in nature and the interview schedules qualitative, both types of instrument were geared toward exploring the issue central to the research.
The interviews, which collected the bulk of Part B’s data, were used to evaluate in detail answers to questions relevant to the effect of LoLT on performance. Owing to the qualitative nature of the data collected, the research could explore the subtle and complex experiences of teachers and learners using the LoLT in order to understand how LoLT affected performance within the different LoLT contexts. This was done using comparative analyses in which data from L1 LoLT contexts was compared to data from L2 LoLT contexts.
Part B findings could not be used for purposes of generalisation, but could provide a better understanding of the effect of LoLT on performance within contexts of schooling comparable to Populations A and B.
a) Data collection
Numerous schools within Population B were visited and each school’s principal was invited to take part in the study. Of the schools that agreed to participate, appointments were made with five principals, setting a date and time when the researcher would return to their schools in order to collect data.
Part B’s instruments were administered in the order numbered below. The researcher was available to provide clarity throughout this process.
1. Information Letter and Consent Form
At each school, prior to research tool administration, each participant was provided with an Information Letter and Consent Form inviting them to participate in the research. The document was discussed by the participant and the researcher before the participant read and signed it. The document:
· Invited the participant to participate voluntarily in the research;
· Explained what the purpose of the research was;
· Specified what participation in the research would entail;
· Detailed the terms and conditions of participants’ participation;
· Addressed issues of ethics and confidentiality;
· Facilitated the participant’s signed acknowledgement of the project’s terms and conditions; and
· Formalised the participant’s consent to participate in the research via the administration of instrumentation.
Completed and signed forms were filed and stored. A scan or photocopy of each completed and signed form was provided to each participant.
2. Information Forms
The School Information Form was completed for each school by the researcher and the principal of that school. Participants completed either the Teacher Information Form or the SMT Information Form, depending on their role within the school. Participants were not required to write their names on these forms – and could remain anonymous.
3. Teacher Survey Questionnaires
The Teacher Survey Questionnaires were administered individually and in private to participants who were Grade 3 teachers. Participants were allowed to remain anonymous and so were not required to write their names on the questionnaire. Participants were encouraged by the researcher to seek clarity should they require further understanding of what any particular questionnaire item meant. Completed questionnaires were returned to the researcher.
4. Interview Schedules
Participants were interviewed by the researcher using the SMT Member or Teacher Interview Schedules. Interviews were conducted in private and on a one-on-one basis. During interview facilitation, the researcher adhered to the script and questions in the standardised open-ended interview schedules. Interviews were audio recorded.
b) Data analysis
Part B’s data analysis was guided by a general inductive data analysis process (McMillan & Schumacher, 2010, p. 323): the data was organised; organised data was placed into segments; segmented data was coded; observations were described; data was categorised; and patterns of data were developed as they emerged by using comparative research techniques. How the data was analysed is discussed in relation to each of the instruments used, below.
1. School Information Form
School information data from the sample schools was tabulated and evaluated against the Sample B sampling criteria in order to confirm their comparability. The categories of such data included:
a) Teachers’ and learners’ home language;
b) The LoLT used in Grades 1 to 3;
c) Class size; and
d) NQ rating.
Additionally, each school’s performance in WCED systemic language testing for the years 2012 to 2015 was tabulated and compared. To supplement this data, the socio-economic status (SES) of the municipal ward within which each school was located was also compared, using the SES index for each area provided by local government (Western Cape Government Provincial Treasury, 2013, p. 30).
The findings generated from data collected using the School Information Form were used during discussions of findings.
2. Teacher and School Management Team Member Information Forms
The data collected by the Teacher and SMT Member Information Forms was coded by item. The frequency of each response to each item was counted, tabulated and represented graphically using pie charts. The data processed in this way was used to compare the characteristics of teacher and SMT member research participants within L1 LoLT contexts to those within L2 LoLT contexts. This was done in order to reconfirm the comparability of the schools in Sample B.
3. Teacher Survey Questionnaire
The data collected using the survey questionnaire was used to compare how LoLT affected the nature of teaching and learning in L1 versus L2 contexts. The items comprising the survey questionnaire were coded and grouped by theme. The frequency of responses to each survey item was tabulated and represented as pie charts. Findings from the questionnaire are included within Chapter 4.
4. Interview Schedules
Audio recordings of the interviews conducted were transcribed word for word, resulting in 155 pages of interview transcriptions, approximately 65 000 words in length. These transcriptions were e-mailed to interviewees for their input and verification of accuracy.
During analysis, transcriptions were coded by item, theme and theme section (McMillan & Schumacher, 2010, p. 370) using the same codes used during the analysis of the data collected during the survey questionnaire. Coded transcriptions of interviews were then processed by a word-frequency counter that ranked the words and codes of the interview transcriptions from most to least frequently occurring. This information was tabulated by item, theme, and theme section, resulting in a document that provided various insights into the interview data collected, including the prevalence of each:
· Item among all items;
· Item within each theme;
· Item within each theme section;
· Theme among all themes;
· Theme within each theme section; and
· Theme section amongst all theme sections.
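The ranking of codes from most to least frequent described above can be sketched with a simple counter; the transcript IDs and code labels below are hypothetical examples rather than codes from the actual analysis.

```python
from collections import Counter

def rank_codes(coded_transcripts):
    """Count how often each code occurs across all coded interview
    transcriptions and rank codes from most to least frequent.
    `coded_transcripts` maps a transcript ID to its list of applied codes."""
    counts = Counter()
    for codes in coded_transcripts.values():
        counts.update(codes)
    return counts.most_common()  # [(code, frequency), ...], descending
```

The same ranking could then be repeated within each theme and theme section to derive the prevalence listings described above.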
These insights were used to structure as well as prioritise the presentation order of findings by prevalence in the next chapter. Following this structure and order, responses from interviews on items of significance from L1 LoLT schools were compared to those from L2 LoLT schools in order to investigate how LoLT affected the nature and quality of teaching and learning within the two LoLT contexts. Participants were quoted in order to illustrate findings. Tables and figures were also used.
Various factors potentially limited the extent to which the research findings produced during Parts A and B were valid. These are noted below.
During Part A, the following potential limitations were identified as threats to the validity of Sample A as consisting of comparable units:
· Inaccurate recording in WCED data of learners’ home languages within schools, resulting from learners’ parents / guardians falsely specifying learners’ home language as isiXhosa.
· Inaccurate evaluation of teachers’ home languages during sampling.
Relatedly, the validity of the comparisons made was potentially threatened by:
· The unaccounted-for existence of variables within comparison groups that affected performance; and
· Changes to the learner cohorts whose performance was examined longitudinally.
Regarding the instrumentation used in data collection: the presence of test item irregularities within the tests used, as well as limitations in test administration, marking and result capture processes, potentially threatened the validity of the data generated.
The dangers associated with the use of averages when data was analysed during Part A were noted as potentially limiting the validity of conclusions drawn.
During Part B, the use of Sample B for valid research was limited by its size, which in turn was limited by what was practicable for a single researcher. Additionally, the unknown existence of factors affecting learner performance within Sample B schools, beyond the LoLT formulation and implementation factors investigated by the research, could have undermined the validity of research findings, as such factors could not be taken into account.
The valid use of Part B instrumentation was limited by the extent to which data provided by participants was inaccurate. Inaccuracy of data provided by research participants may have been caused by, among other factors:
· Research participants’ inability to understand the language used by the instruments;
· Research participants’ human error completing instruments;
· Research participants’ inability to articulate responses to items within instruments; and
· Negative effects resulting from the unavoidable presence of the researcher during the data collection process (social desirability bias).
The validity of Part B findings was potentially limited by the researcher’s personal bias, limited skills, and idiosyncrasies.
The research accorded with:
· The UCT Code for Research Involving Human Subjects (UCT, 2012);
· The Statement of Values for the University of Cape Town and its Members (UCT, 2001);
· UCT statutes and policies.
In view of this accordance, permission to conduct the planned research was sought from, and approved by, the University of Cape Town’s Research Ethics Committee, the WCED, and the participants. See Appendices B for the documentation that facilitated the required permissions.