Assessment in an English for Engineering Course: A Case Study
Santosh Mahapatra, BITS Pilani Hyderabad Campus
Abstract
This paper presents a case study of how assessment is carried out in an English language course meant for master’s-level engineering students at BITS Pilani Hyderabad Campus. Designed for students who need support with English language skills, the course is spread across a semester and covers academic language skills. The course has four assessment components, two of which are formative in nature. A completely context-specific approach and a variety of innovative methods make assessment in the course unique. Data for the study were collected through evaluation of question papers, interviews with students, a survey of students’ opinions and an electronic portfolio-based analysis of students’ performance during the course. The findings of the study indicate that the use of engineering contexts and a variety of assessment methods contribute to the effectiveness of assessment in the course and are valued by students.
1. Background
Unlike other technical institutions in India, BITS Pilani offers courses in academic English to its students pursuing B. Tech., M. Tech. and Ph. D. degrees. At the M. Tech. level, all students are required to take a diagnostic test; out of more than 230 students, those who need support with academic language skills are identified and directed to complete a need-based course. The course includes components such as essay, article and dissertation writing, presentation skills, listening to lectures and note-taking, CV and cover letter writing, technical proposal writing, reading strategies, etc. Assessment in the course is guided by principles of classroom assessment (Angelo & Cross, 1993; Genesee & Upshur, 1996), which emphasise the use of formative, authentic, performance-based, learner-centred and teacher-based assessments. Driven by these principles and the employment of technology, the assessments in the course can be claimed to be effective. This paper presents an analysis and evaluation of how assessment of students’ progress is carried out during the course. Considering that the model is replicable, it can be tried out in similar contexts.
2. Assessment of English for Engineers
According to Douglas (2000), two factors separate assessment of general language from that of specific language: ‘authenticity of task and the interaction between language knowledge and specific purpose content knowledge’ (p. 2). These two factors can be addressed if assessment tasks are based on target situations and are authentic in nature (Gollin-Kies, Hall and Moore, 2016). In the case of English for Science and Technology (EST), with which the present study deals, the focus, as Starfield (2016) points out, is often on “form at the cost of meaning and communication” (p. 151). This is indeed a caveat. Moreover, as there is no compulsory standardized test in EST/ESP in the country, teachers have an opportunity to develop classroom assessments, which are more learner- and learning-friendly (Angelo & Cross, 1993) and can focus more on meaning and communication. Portfolio assessment, self-assessment, peer assessment, quizzes, checklists, etc. are commonly used for classroom assessment purposes (Rea-Dickins, 2008). However, these are not commonly reported in EST/ESP assessment literature, especially in literature based on studies carried out in India.
If basic principles of assessment such as validity, authenticity and practicality are applied to methods of classroom assessment, assessment in EST classrooms should ideally concentrate on “interactional proficiency” (Kunnan, 2018), i.e., interactions in contextually relevant settings, and employ portfolio, self- and peer assessment for tracing students’ progress. In addition, attempts should be made to assess students’ performance in all the specific skills taught in the classroom and to integrate ICT tools to make assessments more effective.
3. Methodology
As the study aimed to obtain detailed, qualitative information about the assessments in the course, a case study approach was adopted. The case selection was purposive in nature: the researcher had access to M. Tech. students who were part of an English language course offered by the institute. Around 50 students took the course, and all of them had their own laptops and access to the internet. The course comprised mainly academic language skills along with some employability skills. Assessment in the course comprised one summative and two formative components. The weighting for each component was decided by the course instructor, who also happened to be the researcher.
The data were collected through the following methods:
i) Evaluation of question papers/assignments: Question papers and assignments were assessed by three experienced and trained ESP professionals with the help of a set of criteria adapted from Mahapatra (2018). The criteria are presented below.
Criteria (each rated on a scale of 1 to 5):
1. Clearly stating the objective/s of the assessment
2. Assessing what it sets out to assess
3. Integration into classroom teaching
4. Providing information about students’ ability to use language in specific engineering contexts
5. Pointing out problems in students’ progress
6. Use of simulated real-life situations in which language is used by students
7. Use of descriptive assessment criteria/rubrics
8. Use of an effective feedback mechanism
9. Integration of technology
10. Use of assessments that can be carried out in the given context without much difficulty
ii) Interviews with students: A semi-structured interview was conducted with 10 students to elicit information about their experience of assessment in the course.
iii) A questionnaire survey was conducted to obtain students’ opinions about different aspects of assessment in the course.
iv) An electronic portfolio containing scanned samples of students’ performance in various skill areas was maintained to trace changes in students’ performance during the course.
4. Results
The data for the study were collected over an academic semester. It was found that the assessments used in the EST course were, to a great extent, valid, authentic and relevant, and that they had a positive impact on students’ performance. The average ratings of the assessment tasks, presented in the following figure, indicate mostly high ratings across the entire set of criteria.
The two criteria focusing on rubrics and feedback received the highest score, 4.65 each, while the lowest score, 4.48, was recorded for the criterion concerning validity, 0.08 below the average rating. The difference is almost negligible.
The interview and questionnaire survey data indicate that most students found the tasks challenging and useful. For some, however, a semester was not adequate for learning so many new components, and thus they could not perform well in the assessments. The use of scientific contexts was appreciated by everyone, but many students wanted the tasks to be more specific to their disciplines.
All students were happy with self and peer assessment, use of electronic portfolio for tracing progress, open book examination system and employment of descriptive rubrics. Some of them wanted more individual attention from the teacher for discussing their performance in the examination.
The electronic portfolios focused on two skill areas: writing and speaking. Writing comprised research reports, while speaking focused on presentation skills. No effort was made to quantify the portfolio assessment data. However, the analysis of the data revealed some clear patterns. Students made good progress in organisation and in following domain-specific academic norms in writing, but there was very little change in their use of sub-technical vocabulary and appropriate sentence structures.
5. Discussion
The results were analysed to arrive at some major patterns in the data. Then, the patterns were compared with the claims made in the existing research.
5.1 Appropriate classroom assessment tasks
Most of the assessment tasks used in the course are, to a great extent, valid, authentic, practical and relevant. An important feature of the tasks is that they balance context and language well, as suggested by Douglas (2000). Authenticity of the tasks is maintained through the employment of scientific/engineering situations, which is a requirement in EST (Gollin-Kies, Hall and Moore, 2016; Kunnan, 2018). Such tasks, when used for classroom assessment and integrated with teaching, often help in developing learners’ specific language skills. In fact, ‘meaning’ takes centre stage instead of ‘form’ (Starfield, 2016). Self- and peer assessment, portfolio assessment and rubrics, which form the core of classroom assessment (Rea-Dickins, 2008), are also used in the course. This kind of approach is not very common in India, where, in most engineering institutions, courses are not based on the principles of ESP/EST and examinations drive classroom teaching.
5.2 Positive impact on student learning
The assessments had a positive impact on learning. Almost all the students displayed progress in writing research reports and making academic presentations. This is, in a way, an indication of the positive impact of classroom assessment on learning (Angelo & Cross, 1993). Students’ engagement with the course content can be improved through the employment of appropriate classroom assessment tasks, which is what the assessments in the course did. Of course, it is important to calibrate the difficulty level of tasks so that they cater to the needs of students with different proficiency levels.
5.3 Integration of technology
The integration of technology into assessments made them convenient for both the teacher and the students. Web tools helped in promoting collaboration among students, sharing feedback and carrying out both oral and written assessments. Platforms such as Google Docs, VoiceThread, WhatsApp, etc. contributed to the impact of the course and improved accessibility.
6. Conclusion
This was a small-scale study of the classroom assessments carried out in the EST course taught at BITS Pilani Hyderabad Campus. It was found that the assessments were not only different from those carried out in other similar contexts in India but also valid, authentic, practical and relevant. Aided by the use of technology, these assessments contributed to student learning. It is important and necessary to try out these assessment strategies in similar engineering contexts in India. However, a lack of training in ESP can hinder any such attempt on the part of the teacher. So, care must be taken to train teachers who teach in such contexts in the principles and practices of ESP.
References
Angelo, T. A. and K. P. Cross. Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco: Jossey-Bass, 1993. Print.
Douglas, Dan. Assessing Languages for Specific Purposes. Cambridge: Cambridge University Press, 2000. Print.
Gollin-Kies, Sandra, David R. Hall and Stephen H. Moore. Language for Specific Purposes. Springer, 2016. Print.
Kunnan, Anthony. “Assessing Languages for Specific Purposes.” April 2018. Association of Language Testers in Europe Web site. Electronic. 25 September 2018.
Mahapatra, Santosh. “Impact of Teachers' Classroom Language Assessment Literacy (CLAL) on Students' Performance.” 11 April 2018. IATEFL TEASIG Web site. Electronic. 20 September 2018.
Rea-Dickins, Pauline. “Classroom-based Language Assessment.” Encyclopedia of Language and Education. Eds. Elana Shohamy and Nancy H. Hornberger. New York: Springer, 2008. 257-272. Print.
Starfield, Sue. “English for Specific Purposes.” The Routledge Handbook of English Language Teaching. Ed. Graham Hall. Oxon: Routledge, 2016. 150-163. Print.