Analysis of Mathematical On-line Assessment Systems: Final Report

1. Overview

The analysis of on-line assessment systems comprises three sub-tasks leading to an overall output report: O1/A1 Needs analysis and data gathering; O1/A2 Reporting of mathematical on-line assessment systems; O1/A3 Pedagogical analysis for on-line assessment of mathematics. Underpinning this package of work is the educational needs assessment that supports the pedagogical research, design, development and testing.

A needs assessment is a systematic study of the state of knowledge, ability, interest or attitude of a defined audience or group involving a particular subject [1]. The salient outputs from a needs assessment [2,3,4,5] are information relating to: Impact, Approaches, Awareness, Outcomes, Demand, and Credibility. The process involves direct and indirect methods of assessment and considers all stakeholders in the design and delivery of the programme of learning.

Further information on the processes of the Needs Analysis:

1.1. Direct Assessment

A study of perceptions of the assessment processes in relation to on-line learning and assessment of engineering undergraduates was conducted during the academic years 2015/2016/2017. The student bases feeding into this study include undergraduate engineering students and business students from Ireland, Poland, Portugal, Estonia, Finland and Romania. Selected outputs from this quantitative study were disseminated to a wide audience through conference papers and journal papers. The data from this study will feed directly into the needs assessment and is expanded within this project to include engineering undergraduate students in Spain.

1.2. Indirect Assessment

This component of the programme involves the collection of data from the expert group of academics present within the project. The expert group will feed into the needs assessment to aid 1) the verification of the project, 2) demonstration of how the needs will be addressed, and 3) the description of the model.

2. Introduction

The following report analyses the situation regarding undergraduate engineering and business mathematical education in the partner countries: Estonia, Ireland, Poland, Portugal, Romania and Spain. To this end, data has been gathered from the partner universities.

The study programs that benefit from the mathematics classes taught span a wide range of fields: from architecture and civil engineering to electronic and mechanical engineering, accounting and marketing.


Assessment of mathematics within a technology enhanced learning environment, or e-assessment, may be conducted using a variety of assessment methods. E-assessment includes the use of a computer as part of any assessment-related activity [6, p.88], where ‘computer’ includes any computing device such as a mobile phone, tablet, laptop or desktop computer. The degree and level of sophistication within the e-assessment may be determined by ‘drivers’ ranging from institutional requirements to the personal choices of academics. E-assessment has developed considerably over the last twenty years to include a high degree of sophistication for mathematical questioning in the form of Computer Algebra Systems (CAS). It is recognised that the embedding of questioning and feedback is essential within any e-assessment system [2,7,8,9,10].

The opportunities afforded by e-assessment have provided scope for expansion and development of the students’ learning space [11]. The traditional learning space of the fixed bricks-and-mortar classroom has been transformed through the mediation of technology [12]. The transformation of the learning space relocates the student-teacher nexus within the continuum of physical-virtual spaces, with an added temporal component. This shift in space must be recognized within a pedagogically sound curriculum, with a resultant re-imagining and reconsideration of the paradigm. Redecker and Johannessen [9] recognized this issue as a conflict between two assessment paradigms: computer-based testing versus embedded assessment [9, p.80]. The predicted movement towards personalised learning involving gamification, virtual worlds, simulation and peer assessment in the 4th generation of computer-based assessment resulted in a call for a paradigm re-alignment in assessment. However, despite a considerable range of assessment formats on offer through 4th generation technologies, the traditional assessment paradigm is still paramount. A conceptual shift is required to re-imagine the model of assessment so that it meets the expectations of all stakeholders in the learning process.

2.1. Assessment

Assessment is the process of forming a judgement about a student and assigning an outcome based on the judgement. The outcome of the judgement must be fair within the context of the assessment, and may be made visible to the student in the form of feedback or a score. As an instrument, the assessment may occur at the unit level or in the form of an overall multi-level instrument and depending on the form of assessment the output will be employed for different purposes.

2.1.1. Forms of Assessment

The form of the assessment may be determined by the purpose of the assessment. The purpose of the assessment is outlined by [13] as:

  • Diagnostic

Diagnostic feedback is used to inform a student about past learning and provides a model for the student to prepare for future learning.

  • Formative

Formative feedback relates to the performance of the student within a particular recent context. It may inform the student about potential learning gaps or deficiencies.

  • Summative

Summative assessment considers the culmination of a student’s work. The normal output is a score or mark used to denote competence, skill, ability or knowledge when students wish to progress.

  • Evaluative

Evaluative assessment measures the effectiveness of the teaching or the assessment of the students. This type of assessment may contribute to quality audits.

2.1.2. Teacher’s Role

The role of the teacher may vary from the most fundamental naïve reader through to critical examiner guided by externally determined criteria [14]. Within the assessment process, the human teacher has ultimate responsibility for all decisions leading to the various outcomes. The meaning of assessment in the context of this study is the establishment of whether a student’s mathematical submission satisfies the properties specified by the teacher, or, if not, to what degree. Morgan and Watson [14] considered issues for equity in assessment and demonstrated how different assessors may arrive at different but equally valid assessment judgements; in doing so, they highlighted concerns that are still considered pertinent. It may not be possible to remove all forms of inequity in assessment; however, insight and understanding gained through critical reflection on assessment practices, policies and procedures have the potential to mitigate the effects on students.

2.1.3. Assessment of Mathematics

The processes of structured thought and argument evident within mathematics allow engineering students to solve problems. Engineering problems are addressed through applied mathematics and statistics, and a typical expectation would be that the student:

  • Is able to identify and collate relevant information

  • Is able to model the information

  • Is able to apply the correct mathematical techniques and processes

  • Is able to correctly interpret results and present the results in an appropriate manner.

Determining whether a student is able to solve a problem adequately rests on the claim that the assessor has gained knowledge of the student’s learned capacity [15]. The learned capacity of the student must be evidenced, and a link must exist to tie the claim of knowledge to the reality to which the assessor lays claim. The actions of the assessor are critical because the assessor is attempting not only to discover that which is true but also to do justice to the student. Inferential hazards must be minimized, and logical priority should be given to determining exactly what it is that should be assessed. The claim that someone has learned something can never be certain, and the claim that the learning gap has been reduced can never be made with complete security. There are three major considerations relevant to appraising an attempt to assess learned capacity: the truth of the knowledge claim; justice to the person assessed; and the overall utility of the assessment exercise.

When teachers judge students’ assessment work, they interpret what they read using their professional judgement. The output of this judgement is an inference of what the students have achieved in the learning process, and the teacher will indicate the success or otherwise of the activity. A teacher will use their collective as well as individual resources, and these may include their [14]:

  • Personal knowledge of mathematics and the curriculum

  • Beliefs about the nature of mathematics, and how these relate to assessment

  • Expectations about how mathematics is communicated

  • Experience and impressions of students

  • Expectations of individual students

  • Linguistic skills and cultural background

  • Construction of knowledge about students’ mathematics

Issues may arise that give cause for concern in the assessment process and result in inequity, namely: inconsistent application of standards; systematic bias; and poorly designed tasks.

2.1.4. Application of standards

When different assessors assess the same work, it is necessary to apply standards in a consistent manner to remove any inequity. The determination of learned capacity by an individual assessor is affected by the resources that a particular teacher may bring to the assessment. A teacher may have more experience and better linguistic skills than partner teachers and arrive at concluding judgements using different approaches. In mathematics it is possible to arrive at the same conclusion even though the assessors have made different assessment judgements within the context of the assessment. Through shared training and rating, the potential for inequity may be reduced but it is still subject to the vagaries and nature of human judgement.

2.1.5. Systematic Bias

Standard training programmes for the rating and assessment of students’ learned capacity address issues such as concept, procedure, and modelling. Assessment where the nature of the activity involves different cultures, races, languages, and socio-economic groupings may fall foul of inherent disadvantages. A student completing an assessment where the language of the assessment is not the student’s first language may experience inequity, and the assessor may not be in a position to take this factor into account. Differences in socio-economic grouping, through variable access to resources, may result in the failure of some students to participate fully or to appear to participate fully, resulting in a lowering of teachers’ expectations. Race and culture may result in a devaluing of the performances of certain student minority groups due to the manner in which an assessment has been conducted. The reliability of the assessments may be beyond reproach, yet inequity may still exist due to the systematic manner in which an assessment is judged.

2.1.6. Poorly Designed Tasks

A pivot on which the assessment operates is the design of the task that a teacher wishes to conduct to determine whether the learning gap has been reduced and the students’ learned capacity has increased. The task must be viewed by all in a consistent manner, and the design must be in alignment with the knowledge sought by the teacher. If the task is not designed correctly, then the determination of learning is not possible. The use of poor language within a question, leading to ambiguity, has the potential to result in a failure by students to address the true nature of the assessment. Poor design affects students and also those conducting the assessment of the exercise: it may not be possible to compare the students’ mathematical approaches, practices and procedures in a consistent manner, and the meaning of the assessment may be misinterpreted. Inequity may arise in situations where poor design causes ambiguity.

2.2. On-line or E-assessment

The onus is now on the student. The student must be the active learner – self-disciplined, motivated, and learning through discovery. [16]

Just like any other educational technology, the role of on-line technology is to facilitate teaching and promote learning. This raises the question as to how the assessor knows that the requirements for teaching and learning are being fulfilled, when instruction is delivered on-line.

E-assessment has developed within the technology enhanced learning locale to a more central position with improved potential for interaction in the learning process [17]. The use of advanced techniques allows the assessment to be broken down into steps and this allows targeted feedback to take place [2]. The potential to embed a variety of assessment techniques generates interest among teachers to enable students to check their levels of understanding across a range of topics. To maximize the efficacy of e-assessment, [18] developed a set of parameters for engagement in the formative process:

  • Nature, frequency, role and function of feedback

  • Potential for self-regulation

  • Iteration

  • Scope for sharing outputs and ideas with peers

  • Focus on where the student is going

  • Length of the cycle

  • Potential for pedagogical modification

  • Scope for closing the gap

  • Contribution to future learning trajectories

  • Measurable attributes

Feedback for learning places the student at the centre of the educational process and promotes the notion that the student is his/her own agent for learning [19].

2.3. Learning Environments in Higher Education

A vehicle employed to embed the student within the technology enabled learning environment in the Higher Education domain is the Virtual Learning Environment (VLE) or Learning Management System (LMS); another vehicle is the use of hosted websites. Such systems may be open source or proprietary and are increasingly being used to host material, deliver assessments, and in some cases automatically grade and provide feedback to the student [2]. To expand the capability of these assessment systems, value may be added using tools supplied by externally hosted assessment systems.

Within the technology enabled, or mediated, learning environment it is important to maintain the concept of equity in assessment. Invalid or unfair judgements of students’ performances can have long-lasting and far-reaching negative consequences for the students [14]. Teachers’ judgements can affect short-term behaviour towards a student and influence future interpretations of assessment. Teachers’ expectations of students may also contribute towards unfair decisions, particularly within systemic inequities such as unequal access to the internet off-campus [20, 21, 22]. Equity in assessment means that all students are given equal opportunities to reveal their achievements and that the instruments of assessment are not biased. The assessment process must be cognizant of cultural, behavioural and social contexts, as unfamiliarity may introduce unintended barriers. Critical to this process of assessment, whether within or outside the technology environment, is the interpretation of the students’ performance.

Open-source or proprietary VLEs/LMSs are increasingly being used to host and deliver on-line assessments. Most incorporate their own versions of assessment and assessment types and will typically include Multiple Choice Questions (MCQ), Blank Filling, Multiple Answer, Calculated Entry, Formula Entry, Short Sentence, Free Text, Hotspot, and Word List.
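Many of these question types rest on the same underlying mechanics: a templated prompt, optionally randomized parameters, and an automatic comparison of the student’s response against a model answer. As a minimal illustrative sketch in Python (not the API of any particular VLE), a Calculated Entry question might be generated and graded as follows:

    import random

    def calculated_entry_question():
        """Generate one randomized instance of 'differentiate ax^2 + bx'."""
        a, b = random.randint(2, 9), random.randint(2, 9)
        prompt = f"Differentiate {a}x^2 + {b}x and evaluate the derivative at x = 1."
        answer = 2 * a + b  # d/dx(ax^2 + bx) = 2ax + b, evaluated at x = 1
        return prompt, answer

    def grade(student_value, correct, tol=1e-6):
        """Numeric comparison with a tolerance, typical for Calculated Entry items."""
        return abs(student_value - correct) <= tol

    prompt, correct = calculated_entry_question()
    print(prompt)
    print(grade(correct, correct))  # True: the model answer marks itself correct

Randomizing the parameters gives each student a different but equivalent instance of the same task, which also reduces the scope for copying.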

Within the Higher Education domain, the main systems employed are Moodle, Blackboard, Desire2Learn, and Canvas [23]. Each has its own characteristics, but all support the hosting of documents, communication tools, basic assessment tools, grading tools and reporting tools. It is not within the remit of this report to describe each system in depth; rather, the lens will be solely on the assessment components.

2.4. e-Assessment of Mathematics

According to Hunt, cited in [2], approximately 90% of question types used within VLEs are selected response such as MCQ. Selected response systems overcome issues of data entry where students incorrectly enter an invalid or unrecognized parameter; this has been reported by students as a common barrier to progress [21].

Invalid entry is a particular issue in the assessment of mathematics using on-line systems when symbolic notation is required. The assessor, in using selected response questioning, brings a sense of objectivity to the task of assessment, but must be careful that the result cannot be deduced from the options provided [24]. It may also be possible for students to guess the answers, or to use assessment-savvy, test-wise techniques. A major issue to be considered when using MCQ-type assessments is authenticity: problems in real life are not solved through MCQs!
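The scale of the guessing problem is easy to quantify: under random guessing the number of correct responses follows a binomial distribution, so the chance of reaching a pass mark can be computed directly. The sketch below uses illustrative figures (ten four-option questions, 40% pass mark) that are assumptions chosen for the example, not values from this study:

    from math import ceil, comb

    def p_pass_by_guessing(n_questions=10, n_options=4, pass_fraction=0.4):
        """P(score >= pass mark) when every item is an independent random guess."""
        p = 1 / n_options  # chance of guessing one item correctly
        need = ceil(n_questions * pass_fraction)
        return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
                   for k in range(need, n_questions + 1))

    print(f"{p_pass_by_guessing():.3f}")  # about 0.224 for this illustrative test

A roughly one-in-five chance of passing by guessing alone underlines why the number of options, the number of questions and the pass mark must be chosen deliberately when selected response is used.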

Authenticity of the assessment is fundamental to the learning process; combined with the requirement for objectivity, this means the assessment of mathematics by e-assessment should imitate the actions of the human assessor. One avenue employed to address these requirements is the use of Computer Algebra Systems (CAS).

3. Needs Assessment Aims and Objectives

The aims and objectives of the Needs Assessment are:

  • To gather data from partner institutions for the purposes of conducting a needs analysis

  • To report on on-line assessment systems

  • To conduct a pedagogical analysis for on-line mathematics in engineering

  • To specify needs

  • To provide intellectual input to the development of a pedagogical model for shared on-line teaching of mathematics

  • To identify barriers to learning in a shared teaching environment

The target audience of the Needs Assessment:

  1. Undergraduate students in engineering mathematics at higher educational institutions.

  2. Academic staff teaching and designing engineering mathematics in tertiary programs.

  3. Research academics in the areas of Technology Enhanced Learning and on-line learning.

  4. Academic staff at second level preparing students for third level study in engineering.

  5. Other educational institutions preparing engineering curricula.

Data was gathered from each partner institution for the purposes of conducting a needs analysis for a common, shared on-line engineering mathematics pedagogy study leading to a shared 3-ECTS curriculum.

4. Methodology

This report is based on a review of current and available literature, surveys of higher education partners, and surveys and interviews of students. Student interviewees were selected on a convenience basis from the first-year undergraduate engineering student population. Surveys were issued on-line to students in each partner organisation, and a review was conducted of proprietary on-line mathematics assessment systems. The overall objectives were (i) to map key shared curriculum areas, (ii) determine the perceptions of students of on-line assessment, (iii) determine the veracity of the rhetoric used by academics in the design of engineering mathematics programmes, (iv) identify gaps in the current pedagogy, and (v) identify areas of convergence or divergence in the design and delivery of the engineering mathematics programmes. The first aim, addressed through objectives (i), (ii), (iii) and (iv), is to generate an understanding of existing practices within, between and across the partners. The second aim, addressed through objective (v), provides the narrative on the pedagogical research, design and modelling of sound engineering mathematics programmes in the first year of study in higher education.

The methodology was initiated by informing all partners about the meaning of a Needs Analysis and the objectives of the Needs Analysis, and by providing guidance on Needs Analysis best practice.

4.1. Guide for Educational Needs Assessment

All partners were given a short presentation on the types of question to be addressed within the analysis; the issues discussed were:

  • General Descriptive Data

  • Year 1 Curriculum

  • Expected and Intended Learning Outcomes

  • Assessment Mechanisms

  • Instructional Design

  • Skills/Competencies/Qualifications

  • Expected Mathematical/ICT/Literacy skills

  • Teacher Qualifications

  • Quanta – Number of Students/Gender Ratio/Grading/etc.

  • Quality Assurance

  • Gap Analysis

  • Toolkits

4.2. Digital Credentialing

The UNESCO [25] document “Digital Credentialing: Implications for the recognition of learning across borders” was uploaded and shared between the partners. The document was intended to aid understanding of how Sustainable Development Goal 4 of the 2030 Agenda, with its emphasis on inclusive, shared education offering opportunities for all, may benefit this collaboration. The document outlines issues of shared credentials and how the dividends of the digital economy may be maximized through sustained collaboration between partners.

4.3. Update database of student perceptions

A student survey was established using Google Forms; no IP tracking was used and all responses were gathered anonymously. Ethical approval for gathering the data was obtained from LYIT. The purpose of the questionnaire survey was to establish:

  • Geographical profile of student sample

  • Gender ratio

  • Experience of on-line assessment

  • Preparedness and barriers to on-line assessment

  • Mathematics Ability

  • Rewards for mathematics

4.4. Online questionnaire for expert group using a proforma document

A pro-forma document containing set questions for each partner was provided via Google Drive, and all partners submitted responses to an Excel spreadsheet to permit analysis. The issues to be addressed were:

A) Overview Data

  1. Country

  2. Name of Institution

  3. Name(s) of programme(s) being used for teaching purposes

  4. Credit Points Per Year (ECTS)

  5. Hours per Credit (ECTS)

  6. Member State name for Credits

B) Curriculum

  1. Topics covered in Semester 1 Mathematics

    a. Name

    b. Length

    c. Salient Mathematical Issues

  2. Expected Learning Outcomes

    a. Overall Mathematics Module

    b. Topic level outcomes

  3. Intended Learning Outcomes (if applicable)

  4. Assessment Types

    a. Diagnostic

    b. Formative

    c. Summative

  5. Assessment Methods

    a. Online automatic

    b. Online manual

    c. Off-line

  6. Instructional Design

    a. Fragmented

    b. Holistic

C) Skills/Competencies/Qualifications

  1. Student entry qualifications

    a. Maximum, Minimum, Average

    b. Second chance, Mature entry, Special Needs

  2. Student entry requirements

    a. Standard Student

    b. Non-Standard Student

  3. Expected student ICT skills

    a. Mathematics ICT skills

    b. General ICT skills

  4. Expected student Mathematical skills

    a. Mathematics Domain Specific skills

    b. General Mathematics skills

  5. Expected student Literacy skills

    a. Language literacy

    b. ICT literacy

    c. Mathematics literacy

  6. Minimum qualifications required for academic staff

    a. Professional

    b. Teaching

    c. Other

D) Quanta

  1. Average number of students per programme

  2. Gender ratio per programme

  3. Average pass rate per programme

  4. Pass mark/grade/percentage

  5. Scoring

    a. Pass/fail

    b. Percentage

    c. Grade

E) Quality Assurance

  1. National Guidelines

  2. Institutional Guidelines

  3. Professional Body Accreditation/Guidelines

  4. External Moderation

F) Gap Analysis

  1. Identification of gaps within current practices

  2. Are the gaps known to the target audience?

  3. How big (serious) is the gap?

  4. What motivation exists to minimise the gap?

  5. What barriers exist to addressing the gap?

  6. Is the gap an existing gap or an emergent gap?

  7. How has the gap been identified?

  8. How will changes in student behaviours be identified and measured?

G) Tools

  1. Off-line tools

  2. On-line tools

4.5. Update database of online assessment systems for engineering mathematics

An on-line literature study was conducted into proprietary systems for on-line assessment of mathematics using a Google Scholar search. Test systems were downloaded and tested with a view to reporting on the current state of on-line assessment – both proprietary systems and those contained within Virtual Learning Environments such as Moodle, Blackboard, Canvas and Desire2Learn.

4.6. Update database of EU projects engaged in ICT and mathematics

Using the EU project portal as the starting point, a search was conducted of known projects within the EU involving ICT and mathematics. To reduce the volume of returns, this search was restricted to those involving on-line assessment of mathematics within STEM and higher education.

4.7. Update database of pedagogy in assessment – online engineering mathematics

The database of pedagogy in assessment is a considerable body of literature. Reducing the target area to that of on-line assessment enables a more fruitful search to be conducted. The area of on-line assessment is a growing field as more academics strive to understand the nuances of assessment in mathematics using on-line technologies.

4.8. Student perceptions

The literature available on the perceptions of students is sparse when the student body of Universities of Applied Sciences or their equivalents is considered. Most of the available literature concerns primary and secondary level students, as well as traditional university students. Less research has been conducted in the area of undergraduate engineering mathematics within Universities of Applied Sciences, Institutes of Technology and Polytechnics.

Student perceptions are analyzed using mixed methods of quantitative surveys and group interviews within an integrated convergent design. Data is analyzed using SPSS and Thematic Analysis.

4.9. Expert group consensus

The expert group method uses the expert knowledge of the partners to determine the salient issues, barriers, misunderstandings and rhetoric employed in the design of engineering mathematics programmes. Conducted by means of a gap analysis, the outputs will feed into the pedagogical model in IO3.

4.10. Pedagogical considerations for online assessment of engineering mathematics

The outputs from the expert group consensus aligned with those from the mixed methods analysis will form the basis for the overall reporting on the pedagogical model for sound on-line assessment of engineering mathematics.

5. Findings

5.1. Student Surveys

Figure 1. No prior on-line assessment experience

Comparison of students’ responses displayed startling differences between the countries involved when issues relating to Computer Based Testing (CBT) or on-line assessment were explored. In Ireland, 80% of students report no exposure to on-line assessment systems prior to entry to third level education. By comparison, 80% of Estonian students do have such exposure, whilst approximately 40% of students in Poland, Portugal, Spain and Romania have prior experience.

Figure 2. Perceived preparedness and barriers to CBT

Students reported on their perceptions of preparedness prior to the use of on-line assessment at third level. The consensus is that they feel prepared in all partner organizations; however, many still perceive or experience barriers in relation to on-line assessment.

Figure 3. Perceived maths ability

Students’ perceptions of their mathematics ability before and after entry to third-level education were analyzed. The majority felt their abilities in mathematics were good, with Spain reporting the highest levels of ability. Most perceptions of ability remain constant, but it is noteworthy that by the second semester of first year Spanish students’ perceptions drop, perhaps due to the difference between the levels of high school and university.

Figure 4. Effort and reward in maths

Analysis of expectancy and reward suggests that the majority of students consider their effort justly rewarded. Spanish students, however, appear to suggest that their work is not justly rewarded; this may be the reason why the perceptions of Spanish students regarding their current mathematics abilities have dropped.

5.2. Partner Shared Curricula

All of the partner universities have 60 ECTS/year programmes at level 6. The credit points for the mathematical topics in the first year of study vary for each partner institution from 10 to 28 ECTS (Table 1).

Table 1. ECTS overview in partner countries

The curriculum is quite diverse among the partners, but some common points exist. Table 2 presents some of the main topics present in the curricula, sorted in descending order of frequency.

Table 2. Subjects present in the curricula of the partner universities

The student is expected to have the general skill of applying the learned concepts in engineering and managerial sciences, as well as specific skills of applying more concrete concepts such as matrices and equations.

There is no diagnostic assessment currently in use in any of the partner universities. Formative assessment is done through observation (Estonia, Ireland), step-by-step proposed work (Portugal), activity at seminars (Romania) or tutorials (Spain). Most summative assessments rely on a combination of assignments and a final test.

All partner countries use offline assessment methods, and some use manual (Estonia, Spain) or automatic (Estonia, Ireland, Portugal) online assessment systems.

Only the Spanish partner has a holistic instructional design. Those partners operating with a fragmented design use a prescribed design process with supporting but compartmentalized modules. The guidelines provided by national quality bodies and professional bodies allude to holistic design practices; however, such practices are difficult to implement within conservative education environments.

Student entry requirements vary from partner to partner and are in accordance with national regulations; usually students must pass a graduation test or national exam (baccalaureate). No formal mathematics entry requirements apply at any of the partner universities, although students in all partner universities are expected to have high-school level skills in mathematics, and some partners expect basic ICT skills such as using Word, Excel and the internet. All students are expected to have knowledge of the national language in every country; they are not expected or required to know English. The teaching staff are required to have a master’s degree in mathematics or a similar field in most partner countries, except Romania, where a PhD is required for staff on contracts of indeterminate duration.

Of the study programs analyzed, most have on average 20-30 students. There are programs with fewer students, of about 10 on average (Geodesy – Estonia, Electrical Engineering – Ireland), but also programs with up to 70 students (Spain).

The gender ratio is about 80% male to 20% female (Figure 5). The only exception is Portugal, which has a gender ratio close to 1 to 1.

Figure 5. Gender by country

*In the case of Portugal, the average of three programs was reported for the year 2017-2018

The scoring system differs in each country. None of the partners uses a pass/fail scoring system; instead there is a mixture of percentage and grade systems, and the pass threshold varies by country from 40% to 51%. The average pass rate is about 60% across all countries, with great variation between them.

All countries operate under quality assurance guidelines on a national and institutional level. These guidelines form the basis of all learning objectives and outcomes. Professional body accreditation guidelines also provide input to the design process in Ireland, Portugal and Spain. Professional body accreditation ensures that educational standards are met and harmonized to allow movement within the profession between participating countries.

5.3. Gap Analysis

The purpose of the Gap Analysis [5] is to support, foster and develop the collaborative, shared, innovative pedagogical model for teaching engineering mathematics. The Gap Analysis will guide the development of the pedagogical model based on the responses of the partners to a series of targeted questions. The questions were posted to all partners using a pro-forma spreadsheet on Google Drive.

5.3.1. Identification of gaps within current practices

The primary gap is that automated assessment systems only consider the product of the assessment. A student may be rewarded for entering a correct answer without demonstrating mastery or knowledge of concepts, applications or procedures; the system is not then in a position to honestly report that the student has met the learning outcomes. An assumption of syntactic literacy is made for access to on-line assessment systems, as is an assumption that all students have equality of access to the system. There is an insufficient spread of online assessment techniques. Further gaps are the reluctance of academics to accept the importance of eAssessment, and the expectations placed on students’ digital and coding literacy. The gaps in the knowledge of concepts, practices and procedures expected of students arriving from second level education are a constant worry.

5.3.2. Are the gaps known to the target audience?

The target audience consists of students, academics and institutions. The gaps within current practices are not known to the students. Limited awareness of the gaps exists for the majority of academics. The higher education institutions are generally aware of gaps in equality of access. Some students are aware of gaps in their knowledge on entering higher education and know that this may affect their performance. Some academics are unwilling to address their own gaps in eAssessment design.

5.3.3. How big (serious) is the gap?

The gap in the quantity and quality of online assessments is being addressed and is considered minimal. Expectations of digital literacy are moderately serious and can introduce barriers to the learning experience for some students. Automated assessment issues are moderately serious if not attended to by academics; consideration should be given to partial credit, particularly in early transitional stages. Expectations of equality of access are a barrier for some students and may be cultural in nature. Coding literacy is required to provide solutions in some online assessment systems, and concentration on coding instead of mathematics literacy may be disadvantageous. The academic transition to eAssessment is slow, and it is problematic to ensure that correct models and practices are employed.

5.3.4. What motivation exists to minimize the gap?

The overall learner experience is paramount to retention on programmes, and the experience may also affect psychographic components of the student profile. The reduction of the barriers or gaps may improve and address areas of contention. Learners are motivated by tasks that are doable: they will be motivated toward a course that provides the appropriate level of cognitive challenge but that they perceive as achievable. The relationship between the subject and the students’ specialism may also motivate them, as may institutional support.

5.3.5. What barriers exist to addressing the gap?

Good techniques are not available free of charge, and students may not agree to study the "logic of using tests" in addition to the subject. Other barriers include the attitudes of institutions, the inflexibility of and resistance by academics, access to technology, a lack of technicians, recurrent financial restrictions, the degree of commitment of the institution, and a lack of self-awareness among students and academics of their abilities, knowledge and contribution in online environments.

5.3.6. Is the gap an existing gap or an emergent gap?

The issues currently exist and issues surrounding coding have been exposed through pilot study testing. The known gaps appear to be deepening and widening with time.

5.3.7. How has the gap been identified?

The gaps have been identified through interviews with academics, staff surveys, conversations with students and academic staff, the institutions’ benchmarking processes, literature review, focus group discussions with students, student questionnaires, and class observations.

5.3.8. How will changes in student behaviours be identified and measured?

Changes in student behaviours will be identified and measured through observation and analysis of online assessments, questionnaires, focus groups, and academic interviews.

5.4. Analysis of Proprietary Systems

5.4.1. MAPLE TA

Maple TA (‘Maple T.A. - Online Assessment System for STEM Courses - Maplesoft’, 2019) is a proprietary, licensed, automated assessment tool aimed at mathematics, science and technology programmes of learning. Maple TA offers an interface that is more user-friendly than STACK’s; however, it suffers from many of the problems outlined later in the STACK review. The system is syntax-dependent and not very forgiving, requiring good fine-motor control of the mouse when accessing graphical and image-based questions. The software is also browser-sensitive, particularly in relation to the Apple Safari browser when setting graph-based questions.

5.4.2. NUMBAS

NUMBAS is a free and open-source set of tools developed at Newcastle University (UK), with a focus on the e-assessment of mathematics. As with STACK and Maple TA, NUMBAS is SCORM compliant and will operate within a SCORM compliant VLE such as Blackboard or Moodle. The level of complexity offered within NUMBAS is not as extensive as within STACK or Maple TA; however, the level of technical support required for successful operation is consequently reduced in comparison.

5.4.3. STACK

Earlier CAS-based assessment systems influenced the design of STACK (System for Teaching and Assessment using a Computer Algebra Kernel) [17], a development of the adaptive Moodle assessment approach; similar systems are AIM (based on Maple) and CABLE (based on Axiom). STACK was designed to address many of the shortcomings of e-assessment by avoiding the need for the assessor to enter details of every possible variation of an answer. STACK is available within Moodle and Ilias, providing feedback and monitoring functions for questions. Feedback may be tailored to the question and to the level within the question; this type of facility is not available within the Blackboard assessment system. The underlying technology is the Maxima computer algebra system. The system was developed to address issues of plagiarism and to provide repeat opportunities, and it also allows students to work together in a vicarious learning environment. Students’ syntactic accuracy is supported through validation prior to final submission of the assessment. To use the system, students must gain knowledge of STACK syntax; the syntax is aimed at the questioning, but it is argued that this skill is necessary in a modern ICT environment. STACK questions may be designed to search for properties of answers rather than exact answers, so students working together may produce different correct answers. Feedback may be designed to be provided based on historical responses to similar types of question.
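STACK evaluates answers using the Maxima CAS inside the VLE. As a rough illustration of property-based marking in another language, the Python/SymPy sketch below first validates the student’s syntax (mirroring the validation step described above) and then awards the mark to any answer possessing the required property, here differentiating to a given integrand, so that students working together can submit different but equally correct answers. The function names are illustrative assumptions, not STACK’s API:

    from sympy import diff, simplify, symbols
    from sympy.parsing.sympy_parser import parse_expr

    x = symbols('x')

    def validate(raw):
        """Syntax check before grading, so students can correct typos first."""
        try:
            return parse_expr(raw), None
        except Exception as err:
            return None, f"Could not read your answer: {err}"

    def mark_antiderivative(raw, integrand="3*x**2 - 4*x + 1"):
        """Property check: any F(x) whose derivative equals the integrand scores."""
        student, error = validate(raw)
        if error:
            return 0, error
        if simplify(diff(student, x) - parse_expr(integrand)) == 0:
            return 1, "Correct: your function differentiates to the integrand."
        return 0, "Incorrect: differentiating your answer does not give the integrand."

    print(mark_antiderivative("x**3 - 2*x**2 + x"))      # scores 1
    print(mark_antiderivative("x**3 - 2*x**2 + x + 7"))  # also scores 1
    print(mark_antiderivative("3*x**2 - 4*x"))           # scores 0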

5.4.4. iSpring

An alternative to stand-alone proprietary software is iSpring. iSpring is not a computer algebra system and does not offer the levels of complexity of STACK, NUMBAS or Maple TA; however, it integrates directly with PowerPoint and offers powerful mathematical editing capability. Using an interface that is familiar to those with PowerPoint experience, iSpring provides a low learning curve with enhanced interactive capability to generate a variety of output formats such as HTML5 and MP4. The application is not regarded as syntax-heavy and should not place a high cognitive load on students when they interact with the interface.

5.4.5. Example of e-Assessment

An example of a multi-part question with discrete elements would be:

Find the equation of the line tangent to the curve y = x³ − 2x² + x at the point x = 2.

There are three discrete steps that may be tested in this question:

  1. Differentiate x³ − 2x² + x with respect to x.

  2. Evaluate the derivative at x = 2.

  3. Find the equation of the tangent line, leaving the answer in the form y = mx + c.

1. dy/dx = 3x² − 4x + 1

2. At x = 2: dy/dx = 3(2)² − 4(2) + 1 = 5,

hence the gradient, m = 5

3. At x = 2: y = 2³ − 2(2)² + 2 = 2

Using y − y₁ = m(x − x₁) with x₁ = 2 and y₁ = 2:

y − 2 = 5(x − 2)

y − 2 = 5x − 10

y = 5x − 8

Current CAS systems seek the final answers at each step (5; y = 2; y = 5x − 8) to award the marks and allocate feedback as appropriate. To award those marks the system “assumes” that the process used to generate the results is correct; the process itself is not fully identified or tested within the assessment. This type of assessment breaks the question down into discrete elements, but in doing so it provides the student with clues as to the strategy required to solve the problem.
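A minimal sketch, assuming a SymPy back end rather than any specific product, of how the three step marks above could be awarded by algebraic equivalence instead of string matching; the model answers come directly from the worked example, while the marking function itself is an illustrative assumption:

    from sympy import diff, simplify, symbols, sympify

    x = symbols('x')
    curve = x**3 - 2*x**2 + x
    a = 2  # the point of tangency

    gradient = diff(curve, x).subs(x, a)  # 5
    steps = {
        "derivative": diff(curve, x),                         # 3x^2 - 4x + 1
        "gradient":   gradient,                               # 5
        "tangent":    gradient * (x - a) + curve.subs(x, a),  # 5x - 8
    }

    def mark_step(step, submission):
        """Award the step mark if the submission is equivalent to the model answer."""
        return simplify(sympify(submission) - steps[step]) == 0

    print(mark_step("derivative", "3*x**2 - 4*x + 1"))  # True
    print(mark_step("gradient", "5"))                   # True
    print(mark_step("tangent", "5*x - 8"))              # True
    print(mark_step("tangent", "5*(x - 2) + 2"))        # True: same line, another form

Because each step is checked for equivalence rather than form, a student who leaves the tangent in point-gradient form still earns the mark, while the strategic clue embedded in the step structure remains a pedagogical trade-off.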

1) Determination of question types

All questions are based on the Computer Algebra System, requiring the student to submit a written answer.

2) Cost and complexity

The hidden costs are the support required to manage a CAS within an LMS, the length of time required to train in the development of questions and their subsequent testing, and the additional time required to train students in the correct use of the syntax. The underlying system is highly complex, and this is reflected in the manner in which questions are developed. The syntactic complexity is such that these systems may not be at an appropriate level for the students engaged in this research.

5.4.6. Comparison of Assessment Systems

Each assessment system is designed to provide access to a variety of assessment tools offering different degrees of complexity. NUMBAS and STACK do not involve any financial outlay to access or use, whereas iSpring and Maple TA require licenses to be purchased; the costing is provided on a cloud or server basis and depends on the number of licenses required. Assessment systems are also available within the standard VLEs employed in higher education, e.g., Blackboard, Moodle, Canvas, Brightspace. Table 3 provides a comparative breakdown for each system:

Table 3. Comparison of Assessment Systems

The selection of an on-line assessment system depends on the pedagogical model employed, the availability of technical support, standalone operation or access via a VLE, the expertise required with coding and syntax, financial resources, the degree of complexity required within the assessment, and whether partial credits are to be awarded.

6. Discussion and Conclusions

Common issues exist for students in each institution. These issues may offer opportunities to explore novel and innovative pedagogies in engineering mathematics if addressed by institutions. Students’ ICT preparedness for Higher Education is taken for granted by many academic designers. The assumption of the digital native is one issue that poses a barrier to learning for students.

Cultural, language, and social barriers exist that need to be accommodated in any engaging pedagogical paradigm. The common language of mathematics is assumed to be necessary to allow for cross-cultural shared programmes, however the common language of mathematics is not sufficient. It is necessary to ensure that local support measures cater for local issues.

Automated assessment only considers the product of assessment and not the process. Smart assessment systems place significant cognitive loads on assessors, assessment designers and students.

Traditional didactic processes predominate in the current pedagogies offered by the partner institutions, and it is expected that these will continue.

The findings provide crucial information regarding priorities for the pedagogical design of the shared program materials. Differences in student expectations and abilities must be accommodated within an engaging paradigm. Academics have different priorities from students in the design and operation of learning and assessment materials. Course materials should give the option to provide access in the local language – sharing across language borders is not straightforward.

On-line assessment tools for mathematics exist; however, the cognitive load experienced by academics in setting up such tools is considerable. Syntax-heavy assessment tools place a burden on low to middle mathematics achievers, and the pedagogical components within on-line assessment tools are not always visible. Equivalence to the human assessor is not yet possible in mathematics.

The partners are aware of the current limitations of mathematics-oriented software to provide an immersive interactive experience for students and academics. Working within the boundaries of mathematics and addressing social, language, cultural and national concerns within a shared, collaborative programme, the partners have decided to focus on student interaction within the learning environment as the foundation for the pedagogical model. The model of on-line mathematical pedagogy will be developed within IO3 using this experience combined with the activities in IO2 to provide a paradigm that may be used by academics in the development of engaging and interactive educational experiences.

References

1. McCawley, P. F. (2009). Methods for Conducting an Educational Needs Assessment, 24. Retrieved from http://cmedbd.com/cmed-admin/upload/reading_materials/articles/M3_TL1_Article.pdf

2. Bigbee, J.L., Rainwater, J., & Butani, L. (2016). Use of a needs assessment in the development of an interprofessional faculty development program. Nurse Educator, Vol 41 (6) pp. 324-327.

3. (2011). Needs Assessment: Tools and Techniques, A guide to Assessing Needs, Chapter 3. Retrieved from http://www.nwlink.com/~donclark/analysis/analysis.html

4. (2011). Guide to conducting an educational needs assessment: beyond the literature review. Retrieved from https://www.janssentherapeutics-grants.com/sites/all/themes/ttg/assets/Needs%20Assessment%20Guide.pdf

5. (2016). Methods of assessing learning needs: Gap analysis and assessing learning needs. Retrieved from http://distribute.cmetoronto.ca.s3.amazonaws.com/QuickTips/How-to-Conduct-a-Gap-Analysis.pdf

6. Jordan, S. (2013). E-Assessment: Past, present and Future, New Directions Vol 9(1), pp 87-106, DOI 10.11120/ndir.2013.00009

7. Gikandi, J.W., Morrow, D. & Davis, N.E., (2011). Online formative assessment in higher education: A review of the literature, Computers & Education, Vol 57, pp. 2333-2351

8. Passmore, T., Brookshaw, L. & Butler, H., (2011). A flexible extensible online testing system for mathematics, Australasian Journal of Educational Technology, Vol 27(6), pp. 896-906

9. Redecker, C. & Johannessen, O., (2013). Changing Assessment – towards a new assessment paradigm using ICT, European Journal of Education, Research, Development and Policy, Vol 48(1)

10. Ras, E., Whitelock, D. & Kalz, M., (2015). The promise and potential of e-assessment for learning, In P. Reimann, S. Bull, M. Kickmeier-Rust, R. Vatrapu, & B. Wasson (Eds.), Measuring and Visualizing Learning in the Information-Rich Classroom, Oxford, pp. 21-40

11. Marshalsey, L., & Sclater, M., (2018). Critical perspectives of technology-enhanced learning in relation to specialist Communication Design studio education within the UK and Australia, Research in Comparative and International Education, Vol 13(1), pp.92-116

12. Presseisen, B.Z., & Kozulin, A., (1992). Mediated Learning – The Contributions of Vygotsky and Feuerstein in Theory and Practice, Retrieved 31/01/2019, https://eric.ed.gov/?id=ED347202

13. Black, P., & Wiliam, D., (1998). Assessment and Classroom Learning, Assessment in Education: Principles, Policy & Practice, Vol 5(1), pp.7-74

14. Morgan, C., & Watson, A., (2002). The Interpretative Nature of Teachers' Assessment of Students' Mathematics: Issues for Equity, Journal for Research in Mathematics Education, vol 33(2), pp.78-110

15. Dearden, R.F., (1979). The Assessment of Learning, British Journal of Educational Studies, Vol 27(2), pp. 111-124

16. Robles, M., & Braathen, S., (2002). Online Assessment Techniques, Delta Pi Epsilon Journal, Vol 44(1), pp. 39-50

17. Butcher, P., Hunt, T., & Sangwin, C., (2013). Embedding and enhancing eAssessment in the leading open source VLE, Higher Education Academy.

18. Pachler, N., Daly, C., Mor, Y., & Mellar, H., (2010). Formative e-assessment: practitioner cases, Computers & Education, Vol 54(3), pp.715-721, DOI 10.1016/j.compedu.2009.09.032

19. Boud, D., & Molloy, E., (2013). Rethinking models of feedback for learning: the challenge of design, Assessment & Evaluation in Higher Education, Vol 38(6), pp.698-712, DOI 10.1080/02602938.2012.691462

20. Brown, K., (2017). Myths, rhetoric and opportunities surrounding new teaching technologies: Engineering mathematics education, EDCrunch Conference, Yekaterinburg, Russia

21. Brown, K., & Lally, V., (2017). It isn't adding up: The gap between the perceptions of engineering students and those held by lecturers in the first year of study of engineering, ICERI 2017, https://library.iated.org/view/BROWN2017ITI

22. Brown, K., & Lally, V., (2018). Rhetorical relationships with students: A higher education case study of perceptions of online assessment in mathematics, Research in Comparative and International Education, Vol 13(1), pp.7-26, http://journals.sagepub.com/doi/10.1177/1745499918761938

23. Brown, K., (2017). Virtual Learning Environments in higher education, Internal report, LYIT VLE working group (unpublished)

24. Sangwin, C., (2012). Computer Aided Assessment of Mathematics Using STACK, 12th International Congress on Mathematical Education, South Korea, Proceedings of ICME

25. Chakroun, B., & Keevy, J., (2018). Digital credentialing: implications for the recognition of learning across borders - UNESCO Digital Library, programme document retrieved 8/02/2019: https://unesdoc.unesco.org/ark:/48223/pf0000264428