Mathematics On-line Learning Model: Final Report

1. Overview

The model of mathematics online learning systems comprises two sub-tasks leading to an overall output report: O3/A1 Literature Review and O3/A2 Creation of the mathematics online learning model. Underpinning this package of work are the outputs from IO1 and IO2.

The salient outputs from the needs assessment conducted in package IO1 were information relating to: Impact, Approaches, Awareness, Outcomes, Demand, and Credibility. The process involved utilization of direct and indirect methods of assessment and considered all stakeholders in the design and delivery of the programme of learning.

A programme of learning in IO2 was designed and operationalized taking into account the outputs from IO1. A pilot was tested with academics and students, and the results from testing subsequently fed into the final design. This report highlights the process and explains the design model employed and adapted to meet the needs of students and academics.

2. Introduction

The following report analyses the existing situation regarding undergraduate engineering and business mathematical education in the partner countries: Estonia, Ireland, Poland, Portugal, Romania and Spain. To this end data has been gathered from the partner institutions:

  • TTK University of Applied Sciences (Estonia)

  • Letterkenny Institute of Technology (Ireland)

  • Polytechnika Koszalinska (Poland)

  • Polytechnic Institute of Porto - Porto Accounting and Business School (Portugal)

  • The Technical University of Cluj-Napoca (Romania)

  • Universitat Polytechnica De Catalunya (Spain) (Year 3 of project)

  • University of the Basque Country - UPV/EHU (Spain) (Years 1 and 2 of project)

The study programs that benefit from the mathematics classes taught span a wide range of fields: architecture, engineering, accounting and marketing. The selection of study programs maximises the coverage of the study and the spread of student types in Europe.

The partner universities all deliver 60 ECTS/year programmes at level 6 of the Bologna Framework in the first year of study. The curriculum design approach is not homogeneous, and the credit points allocated to the mathematical topics in the first year of study vary by partner institution from 10 to 28 ECTS (Table 1).

Assessment of mathematics within a technology enhanced learning environment, or e-assessment, may be conducted using a variety of assessment methods. E-assessment includes the use of a computer as part of any assessment-related activity [6, p.88], where ‘computer’ covers any computing device such as a mobile phone, tablet, laptop or desktop computer. The degree and level of sophistication within the e-assessment may be determined by ‘drivers’ as diverse as institutional requirements through to personal choice by academics. E-assessment has developed particularly over the last twenty years to include a high degree of sophistication for mathematical questioning in the form of Computer Algebra Systems (CAS). It is recognised that the embedding of questioning and feedback is essential within any e-assessment system [2,7,8,9,10].

The opportunities afforded by e-assessment have provided scope for expansion and development of the students’ learning space [11]. The traditional learning space of the fixed bricks-and-mortar classroom has been transformed through the mediation of technology [12]. The transformation of the learning space relocates the student-teacher nexus within the continuum of physical-virtual spaces with an added temporal component. This shift in space must be recognized within a pedagogically sound curriculum, with a resultant re-imagining and reconsideration of the paradigm. Redecker and Johannessen [9] recognized this issue as a conflict between two assessment paradigms – computer-based testing versus embedded assessment [9, p.80]. The predicted movement towards personalised learning involving gamification, virtual worlds, simulation and peer assessment in the 4th generation of computer-based assessment resulted in a call for a paradigm re-alignment in assessment. However, despite the considerable range of assessment formats on offer through 4th generation technologies, the traditional assessment paradigm remains paramount. A conceptual shift is required to re-imagine the model of assessment and ensure it meets the expectations of all stakeholders in the learning process.

2.1. Assessment

Assessment is the process of forming a judgement about a student and assigning an outcome based on that judgement. The outcome of the judgement must be fair within the context of the assessment; it may be made visible to the student in the form of feedback or a score. As an instrument, the assessment may occur at the unit level or in the form of an overall multi-level instrument, and depending on the form of assessment the output will be employed for different purposes.

2.1.1. Forms of assessment

The form of the assessment may be determined by the purpose of the assessment. The purpose of the assessment is outlined by [13] as:

  1. Diagnostic

Diagnostic feedback is used to inform a student about past learning and provides a model for the student to prepare for future learning.

  2. Formative

Formative feedback relates to the performance of the student within a particular recent context. It may inform the student about potential learning gaps or deficiencies.

  3. Summative

Summative assessment considers the culmination of a student’s work. The normal output is a score or mark used to denote competence, skill, ability or knowledge when students wish to progress.

  4. Evaluative

Evaluative assessment measures the effectiveness of the teaching or the assessment of the students. This type of assessment may contribute to quality audits.

2.1.2. Teacher's role

The role of the teacher may vary from the most fundamental naïve reader through to critical examiner guided by externally determined criteria [14]. Within the assessment process, the human teacher has ultimate responsibility for all decisions leading to the various outcomes. The meaning of assessment in the context of this study is the establishment of whether a student’s mathematical submission satisfies the properties specified by the teacher, or, if not, to what degree. Morgan and Watson [14] considered issues for equity in assessment and demonstrated how different assessors may arrive at different but equally valid assessment judgements. In doing so, they highlighted concerns that are still considered pertinent. It may not be possible to remove all forms of inequity in assessment; however, insight and understanding gained through critical reflection on assessment practices, policies and procedures has the potential to mitigate the effects on students.

2.1.3. Assessment of mathematics

The processes of structured thought and argument evident within mathematics allow students to solve problems. Many mathematical problems are addressed through applied mathematics and statistics, and a typical expectation would be that the student is able to:

  1. Identify and collate relevant information

  2. Model the information

  3. Apply the correct mathematical techniques and processes

  4. Correctly interpret results and present the results in an appropriate manner.


Determining whether a student has gained the skills and knowledge to solve a problem adequately rests on the claim that the assessor has gained knowledge of the student’s learned capacity [15]. The learned capacity of the student must be evidenced, and a link must exist to tie the claim of knowledge to the reality to which the assessor lays claim. The actions of the assessor are critical because the assessor is attempting not only to discover that which is true but also to do justice to the student. Inferential hazards must be minimized, and logical priority should be attached to determining exactly what it is that should be assessed. The claim that someone has learned something can never be certain, and the claim that the learning gap has been reduced can never be made with complete security. There are three major considerations relevant to appraising an attempt to assess learned capacity: the truth of the knowledge claim; justice to the person assessed; and the overall utility of the assessment exercise.

When teachers judge assessments by students, they interpret what they read using their professional judgement. The output of this judgement is an inference of what the students have achieved in the learning process, and the teacher will indicate the success or otherwise of the activity. A teacher will use their collective as well as individual resources, and these may include their [14]:

  • Personal knowledge of mathematics and the curriculum

  • Beliefs about the nature of mathematics, and how these relate to assessment

  • Expectations about how mathematics is communicated

  • Experience and impressions of students

  • Expectations of individual students

  • Linguistic skills and cultural background

  • Construction of knowledge about students’ mathematics


Issues may arise that give cause for concern in the assessment process and result in inequity, namely: inconsistent application of standards; systematic bias; and poorly designed tasks.

2.1.4. Application of standards

When different assessors assess the same work, it is necessary to apply standards in a consistent manner to remove any inequity. The determination of learned capacity by an individual assessor is affected by the resources that a particular teacher may bring to the assessment. A teacher may have more experience and better linguistic skills than partner teachers and arrive at concluding judgements using different approaches. In mathematics it is possible to arrive at the same conclusion even though the assessors have made different assessment judgements within the context of the assessment. Through shared training and rating, the potential for inequity may be reduced but it is still subject to the vagaries and nature of human judgement.

2.1.5. Systematic bias

Standard training programmes for the rating and assessment of students’ learned capacity address issues such as concept, procedure, and modelling. Assessment where the nature of the activity involves different cultures, races, languages, and socio-economic groupings may fall foul of inherent disadvantages. A student completing an assessment where the language of the assessment is not the student’s first language may experience inequity, and the assessor may not be in a position to take this factor into account. Differences in socio-economic grouping, through variable access to resources, may result in the failure by some students to participate fully or appear to participate fully, resulting in a lowering of teachers’ expectations. Race and culture may result in a devaluing of the performances of certain student minority groups due to the manner in which an assessment has been conducted. The reliability of the assessments may be beyond reproach, yet inequity may still exist due to the systematic manner in which an assessment is judged.

2.1.6. Poorly designed tasks

A pivot on which the assessment operates is the design of the task that a teacher wishes to conduct to determine whether the learning gap has been reduced and the student’s learned capacity has increased. The task must be viewed by all in a consistent manner, and the design must be in alignment with the knowledge sought by the teacher. If the task is not designed correctly then the determination of learning is not possible. The use of poor language within a question, leading to ambiguity, has the potential to result in a failure by students to address the true nature of the assessment. Poor design affects students and those conducting the assessment of the exercise. It may not be possible to compare the students’ mathematical approaches, practices and procedures in a consistent manner; the meaning of the assessment may be misinterpreted. Inequity may arise in situations where poor design causes ambiguity.

2.1.7. On-line or e-assessment

The onus is now on the student. The student must be the active learner – self-disciplined, motivated, and learning through discovery. [16]

Just like any other educational technology, the role of on-line technology is to facilitate teaching and promote learning. This raises the question as to how the assessor knows that the requirements for teaching and learning are being fulfilled, when instruction is delivered on-line.

E-assessment has developed within the technology enhanced learning locale to a more central position, with improved potential for interaction in the learning process [17]. The use of advanced techniques allows the assessment to be broken down into steps, which allows targeted feedback to take place [2]. The potential to embed a variety of assessment techniques generates interest among teachers, enabling students to check their levels of understanding across a range of topics. To maximize the efficacy of e-assessment, a set of parameters for engagement in the formative process was developed [18]:

  • Nature, frequency, role and function of feedback

  • Potential for self-regulation

  • Iteration

  • Scope for sharing outputs and ideas with peers

  • Focus on where the student is going

  • Length of the cycle

  • Potential for pedagogical modification

  • Scope for closing the gap

  • Contribution to future learning trajectories

  • Measurable attributes

Feedback for learning places the student at the centre of the educational process and promotes the notion that the student is his/her own agent for learning [19].

2.3. Learning environments in higher education

A vehicle employed to embed the student within the technology enabled learning environment in the Higher Education domain is the Virtual Learning Environment (VLE) or Learning Management System (LMS); another is the hosted website. In this project both VLEs and hosted websites are employed by the partners. Such systems may be open source or proprietary and are increasingly being used to host material, deliver assessments, and in some cases automatically grade and provide feedback to the student [2]. To expand the capability of these assessment systems, value may be added using tools supplied by externally hosted assessment systems.

Within the technology enabled, or mediated, learning environment it is important to maintain the concept of equity in assessment. Invalid or unfair judgements of students’ performances can have long lasting and far-reaching negative consequences for the students [14]. Teachers’ judgements can affect short-term behaviour towards a student and influence future interpretations of assessment. Teachers’ expectations of students may also contribute towards unfair decisions particularly so within systemic inequities such as access to internet off-campus [20, 21, 22]. Equity in assessment means that all students are given equal opportunities to reveal their achievements and that the instruments of assessment are not biased. The assessment process must be cognizant of cultural, behavioural and social contexts as unfamiliarity may introduce unintended barriers. Critical to this process of assessment, whether within or outside the technology environment, is the interpretation of the students’ performance.

Open-source or proprietary VLEs/LMSs are increasingly being used to host and deliver on-line assessments. Most incorporate their own versions of assessment and assessment types, which will typically include Multiple Choice Questions (MCQ), Blank Filling, Multiple Answer, Calculated Entry, Formula Entry, Short Sentence, Free Text, Hotspot, and Word List.
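The Calculated Entry type merits a brief illustration. The sketch below (Python; purely hypothetical and not tied to any particular VLE) shows the underlying mechanism: question parameters are randomized per student from a template, and the accepted answer is computed from that same template.

    # Hypothetical sketch of a "Calculated Entry" question: parameters are
    # randomized per student and the accepted answer is derived from them.
    import random

    def make_question(seed: int):
        rng = random.Random(seed)  # a per-student seed yields a stable variant
        a, b = rng.randint(2, 9), rng.randint(2, 9)
        text = f"Evaluate the derivative of {a}x^2 + {b}x at x = 1."
        answer = 2 * a + b  # d/dx (ax^2 + bx) = 2ax + b, which is 2a + b at x = 1
        return text, answer

    text, answer = make_question(seed=42)
    print(text, "->", answer)

Each student therefore receives a different but equivalent instance of the same question, while the system always knows the corresponding correct answer.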

Within the Higher Education domain, the dominant systems employed include Moodle, Blackboard, Desire2Learn, and Canvas [23]. Each has its own characteristics, but all support the hosting of documents, communication tools, basic assessment tools, grading tools and reporting tools. It is not within the remit of this report to describe each system in depth; rather, the lens will be solely on the supporting delivery and assessment components.

2.4. E-assessment of mathematics

According to Hunt, cited in [2], approximately 90% of question types used within VLEs are selected-response, such as MCQ. Selected-response systems overcome issues of data entry where students incorrectly enter an invalid or unrecognized parameter; this has been reported by students as a common barrier to progress [21].

Invalid entry is a particular issue in the assessment of mathematics using on-line systems when symbolic notation is required. Selected-response questioning brings a sense of objectivity to the task of assessment, but the assessor must be careful that the result cannot be deduced from the options provided [24]. It may be possible for students to guess the answers, or to use assessment-savvy, test-wise techniques. A major issue to be considered when using MCQ-type assessments is authenticity. Problems in real life are not solved through MCQs!

Authenticity of the assessment is fundamental to the learning process; combined with the requirement for objectivity, this means the e-assessment of mathematics should imitate the actions of the human assessor. One avenue employed to address these requirements is the use of Computer Algebra Systems (CAS), as illustrated below.
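To make this concrete, the following minimal sketch (Python with SymPy; the function and its marking logic are illustrative assumptions, not any partner system’s implementation) marks a free-form symbolic answer by algebraic equivalence rather than by string matching, which is the essential behaviour a CAS adds to e-assessment:

    # Sketch of CAS-style marking: accept any expression algebraically
    # equivalent to the model answer, and reject invalid syntax up front.
    from sympy import symbols, simplify
    from sympy.parsing.sympy_parser import parse_expr

    x = symbols("x")

    def is_equivalent(student_input: str, model_answer: str) -> bool:
        try:
            student = parse_expr(student_input)
        except Exception:
            return False  # invalid entry is rejected before marking
        return simplify(student - parse_expr(model_answer)) == 0

    # "(x + 1)**2" and "x**2 + 2*x + 1" are credited identically,
    # just as a human assessor would credit equivalent forms.
    print(is_equivalent("(x + 1)**2", "x**2 + 2*x + 1"))  # True
    print(is_equivalent("x**2 + 2*x", "x**2 + 2*x + 1"))  # False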

3. Needs assessment aims and objectives

The aims and objectives of the Needs Assessment were:

  • To gather data from partner institutions for the purposes of conducting a needs analysis

  • To report on on-line assessment systems

  • To conduct a pedagogical analysis for on-line mathematics in engineering

  • To provide a specification of needs

  • To provide intellectual input to the development of a pedagogical model for shared on-line teaching of mathematics

  • To identify barriers to learning in a shared teaching environment

The target audience of the Needs Assessment:

  1. Undergraduate students in engineering mathematics at higher educational institutions.

  2. Academic staff teaching and designing engineering mathematics in tertiary programs.

  3. Research academics in the areas of Technology Enhanced Learning and on-line learning.

  4. Academic staff at second level preparing students for third level study in engineering.

  5. Other educational institutions preparing engineering curricula.


Data was gathered from each partner institution for the purposes of conducting a needs analysis for a common shared on-line engineering mathematics pedagogy study leading to a shared 3-ECTS curriculum.

4. Methodology

This report is based on a review of current and available literature, surveys of higher education partners, and surveys and interviews of students. Student interviewees were selected on a convenience basis from the first-year undergraduate engineering student population. Surveys were issued on-line to students in each partner organization. A review was conducted of proprietary on-line mathematics assessment systems. The overall objectives were (i) to map key shared curriculum areas, (ii) determine the perceptions of students of on-line assessment, (iii) determine the veracity of the rhetoric used by academics in the design of engineering mathematics programmes, (iv) identify gaps in the current pedagogy, and (v) identify areas of convergence or divergence in the design and delivery of the engineering mathematics programmes. The first aim, using objectives (i), (ii), (iii) and (iv), is to generate an understanding of existing practices within, between and across the partners. The second aim, using objective (v), provides the narrative on the pedagogical research, design and modelling of sound engineering mathematics programmes in the first year of study in higher education.

The methodology was initiated by informing all partners about the meaning and objectives of a Needs Analysis and by providing guidance on Needs Analysis best practice.

4.1. Guide for educational needs assessment

All partners were given a short presentation on the types of question to be addressed within the analysis; the issues discussed were:

  • General Descriptive Data

  • Year 1 Curriculum

  • Expected and Intended Learning Outcomes

  • Assessment Mechanisms

  • Instructional Design

  • Skills/Competencies/Qualifications

  • Expected Mathematical/ICT/Literacy skills

  • Teacher Qualifications

  • Quanta – number of students/gender ratio/grading/etc.

  • Quality Assurance

  • Gap Analysis

  • Toolkits

4.2. Digital credentialing

The UNESCO [25] document “Digital Credentialing: Implications for the recognition of learning across borders” was uploaded and shared between the partners. This document was intended to aid understanding of how the 2030 Sustainable Development Goal 4, with its emphasis on inclusive, shared education offering opportunities for all, may benefit this collaboration. The document outlines issues of shared credentials and how the dividends of the digital economy may be maximized through sustained collaboration between partners.

4.3. Update database of student perceptions

A student survey was established using Google Forms; no IP tracking was used and all responses were gathered anonymously. Ethical approval for gathering the data was obtained from LYIT. The purpose of the questionnaire survey was to establish:

  • Geographical profile of student sample

  • Gender ratio

  • Experience of on-line assessment

  • Preparedness and barriers to on-line assessment

  • Mathematics Ability

  • Rewards for mathematics

4.4. Online questionnaire for expert group using a pro-forma document

A pro-forma document containing set questions for each partner was provided via Google Drive, and all partners submitted responses in an Excel spreadsheet to permit analysis. The issues to be addressed were:

A) Overview Data

  1. Country

  2. Name of Institution

  3. Name(s) of programme(s) being used for teaching purposes

  4. Credit Points Per Year (ECTS)

  5. Hours per Credit (ECTS)

  6. Member State name for Credits

B) Curriculum

1. Topics covered in Semester 1 Mathematics

a. Name

b. Length

c. Salient Mathematical Issues

2. Expected Learning Outcomes

a. Overall Mathematics Module

b. Topic level outcomes

3. Intended Learning Outcomes (if applicable)

4. Assessment Types

a. Diagnostic

b. Formative

c. Summative

5. Assessment Methods

a. Online automatic

b. Online manual

c. Off-line

6. Instructional Design

a. Fragmented

b. Holistic

C) Skills/Competencies/Qualifications

1. Student entry qualifications

a. Maximum, Minimum, Average

b. Second chance, Mature entry, Special Needs

2. Student entry requirements

a. Standard Student

b. Non-Standard Student

3. Expected student ICT skills

a. Mathematics ICT skills

b. General ICT skills

4. Expected student Mathematical skills

a. Mathematics Domain Specific skills

b. General Mathematics skills

5. Expected student Literacy skills

a. Language literacy

b. ICT literacy

c. Mathematics literacy

6. Minimum qualifications required for academic staff

a. Professional

b. Teaching

c. Other.

D) Quanta

1. Average number of students per programme

2. Gender ratio per programme

3. Average pass rate per programme

4. Pass mark/grade/percentage

5. Scoring

a. Pass/fail

b. Percentage

c. Grade

E) Quality Assurance

1. National Guidelines

2. Institutional Guidelines

3. Professional Body Accreditation/Guidelines

4. External Moderation

F) Gap Analysis

1. Identification of gaps within current practices

2. Are the gaps known to the target audience?

3. How big (serious) is the gap?

4. What motivation exists to minimise the gap?

5. What barriers exist to addressing the gap?

6. Is the gap an existing gap or an emergent gap?

7. How has the gap been identified?

8. How will changes in student behaviours be identified and measured?

G) Tools

1. Off-line tools

2. On-line tools

4.5. Update database of online assessment systems for engineering mathematics

An on-line literature study was conducted into proprietary systems for on-line assessment of mathematics using a Google Scholar search. Test systems were downloaded and tested with a view to reporting on the current state of on-line assessment – both proprietary systems and those contained within Virtual Learning Environments such as Moodle, Blackboard, Canvas and Desire2Learn.

4.6. Update database of EU projects engaged in ICT and mathematics

Using the EU project portal as the starting point, a search was conducted of known projects within the EU involving ICT and Mathematics. To reduce the volume of returns, the search was restricted to projects involving on-line assessment of mathematics within STEM and higher education.

4.7. Update database of pedagogy in assessment – online engineering mathematics

The database of pedagogy in assessment is a considerable body of literature. Reducing the target area to that of on-line assessment enables a more fruitful search to be conducted. The area of on-line assessment is a growing field as more academics strive to understand the nuances of assessment in mathematics using on-line technologies.

4.8. Student perceptions

The literature available on student perceptions is sparse when the student body of Universities of Applied Sciences or their equivalents is considered. The majority of the literature covers primary and secondary level students as well as University students. Less research has been conducted in the area of undergraduate engineering mathematics within Universities of Applied Sciences, Institutes of Technology and Polytechnics.

Student perceptions are analyzed using mixed methods of quantitative surveys and group interviews within an integrated convergent design. Data is analyzed using SPSS and Thematic Analysis.

4.9. Expert group consensus

The expert group method uses the expert knowledge of the partners to determine the salient issues, barriers, misunderstandings and rhetoric employed in the design of engineering mathematics programmes. Conducted by means of a gap analysis, the outputs will feed into the pedagogical model in IO3.

4.10. Pedagogical considerations for online assessment of engineering mathematics

The outputs from the expert group consensus aligned with those from the mixed methods analysis will form the basis for the overall reporting on the pedagogical model for sound on-line assessment of engineering mathematics.

5. Findings

5.1. Student Surveys

Figure 1. No prior on-line assessment experience

Comparison of students’ responses displayed startling differences between the countries involved when issues relating to Computer Based Testing (CBT) or on-line assessment were explored. In Ireland, 80% of students report no exposure to on-line assessment systems prior to entry to third-level education. By comparison, 80% of Estonian students do have such exposure, whilst approximately 40% of students in Poland, Portugal, Spain and Romania have prior experience.

Figure 2. Perceived preparedness and barriers to CBT

Students reported on their perceptions of preparedness prior to the use of on-line assessment at third level. The consensus is that they feel prepared in all partner organizations; however, many still perceive or experience barriers in relation to on-line assessment.

Figure 3. Perceived maths ability

Students’ perceptions of their mathematics ability before and after entry to third-level education were analyzed. The majority felt their abilities in mathematics were good, with Spain reporting the highest levels of ability. Most perceptions of ability remain constant, but it is noteworthy that by the second semester of first year Spanish students’ perceptions drop, perhaps due to the difference in level between high school and university.

Figure 4. Effort and reward in maths

Analysis of expectancy and reward suggests that the majority of students consider their effort is justly rewarded. Spanish students, however, appear to suggest that their work is not justly rewarded. This may be the reason why the perceptions of Spanish students regarding their current mathematics abilities have dropped.

5.2. Partner Shared Curricula

All of the partner universities have 60 ECTS/year programmes at level 6. The credit points for the mathematical topics in the first year of study vary for each partner institution from 10 to 28 ECTS (Table 1).

Table 1. ECTS overview in partner countries

The curriculum is quite diverse among the partners, but some common points exist. Table 2 presents the main topics present in the curriculum, sorted in descending order of frequency.

Table 2. Subjects present in the curricula of the partner universities

The student is expected to have general skills in applying the learned concepts in engineering and managerial sciences, as well as specific skills in applying more concrete concepts such as matrices and equations.

There is no diagnostic assessment currently in use in any of the partner universities. The formative assessment is done through observation (Estonia, Ireland), proposed step by step (Portugal), through activity at seminars (Romania) or through tutorials (Spain). Most summative assessments rely on a combination of assignments and a final test.

All partner countries use offline assessment methods and some use manual (Estonia, Spain) or automatic online assessment systems (Estonia, Ireland, Portugal).

None of the partner universities except Spain has a holistic instructional design. Those partners operating with a fragmented design use a prescribed design process with supporting but compartmentalized modules. The guidelines provided by national quality bodies and professional bodies allude to holistic design practices, however such practices are difficult to implement within conservative education environments.

Student entry requirements vary from partner to partner and are in accordance with national regulations. Usually students must pass a graduation test or national exam (baccalaureate). No domain-specific mathematics skills are expected on entry at any of the partner universities, though students in all partner universities are expected to have high-school level skills in mathematics; some institutions expect basic ICT skills such as Word, Excel and using the internet. All students are expected to have knowledge of the national language in every country, and they are not expected or required to know English. The teaching staff are required to have a master’s degree in mathematics or a similar field in most partner countries, except Romania, where a PhD is required for permanently appointed staff.

Of the study programs analyzed, most have on average 20-30 students. There are programs with fewer students on average, of about 10 students (Geodesy – Estonia, Electrical Engineering – Ireland), but also programs with up to 70 students (Spain).

The gender ratio is about 80% male to 20% female (Figure 5). The only exception is Portugal, which has a gender ratio close to 1 to 1.

Figure 5. Gender by country

*In the case of Portugal the average of three programs was reported for the year 2017-2018

The scoring system is different in each country. None of the partners uses a pass/fail scoring system; instead there is a mixture of percentage and grade systems. The average pass rate is about 60% across all countries, with great variation between them. Pass marks also differ by grading system, ranging from 40% to 51%.

All countries operate under quality assurance guidelines on a national and institutional level. These guidelines form the basis of all learning objectives and outcomes. Professional body accreditation guidelines also provide input to the design process in Ireland, Portugal and Spain. Professional body accreditation ensures that educational standards are met and harmonized to allow movement within the profession between participating countries.

5.3. Gap Analysis

The purpose of the Gap Analysis [5] is to support, foster and develop the collaborative, shared, innovative pedagogical model for teaching engineering mathematics. The Gap Analysis will guide the development of the pedagogical model based on the responses of the partners to a series of targeted questions. The questions were posted to all partners using a pro-forma spreadsheet on Google Drive.

5.3.1. Identification of gaps within current practices

The primary gap is that automated assessment systems only consider the product of the assessment. The ability to enter a correct answer whilst not demonstrating mastery or knowledge of concepts, applications or procedures may reward a student; however, the system is not in a position to honestly report that the student has met the learning outcomes. An assumption is made that students have the syntactic literacy needed to access on-line assessment systems and that all students have equality of access to the system. There is an insufficient spread of online assessment techniques. Further gaps are the reluctance of academics to accept the importance of eAssessment and the expectations placed on students’ digital and coding literacy. The gaps in expected knowledge of concepts, practices and procedures among students arriving from second-level education are a constant worry.

5.3.2. Are the gaps known to the target audience?

The target audience consists of students, academics and institutions. The gaps within current practices are not known to the students. Limited awareness of the gaps exists for the majority of academics. The higher education institutions are generally aware of gaps in equality of access. Some students are aware of gaps in their knowledge on entering higher education and know that this may affect their performance. Some academics are unwilling to address their gaps in eAssessment design.

5.3.3. How big (serious) is the gap?

The gap in quantity and quality of online assessments is being addressed and is considered minimal. Expectations of digital literacy are moderately serious and can introduce barriers to the learning experience for some students. Automated assessment issues are moderately serious if not attended to by academics; consideration should be given to partial credit, particularly in early transitional stages. Expectations of equality of access are a barrier for some students and may be cultural in nature. Coding literacy is required to provide solutions on some online assessment systems; concentration on coding instead of mathematics literacy may be disadvantageous. The academic transition to eAssessment is slow, and ensuring that correct models and practices are employed is problematic.

5.3.4. What motivation exists to minimize the gap?

The overall learner experience is paramount to retention on programmes. The experience may also affect psychographic components of the student profile. The reduction of the barriers or gaps may improve and address areas of contention. Learners are motivated by tasks that are doable: they will be motivated towards a course that provides an appropriate level of cognitive challenge but that they perceive as achievable. The relationship between the subject and how it relates to the specialism may motivate students. Motivation may also come from institutional support.

5.3.5. What barriers exist to addressing the gap?

The barriers identified include: good techniques not being available free of charge; students who may not agree to study the "logic of using tests" in addition to the subject; institutional attitudes; inflexibility and resistance of academics; access to technology; lack of technicians; recurrent financial restrictions and limited commitment of the institution; and a lack of self-awareness by students and academics of their abilities, knowledge and contribution in online environments.

5.3.6. Is the gap an existing gap or an emergent gap?

The issues currently exist and issues surrounding coding have been exposed through pilot study testing. The known gaps appear to be deepening and widening with time.

5.3.7. How has the gap been identified?

Interviews with academics, staff surveys, conversations with students and academic staff, benchmarking process of Institution, literature review, Focus Group discussion with students, student questionnaire, class observations.

5.3.8. How will changes in student behaviours be identified and measured?

Observation and analysis of online assessments, questionnaires, focus groups, academic interviews.

5.4. Analysis of Proprietary Systems

5.4.1. MAPLE TA

Maple TA (‘Maple T.A. - Online Assessment System for STEM Courses - Maplesoft’, 2019) is a proprietary, licensed, automated assessment tool aimed at mathematics, science and technology programmes of learning. Maple TA offers an interface that is more user-friendly than STACK’s; however, it suffers from many of the problems outlined later in the STACK review. The system is syntax-dependent and not very forgiving, requiring good fine-motor control of the mouse when accessing graphical and image-based questions. The software is browser-sensitive, particularly in relation to the Apple Safari browser when setting graph-based questions.

5.4.2. NUMBAS

NUMBAS is a free and open-source set of tools developed at Newcastle University (UK), with a focus on the e-assessment of mathematics. As with STACK and Maple TA, NUMBAS is SCORM-compliant and will operate within a SCORM-compliant VLE such as Blackboard or Moodle. The level of complexity offered within NUMBAS is not as extensive as within STACK or Maple TA; however, the level of technical support required for successful operation is consequently reduced in comparison.

5.4.3. STACK

STACK (System for Teaching and Assessment using a Computer Algebra Kernel) [17] is a development of the adaptive Moodle assessment approach; its design was influenced by similar systems such as AIM (based on Maple) and CABLE (based on Axiom). STACK was designed to address many of the shortcomings of assessment by avoiding the need for the assessor to enter details of every possible variation of an answer. STACK is available within Moodle and Ilias, providing feedback and question-monitoring functions. Feedback may be tailored to the question and to the level within the question; this type of facility is not available within the Blackboard assessment system. The underlying computer algebra technology is Maxima. The system was developed to address issues of plagiarism and to provide repeat opportunities, and it also allows students to work together in a vicarious learning environment. Student syntactic accuracy is supported through validation prior to final submission of the assessment. To use the system, students must gain knowledge of STACK syntax. The syntax is aimed at the questioning, but it is argued that this skill is necessary in a modern ICT environment. STACK questions may be designed to search for properties of answers rather than exact answers; students working together may produce different correct answers. Feedback may be designed to be provided based on historical responses to similar types of question.
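The property-based approach can be sketched outside STACK itself. The following illustration (Python with SymPy; STACK’s own question logic runs on the Maxima kernel, so this is an analogue rather than STACK code) tests a property of the answer, namely whether the student’s expression is an antiderivative of the integrand, so that different correct answers from different students all earn the marks:

    # Sketch of property-based marking in the STACK style: instead of comparing
    # against one exact answer, test whether the submission differentiates back
    # to the integrand, so any constant of integration is accepted.
    from sympy import symbols, diff, simplify
    from sympy.parsing.sympy_parser import parse_expr

    x = symbols("x")
    integrand = x  # question: find an antiderivative of x

    def is_antiderivative(student_input: str) -> bool:
        candidate = parse_expr(student_input)
        return simplify(diff(candidate, x) - integrand) == 0

    print(is_antiderivative("x**2/2"))      # True
    print(is_antiderivative("x**2/2 + 7"))  # True: constants of integration pass
    print(is_antiderivative("x**2"))        # False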

5.4.4. iSpring

An alternative to stand-alone proprietary software is iSpring. iSpring is not a computer algebra system and does not offer the levels of complexity of STACK, NUMBAS or Maple TA; however, it integrates directly with PowerPoint and offers powerful mathematical editing capability. Using an interface familiar to those with PowerPoint experience, iSpring provides a low learning curve with enhanced interactive capability to generate a variety of output formats such as HTML5 and MP4. The application is not regarded as syntax-heavy and should not place a high cognitive load on students when they interact with the interface.

5.4.5. Example of e-Assessment

An example of a multi-part question with discrete elements would be:

Find the equation of the line tangent to the curve y = x^3 – 2x^2 + x at the point x = 2.

There are three discrete steps that may be tested in this question:

  1. Differentiate x^3 – 2x^2 + x with respect to x.

  2. Evaluate the derivative at x = 2.

  3. Find the equation of the tangent line, leaving the answer in the form y = mx + c.

1. dy/dx = 3x^2 – 4x + 1

2. At x = 2: dy/dx = 3(2)^2 – 4(2) + 1 = 5, hence the gradient, m = 5

3. At x = 2: y = 2^3 – 2(2)^2 + 2 = 2

Using y – y1 = m(x – x1) where x1 = 2 and y1 = 2:

y – 2 = 5(x – 2)

y – 2 = 5x – 10

y = 5x – 8

Current CAS systems seek the answers to these discrete steps (here 3x^2 – 4x + 1, 5, and y = 5x – 8) to award the marks and allocate feedback as appropriate. To award those marks the system “assumes” that the process used to generate the results is correct; the process itself is not fully identified or tested within the assessment. This type of assessment breaks the question down into discrete elements, but in doing so it provides the student with clues as to the strategy required to solve the problem.
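A minimal sketch of this step-wise marking (Python with SymPy; the function and one-mark-per-step scheme are illustrative assumptions, not a partner system’s marking scheme) makes the limitation concrete: each step answer is checked independently, while the working between steps remains invisible to the marker.

    # Sketch of step-wise CAS marking for the tangent question above:
    # one mark per correct step answer; the reasoning between steps is not seen.
    from sympy import symbols, diff, simplify
    from sympy.parsing.sympy_parser import parse_expr

    x = symbols("x")
    curve = x**3 - 2*x**2 + x
    derivative = diff(curve, x)                    # 3*x**2 - 4*x + 1
    gradient = derivative.subs(x, 2)               # 5
    tangent = gradient*(x - 2) + curve.subs(x, 2)  # 5*x - 8

    def mark(answers: dict) -> int:
        steps = {"derivative": derivative, "gradient": gradient, "tangent": tangent}
        score = 0
        for name, expected in steps.items():
            if simplify(parse_expr(answers[name]) - expected) == 0:
                score += 1  # partial credit accrues per correct step
        return score

    print(mark({"derivative": "3*x**2 - 4*x + 1",
                "gradient": "5",
                "tangent": "5*x - 8"}))  # 3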

1) Determination of question types

All questions are based on a Computer Algebra System, requiring the student to submit a written answer.

2) Cost and complexity

The hidden costs lie in the support required to manage a CAS within an LMS, the time required to train staff in the development of questions and their subsequent testing, and the additional time required to train students in the correct use of the syntax. The underlying system is highly complex, and this is reflected in the manner in which questions are developed. The syntactic complexity is such that these systems may not be at an appropriate level for the students engaged in this research.

5.4.6. Comparison of Assessment Systems

Each assessment system is designed to provide access to a variety of assessment tools offering different degrees of complexity. NUMBAS and STACK do not involve any financial outlay to access or use them, whereas iSpring and Maple TA require licenses to be purchased – the costing is provided on a cloud or server basis and is dependent on the number of licenses required. Assessment systems are also available within the standard VLEs employed in higher education, e.g., Blackboard, Moodle, Canvas, Brightspace. Table 3 provides a comparative breakdown for each system:

Table 3. Comparison of Assessment Systems

The selection of on-line assessment system is dependent on the pedagogical model employed, availability of technical support, standalone or access to VLE, required expertise with coding and syntax, financial resources, degree of complexity required within the assessment, and whether partial credits are to be awarded.

6. Salient design considerations

6.1. Replication of teacher presence to support student persistence

What is student persistence? According to Hart [26] there is no agreed terminology within the literature in relation to addressing persistence, attrition and success for online learners. There are various interpretations; some definitions depend on retention and others on resilience.

Hart notes that the use of the term persistence in relation to post-primary education formed in the 1980s, and basically meant the opposite of attrition.

In the traditional face-to-face classroom environment, persistence is understood to mean a return to study after a specified period of time, i.e., a semester or year. Online programs have shown that persistence is more varied and involves a complex set of factors beyond knowledge alone. As a starting point, Hart provides a set of basic definitions (Figure 1) to distinguish between persistence and attrition, and between a persister and a non-persister.

Recent studies have attempted to reshape and categorise the substance of “persistence factors”.

The first study looks at persistence from a postgraduate perspective and the factors that influence a student’s ability to remain on a fully online program of study; it relied upon reflective student videos produced at the end of the program. The second study surveys a mixed cohort of undergraduate students and uses statistical methods to analyse the factors that influence student persistence. The second study is useful in that it uses a statistically analyzed survey methodology; this model could be adapted to survey the persistence factors for fully online study.

In “Persistence factors revealed: students’ reflections on completing a fully online program” [27], a study into student persistence, Yang et al. broke persistence into two main factors:

  1. Program factors: quality, perceived learning, relevance of the program to students’ aims, and institutional support, consisting of support for technical difficulties and access to library facilities. To improve the quality of a program, frequent program evaluation is recommended, as is ensuring that lecturers and instructors are familiar (cognizant) with the latest online literature and best practice in relation to student persistence.


  2. Personal factors, such as: satisfaction with online learning, sense of belonging to an online learning community, motivation, peer support, family support, time management skills and increased communication with the program instructor(s).

The study group consisted of 52 individuals selected from 500 students enrolled on a Masters in Educational Technology (MET) from 2010 to 2014. The authors analysed each of the students’ reflective videos produced for an end-of-program eportfolio. The noticeable and prominent points from each video were transcribed and analysed to identify factors which impacted students’ persistence. These factors were coded and classified into two main classes with subcategories (Table 1):

  1. Individual Attributes, and

  2. Program Attributes.

From analysis of their results, Yang et al [27] found the following:

  • The largest response (>= 90%) was to the program attributes “relevancy of program to individual or professional needs” and “satisfaction with courses, program, and learning outcomes”, which held more significance than the individual attributes.

  • Personal attribute factors did not occur as often as program attributes; however, more than two thirds of respondents indicated that “sense of accomplishment”, “mastery of specific skills”, “perceived utility of learning”, and “meeting career goals” were key to the students completing their online course.

  • Institutional support was important for students’ persistence, particularly interaction between online administration staff and students. The early creation of good relations between online instructors and students, and the building of support networks to help students resolve personal/professional issues and avoid dropout, are suggested as necessary.

  • The personal attribute “amount of time and effort in program” and the program attribute “program tied to job promotion or additional responsibility” appear to be approximately equal in influencing learner persistence on online programs.

The most significant factors to improve retention on online programs were:

  • to relate coursework to, and make it relevant for, the students’ professional practice,

  • to assist students in developing specific skills,

  • to assist students in appreciating the value of their learning,

  • to improve student satisfaction with courses, program, and learning outcomes.

All these factors can inform program administrators and designers as to where to place emphasis on weak areas. Continual monitoring and refinement of program material is also recommended. Early identification of students who may be at risk of non-completion, based on the above outcomes, is essential to bolster persistence. In line with this, it is noted [26] that if students do not possess a minimum quantity of persistence factors they are likely to withdraw from the online program.

In another study, “A Measure of College Student Persistence in the Sciences (PITS)”, Hanauer et al [28] focused on factors that determine whether undergraduate science students will persist in their course and progress into a career as a research scientist. They note that STEM undergraduates fail to persist not because they lack talent but because of the way courses are taught.

The development of rich, authentic, and engaging lab research experiences (course-based research experiences) tends to greatly enhance students’ persistence [28]. Their proposed model builds upon a synthesis of earlier developments, and they state that, at an elementary level, retention can be improved through greater use of course-based research experiences. Student surveys revealed thirty-six rating scales divided into six indicator categories:

  1. Ownership – content,

  2. Ownership – emotion,

  3. Science self-efficacy,

  4. Science identity,

  5. Scientific community, and

  6. Networking.

In relation to the work carried out in EngiMath, consistent themes of colour codes, visual clues, fonts and navigation structures were developed. The project is designed as an aid to maths learning: it is not intended to be used as a sole source of course material, but rather as a supplement to aid students who wish to study independently and as a revision aid. Several persistence factors identified in the current literature, e.g., family support, lie outside the scope of the EngiMath project. However, the EngiMath project strives to ensure that the persistence factors which do overlap with the project are fulfilled through the consistent and efficient design of the content, implementation of the course material across the project partners, and resulting feedback from the target learning community. To evaluate the effectiveness of the EngiMath project, surveys were conducted that take account of the project’s online nature.

6.2. Learning resources

Background

Technology enhanced learning environments are especially designed social and information spaces where formal, non-formal and informal learners [40] engage with learning materials, learning artefacts (e.g., learning tools/applications), co-learners, teachers, lecturers, instructors, trainers, experts, etc. [43]. The appearance of information and communication technologies for learning has moved the focus of learning design from just the learning materials and their sequencing to the learning environment as a whole. The result is that design is no longer concerned only with content, but with comprehensive learning environments, be they on-line, classroom or blended situations. Moreover, the role of different actors in these environments, distinct types of interaction, or learning behaviour can be the target of the learning design. Therefore, there is a move [43] from traditional instructional design and authoring tools that support instructors in the design of instruction, towards a wider understanding of design as a project encompassing the whole learning experience and environment.

Some European Approaches

In the Netherlands, the tradition has been more of a focus on “learning” than on “instruction”, shifting from instructional design to learning design. Kirschner & van Merriënboer [43] have expanded the concept of Four Components Instructional Design (4C/ID), from van Merriënboer [35], into a new approach to instruction and instructional design, presenting the concept of Ten Steps to Complex Learning: an approach to designing learning environments that breaks with the old-style compartmentalisation and fragmentation of traditional design approaches (see Fig. 1).

This approach will typically be used for developing substantial learning or training programs, ranging in length from several weeks to several years, or that entail a substantial part of a curriculum for the development of competencies or complex skills, both cognitive and psychomotor.

Practical applications of the Four Components Instructional Design model can be found around the world, with many books and articles on the model translated into several languages, including Chinese, Dutch, German, Korean, Portuguese, and Spanish.

In Denmark, Norway and Sweden there is a culture and tradition of design perspectives (theoretical as well as political and cultural) and a focus on designs for learning, in particular interactive and collaborative ones. Scandinavian education [38], likewise, has developed an extensive and democratic approach to learning, with a focus on aspects such as collaboration, creativity, inclusion and problem solving, as well as on learner participation and responsibility. A tight relationship between design and use is also recognised, where one is always designing for a future use situation [29]. Therefore [42], this has implications for the design, implementation, use, and evaluation of technology enhanced learning environments, whether a single artefact or an on-line, classroom, or blended learning space. In particular, this implies that when we look at a technology-rich learning environment we need to look at activity from both a design and a use perspective, as exemplified in Fig. 2.

It is pertinent to observe an example presented by Wasson [42], which describes the design of a learning scenario, VisArt, where students at three institutions, with different backgrounds (e.g., teacher education, psychology, computer science), collaborated through a groupware learning environment to collaboratively design a learning artefact. Despite its “antiquity”, Figure 3 shows the institutional, pedagogical, and technological design aspects that were taken into consideration, which remain relevant and worth mentioning today.

More recent work has focused on Teacher-Inquiry into Student Learning (TISL) and the development of the TISL Heart (see Fig. 4), a theory-practice model and method of teacher inquiry into student learning, which has a particular emphasis on the use of student learning, activity and assessment data for professional development and better student learning [31].

Motivated by the demand for new teacher-training models based on the twenty-first-century teaching professional, who designs learning environments that offer technology for better learning and who continuously learns from their teaching [44], the TISL Heart was the result of iterative development with teachers of a theory-practice model to support teacher inquiry into student learning, placing student data at the centre of the inquiry process.

Several tools to support Learning Design have also been developed all around Europe. These tools can be categorised as “reflection tools and pedagogical planners, authoring and sharing tools, repositories, and delivery tools” [37].

Some tools that implement the mentioned Learning Design approaches address new ideas, such as the Spanish tool Web Collage [41], which provides a graphical interface to help design the use of collaboration patterns/techniques such as the Jigsaw or the Pyramid; the University of Oxford LD tool Phoebe (http://www.phoebe.ox.ac.uk), which provides inspiration and practical support to those engaging in Learning Design; and the Greek-developed tool CADMOS [32], which supports the development of a conceptual model describing learning activities and corresponding learning resources/services, partnered with a flow model that describes the orchestration of the learning activities (see Fig. 5).

Research on the relationship between learning design and learning analytics has also been a focus of European research in recent years. For example, in their research at the Open University UK (OU), Toetenel and Rienties combine learning design and learning analytics, where learning design provides context to empirical data about OU courses, enabling the learning analytics to give insight into learning design decisions [39]. Another study [34] reveals that the use of appropriate learning analytics methods and techniques (see Fig. 6) is helpful for analysing which learning activities or tools were actually used by students in Moodle, and to what extent. Considering the importance of student engagement and the benefits of continuous assessment in higher education, as well as the impact of information and communications technology (ICT) on educational processes, it is important to integrate technology into continuous assessment practices. Since student engagement is connected to the quality of the student experience, increasing it is one way of enhancing quality in a higher education institution. That study showed that the use of several educational resources and a low-stakes continuous weekly e-assessment in Moodle had a positive influence on student engagement, learning outcomes and competence development.
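As an indication of how such a Moodle usage analysis might look in practice, the following minimal sketch (Python with pandas; the file name and column names follow a typical Moodle event-log export and are assumptions, not the cited study’s method) counts student events per activity component:

    # Minimal sketch of a Moodle usage analysis, assuming a standard event-log
    # CSV export; "moodle_event_log.csv" and its column names are hypothetical.
    import pandas as pd

    log = pd.read_csv("moodle_event_log.csv")

    # Keep only events generated through the web interface (typical student activity).
    student_events = log[log["Origin"] == "web"]

    # Count events per component (e.g., Quiz, Assignment, Forum, File) to see
    # which activities or tools were actually used, and to what extent.
    usage = (student_events
             .groupby("Component")
             .size()
             .sort_values(ascending=False))
    print(usage.head(10))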

Another relevant example is a workshop on Learning Design (LD), Teacher Inquiry, and Learning Analytics (LA) that was held at the 2013 Alpine Rendez-Vous in Villard-de-Lans, France [30] where researchers investigated the relationship between these three topics, resulting in the proposal of a model for Teacher-led Design Inquiry of Learning, an integrated model of teacher inquiry into student learning, learning design, and learning analytics, which aims to capture the essence of the synergy of the three fields and to support the development of a new model of educational practice, as illustrated in Fig. 7.

The literature on the design of learning environments, summarized in the proposals and examples presented above, guided and supported the work developed in EngiMath. Chromatic features, suitably sectioned content presentation, and an appropriate balance between building fundamentals and practical aspects were tested and retested through all the stages of the systematic instructional design model ADDIE [36].

6.3. Assessment

Assessment is a core issue for all involved in the educational process. Educational assessment can have different functions, involving different types and methods, which teachers use to ascertain whether students have properly gained knowledge of given material. Traditionally, teachers and students have been informed of the learning that has taken place through summative assessment: a test given at the end of a chapter or unit to measure what students understand and know from the instruction that has taken place. In 1967 Michael Scriven [77] wrote about an evaluation process whose goal is improvement, called this process formative, and thereby coined the term formative assessment. Since then, the term formative assessment has been used in educational settings for the practice of informing teachers about student learning while the learning is taking place. Formative assessment is also called assessment for learning, while summative assessment is also known as assessment of learning.

Assessment allows both instructor and student to monitor progress towards achieving learning objectives and can be approached in a variety of ways. Researchers [51, 76, 69] state that formative assessment helps students reach the highest levels of academic achievement.

6.3.1. Summative assessment (SA) techniques and strategies

Summative assessment (SA) is “information collected after instruction and is used to summarize how students have performed and to determine grades” [47]; it evaluates student learning, knowledge, proficiency, or success after an instructional period such as a unit, course, or program.

Because summative assessments are usually higher-stakes than formative assessments, it is especially important to ensure that the assessment aligns with the goals and expected outcomes of the instruction. There is a lot of high-quality research into the summative assessment of learning. Iannone and Jones [68] give an overview of the validity and reliability of the different scenarios of summative assessment of mathematical learning. There are many reviews (e.g. [82, 61, 45, 70]) that can be used to guide the planning, design, and deployment of exams, and monitor students’ performance. These papers, in addition to reviewing the criteria for quality assessment, examine best practices for aligning assessment with learning outcomes and compare general assessment methods.

6.3.2. Formative assessment (FA) techniques and strategies

In recent decades, researchers have stressed the need to develop assessment practices that not only measure what students have learned but also enhance learning. This has led to a shift in assessment practice from final assessment as the core activity to an emphasis on formative assessment. Formative assessment (FA) “is the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there” [48]. Formative assessment refers to tools that identify misconceptions, struggles, and learning gaps along the way and assess how to close those gaps. It includes effective tools for helping to shape learning, and can even bolster students’ ability to take ownership of their learning when they understand that the goal is to improve learning, not to apply final marks [81]. It can include students assessing themselves, peers, or even the instructor, through writing, quizzes, conversation, and more [80]. A variety of formative assessment techniques are described in the literature [46, 63]. Nicol & Macfarlane-Dick [74] presented seven principles that can guide instructor strategies. The book “Checking for understanding: formative assessment techniques for your classroom” [62] explores common formative assessment techniques and their importance for student learning, describing tips and tools for implementing techniques that work across different subject areas and grade levels, for individual students or across multiple classrooms. The book also proposes a formative assessment system consisting of three parts: learning objectives, feedback for students, and planning of students’ learning based on identified weaknesses or errors. Ideally, formative assessment strategies improve teaching and learning simultaneously. Instructors can help students grow as learners by actively encouraging them to self-assess their skills and knowledge retention, and by giving clear instructions and feedback.

According to Shute [78], formative feedback should be non-evaluative, supportive, timely, and specific. Shute also showed that several variables, such as individual characteristics of the learner and aspects of the task, must interact for formative feedback to succeed in promoting learning.

Research provides evidence that effective feedback from formative assessment improves student achievement [56, 83, 85], allowing students to adjust their academic performance to better achieve their learning goals. In contrast, feedback provided only after final grades is often too late and is found meaningless by learners [67]. Formative feedback should come while students are still aware of the learning outcomes and have time to act on the information. This could include returning a test or assignment the next day, or responding verbally and immediately to student errors or misconceptions.

Several studies [54, 49, 58, 59, 75] show that even summative assessment can be used as formative assessment, allowing students to repeat or revise their assignments and tests.

A rationale for formative assessment, combining a diverse set of techniques within the framework of pedagogical theories, is proposed by Black & Wiliam [52]. This work also identifies potentially fruitful directions for further developing new ways to help teachers make more effective use of formative practices. Using these practices, teachers can identify students’ strengths and eliminate weaknesses to achieve a higher level of student learning. While research suggests that general practices associated with formative assessment can facilitate learning, to maximize its benefits new developments should focus on conceptualizing well-defined processes and methodological approaches as part of an overarching framework in which all components work together to facilitate learning [50].

6.3.3. Formative-summative assessment (FSA) techniques and strategies

Both forms of assessment can vary across several dimensions [81]. For some time, formative assessment with an emphasis on student feedback has been promoted as a better practice than traditional summative assessment, and the practice of summative assessment has been widely criticized as standing apart from the learning process. Recently, discussion has reoriented towards the complementary characteristics of the formative and summative purposes of assessment.

The combination of formative and summative assessment was dubbed formative-summative assessment (FSA) by Wininger [84], and the FSA method has resulted in improved student knowledge. Glazer [64] argues that a combination of the two types of assessment is necessary so that instructors can provide formative assessment for learning and summative assessment to verify that the formative assessment is working appropriately. Examples from the ASSIST-ME project, carried out in 2013-2016, illustrate not only the diversity of assessment approaches and the relationship between formative and summative uses of assessment, but also how formative and summative assessment methods can be combined to ensure effective learning.

In the last decade, the idea of combining formative and summative assessments has grown [72]. Studies [86, 60, 66, 55] emphasize that the two types of assessment are complementary and summative assessment can be used to great effect in conjunction and alignment with formative assessment, and instructors can consider a variety of ways to combine these approaches.

6.4. Activities

The role of critical inquiry in teaching and learning mathematics is examined in [87], which proposes an approach to teaching as learning in practice, in which all participants in co-learning (students, teachers and educators) are learners. It encourages them to look critically at their own practices and to modify them through their own learning in practice. One way to achieve this goal is the joint participation of teachers and students in the creation, evaluation and improvement of e-learning materials, such as those created in the EngiMath project. Teachers and educators developing these types of courses learn to a large extent from their involvement by addressing emerging problems; they become learner-practitioners in inquiry processes that lead to the solution of important teaching problems. Students, in turn, through greater involvement in the learning process, have the opportunity to look critically at their own practices and to modify them through their own learning in practice. Students taking our course, primarily when working with the practical and assessment material, have to solve many problems and tasks on their own, and problem solving is a key component of critical adaptation in the learning process.

The focus is on mathematics teaching and learning through inquiry-based practice [87, 88]. While [87] concentrates on the teaching and learning of mathematics at school level, [88] develops this issue in relation to the university level. The extension to university mathematics carries over theories and methodologies that underpin research at school level, but also goes beyond that level to apply to higher-level practice. In both cases, inquiry-based practice refers to all aspects of teaching and learning practice in which research inquiry is a developmental component. As stated in [87, 88], problem solving, and more generally inquiry-based practice, is a key component of critical adaptation in learning: it stimulates thinking, increases students' interest in the subject and their appreciation of the role of mathematics in life, and raises both their motivation to learn mathematics and their success rate.

An important form of student activity while working with the course created as part of the EngiMath project was solving assessment tests. Properly selected methods of assessing the achievement of the intended learning outcomes are an extremely important element of the teaching process, and should therefore be an integral part of any e-learning course [89-94]. The authors emphasize that the effectiveness and quality of e-learning depend on the criteria used and the choice of methods and approaches to assessment. For this assessment to be effective, it is necessary to determine what the student should know, be able to do and understand, and what skills are expected after completing the training. On the basis of such an analysis, a database of appropriately selected assessment tasks should be created, as was done for our course [90]. As emphasized in [90, 91, 94], it is very important to include feedback for students when creating such a database of tasks: individual feedback strengthens learning, while the lack of focused feedback makes students give up faster, undermining resilience and persistence. This approach is also in line with the concept of "learning-oriented assessment" described in [93]. Additionally, as with the theoretical and practical materials, participants in the EngiMath course could submit comments on the assessment tests. In this form of activity, too, the student is no longer just a passive recipient but an active participant in the online learning environment, as a critic and co-creator of its content; consequently, the learner plays a much more active role in the actual teaching/learning process. As emphasized by the authors of [94], this learning model helps to support students' critical thinking and qualitative abilities.

6.5. Course management and quality assurance

E-learning has become widely used at all levels of education, especially in the context of the COVID-19 outbreak. E-learning and blended learning (b-learning) have been used mostly in higher education [95] and may be an alternative to traditional education. As both e-learning and b-learning are increasing in popularity and use, it is important to evaluate their quality correctly. Although there is no consensus on the effectiveness of design and pedagogical approaches [96], several standards address quality in education. Peres et al. [95] identified six frameworks that are representative of international initiatives. Table 1 outlines the common themes of these frameworks, grouped into five main categories: institutional aspects, program and course design, media design, technology, and evaluation and review.

Institutional aspects look at the organization’s culture and the global elements that affect the learning process [95]. These include education and technology research focusing on what the institution should do to support teachers in implementing e/b-learning. Partnerships with other learning institutions should be created and should adhere to national and international legal requirements. Course and curriculum design should be handled by teams whose work is peer reviewed. Learning outcomes should be agreed upon and communicated to learners, and students should have access to the information needed to make well-informed decisions.

The course and program should be designed with the learning methods and objectives in mind, but also assessment, curriculum, and other aspects. The learning methods should consider experience levels and professional context, allow for personalization of the learning path, and encourage self-directed learning [97]. Learning objectives should be clearly stated, measurable and aligned with the course level and expected outcomes. Assessment should be both formative and summative and should be appropriate to the curriculum design [95]. Expectations should be clearly communicated, so that students know what they should do and how it will be measured. The curriculum should be designed so that it contributes to the stated outcomes [95]. The content should be organized in a logical manner with increasing complexity, and modules should build on each other [95]. As distance learning demands greater motivation on the student’s part, it is important to determine the student’s learning style and try to motivate them to participate actively in the learning process [95]. The workload for each module should be realistic and should provide opportunities for interaction between students [95]. The learning experience can increase its relevance to the student by being flexible and ensuring contextualization [98, 99]. The learning process should be monitored by the teacher [99], and assistance should be provided where necessary. Teachers should also act as tutors and use different means of interacting with students [98]. Materials and resources should be up to date, present a variety of perspectives [100], be presented in a clear manner and be adapted to the learner’s level.

When designing the media, accessibility should be considered: the course should contain audio and video alternatives [95] and provide guidance on usage. The course should be presented in a clean, attractive format with minimal distractions, which can be achieved by considering fonts, text and presentation [98]. It should be easy to navigate, well organized and aesthetically pleasing, and should give learners feedback on their progress [95]. Materials should be printable, neutral regarding issues of sex, ethnicity, age, etc. [97], easy to download, and contain media that comply with copyright laws [98].

The technical infrastructure should be chosen appropriately and support institutional objectives [95]. Learning tools (such as Learning Management Systems, LMS) should be in accordance with the technical infrastructure and adapted to both academic staff and learners. Proper security measures should be implemented to ensure the integrity and validity of information, allowing for system recovery in case of failure [98]. Technical assistance should be available for both staff and students.

The effectiveness of the system should be periodically assessed through feedback procedures [99]. The system should be continuously improved and should allow the implementation of improvement recommendations.

According to Idemudia, Adeola and Achebo [101], the effectiveness of an online system depends on:

  1. 24/7 access with support

  2. robust technology for easy delivery

  3. motivation and self-regulation

  4. perceived usefulness of online education

  5. constructive-informal-collaborative-learning

  6. the learning context

  7. learning tools

  8. perceived ease of use of tools for online education

  9. measurability and feedback mechanisms

  10. perceived familiarity of online education.

Uninterrupted access to learning resources has changed from being a convenience to an expectation of learners. Since the proliferation of the internet, continuous access to resources from different parts of the world has been possible, allowing students to adapt their learning schedules to their busy lives [101]. When the three support types identified by Lee et al. [8] (instructional, peer and systemic) are added, student satisfaction increases, enrolment attrition decreases, and students bond more easily because the experience approximates a face-to-face situation [101].

Technology has increased the speed at which we access and distribute information through faster networks and mobility; it has allowed teachers to develop more complex courses faster and has added a virtual social layer through social networks [101]. Creating and delivering a well-built course that is easily accessible to learners, and that allows them to interact with both the material and each other, increases learning efficacy.

Nevertheless, online courses shift responsibility from the teacher to the learner. The student must demonstrate self-efficacy, namely “the ability to effectively manage themselves, to perform tasks, and to achieve defined goals” [101, 103, 104]. Students must have the intrinsic motivation to go through the material provided and to overcome challenges they might face during their learning. Shen et al. found that student learning success depends on three types of self-efficacy: technological, in learning, and in social interaction [105]. Higher technological self-efficacy has been linked to higher academic achievement [106] and increases student satisfaction with online learning.

A successful online learning system needs to be perceived as useful. Students must feel that by using it they improve their academic or job performance, or that it adds value or comfort to their learning process [101].

For the system to be accepted easily by learners, they must perceive its usage as being free of effort. The course must be easy to navigate and use, and should require little mental and physical effort on the part of the learner [101].

Ease of use is also aided by familiarity with the system. The more familiar students are with the interface and symbols used, and with the usage and flow of information, the more likely they are to be satisfied with their experience. This familiarity stems from prior experience and interaction with online systems [101].

It is important to monitor the evolution of the learning process, track learners’ interaction with the system, and observe and address decreases in engagement or difficulties in understanding by employing measurement tools and mechanisms [101]. A feedback collection system allows course designers to identify limitations and improve the student experience.

A well-designed system must encourage curiosity, engagement and collaboration by mixing designs to address different learner types and through social and informal learning while minimizing the fear of learning [101].

Content should be delivered using a mix of media types to give students learning material suited to their learning styles [101]. Classical texts can be mixed with audio/video content, interactive media, games, simulators and so on.

Courses should be structured so as to allow a balance between learning and other activities such as work or family. The course structure and assessment can increase engagement and course completion by having an acceptable length, a flexible format, providing support, and using auto-graders that give quick feedback, as sketched below.
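As an illustration of such an auto-grader, the following minimal sketch checks a numeric answer against a tolerance and returns immediate feedback; the function name, tolerance and feedback wording are hypothetical, not EngiMath's actual implementation.

    # Hypothetical auto-grader: accept a numeric answer within a tolerance
    # and return immediate, formative-style feedback.
    def grade(student_answer: float, correct: float, tol: float = 1e-3) -> dict:
        ok = abs(student_answer - correct) <= tol
        feedback = ("Well done." if ok else
                    f"Not quite: the expected value is close to {correct:g}; "
                    "check each step of your calculation.")
        return {"correct": ok, "feedback": feedback}

    print(grade(0.5, 0.5))   # marked correct
    print(grade(0.48, 0.5))  # marked incorrect, with a hint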

A well-balanced course that uses interactive materials, provides flexibility and support may increase student engagement, reduce the drop-out rate and increase overall student satisfaction.

7. Constructivist teaching

There is now a large consensus amongst expert researchers on learning and the brain that we do not learn by passively receiving and then remembering what we are taught. Instead, learning involves actively constructing our own meanings: we invent our own concepts and ideas, linked to what we already know. This “meaning-making” theory of learning is called constructivism. When you have learned something, you have changed your brain physically. We notice this creative meaning-making process most when it goes wrong, for example in mistakes made by learners:

Teacher: ‘Is 19 a prime number?’

Student: ‘Yes’

Teacher: ‘Why?’

Student: ‘Because it’s odd.’

These genuine mistakes show meaning-making in practice. If students just remembered what they were told, they would not make such mistakes; they would either remember or not. Conceptual errors show that we make our own mental constructs; we don’t just remember other people’s.
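The failure of this construct is easy to make concrete. The sketch below (purely illustrative, not course material) compares the student's rule "odd means prime" with an actual primality test:

    # Trial-division primality test versus the student's "odd = prime" rule.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    student_rule = lambda n: n % 2 == 1  # the misconception

    for n in (9, 15, 19, 21):
        print(n, "odd:", student_rule(n), "prime:", is_prime(n))
    # 9, 15 and 21 are odd but not prime, so the construct fails;
    # 19 happens to be both, which is why the student's answer looked right.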

It is not just students who learn in this constructivist way; adults do too, and they interpret meaning in very different ways. If adults or students experience the same lesson, they come away having made very different constructs. It is therefore important to use teaching methods that:

1. require students to form constructs, that is, to form their own meaning or interpretation of the material being studied

2. allow the learner and the teacher to detect misconceptions, errors and omissions in learning and correct these.

Most students, and many teachers, have developed a transmission view of learning rather than a constructivist view, making constructivist learning difficult to inculcate. This is considered a real obstacle to effective teaching and learning by renowned theorists such as J. Bruner and Charles Desforges. On this view, the following assumptions are made:

  • Knowledge is stuff

  • The mind is a vessel

  • Learning is storing stuff

  • Acquisition only requires listening reasonably attentively, or even just being there.

  • Assessment is stock taking

  • Analogies for learning involve transfer rather than constructivism.

7.1. Meaningful teaching

Learning requires a stage where students process the information given to them. They need activities which require them to make personal sense of the material and so construct their own meanings. Research shows that learning activities requiring active student processing improve recall, are enjoyed more, and create deeper learning.

Meaning is personal and unique, and is built upon personal prior learning and experience, which differ from student to student. There is no single way to learn something, and a variety of tasks and experiences are required to meet individual needs. Learners need to think about parts and wholes at the same time, and to integrate topics; it is necessary to connect other learning to help make sense of the new learning. Skills such as higher-order reasoning need to be taught along with content, not separately. Learning is enhanced by challenge but weakened by threat: threats cause higher-order thinking skills to retract.

7.2. Constructivist teaching strategies

Learning should involve activities to process the new material, linking it to what the student already knows. Tasks should be authentic, set in a meaningful context, and related to the real world. They should not just involve repeating back facts as this causes ‘surface’ learning. A lack of authenticity leads to diminished motivation for learning.

As students’ learning will involve errors, tasks should offer opportunities for self-assessment, correction, peer discussion, teacher feedback and other ‘reality checks’. Full focus can only be maintained for about 20 to 25 minutes; short breaks and changes of focus help.

Higher-order questions, in the sense of Bloom’s taxonomy, require learners to construct their own conceptions of the new material. Learners cannot reason with material until they have conceptualized it, so questions that require reasoning force conceptualization.

8. Present, apply and review (PAR) as a model

The Present, Apply and Review model developed by Geoff Petty [87] is not a new model of teaching; however, the modification of the model for e-learning is novel.

8.1. Present

The learner is presented with the new knowledge, concepts, skills, theories, explanations, etc. In a constructivist teaching environment, the teacher will not just explain but will support the learner to construct their own explanations by linking to prior knowledge and experiences. Typically, such support would involve talk, video, ICT, reading and peer explanation in a standard didactic approach, or self-discovery, jigsaw and Socratic questioning if a constructivist approach is used.

Abstract ideas are illustrated with concrete examples. Skills are demonstrated where both process and product are stressed.

The teacher will assess progress by checking learners’ work, asking questions, giving quick quizzes, etc.

8.2. Apply

The learners are provided with tasks that require them to apply the knowledge, theory, skills, etc. that were just presented to them; this is also known as learning by doing. The learner will engage in problem solving, logical reasoning, decision making and evaluation. This process strengthens the linkages with prior learning and experiences, thus deepening the learning. It involves the creation of a new form to solve the problem, which may be a mathematical solution, a mind map, a presentation of results, a poster, etc.

During the application process learners will make mistakes, leave out important information, and possibly skip stages in the analysis and evaluation. At this point the teacher observes learner behaviour, checks and corrects work in progress, gives praise and encouragement, and supports those needing extra help.

Learning skills are developed through peer marking/explanations, group discussion, case study, exercises, questions, worksheets, decision making, presentation of work, evaluation of exemplars, etc.

The present and apply stages may become intertwined, as seen in the Jigsaw technique.

8.3. Review

In this phase the learning is summarised, and any issues clarified. The teacher will emphasise the key points in the learning at the beginning and end of lessons.

Linkages are created and discussed with other learning experiences. Learners are expected to recall knowledge and confirm their understanding of the concepts. Deficiencies will be addressed and corrected to strengthen the learning experience.

Strategies to support the review of learning include additional questioning, summarising salient conceptual or procedural points, linking with further learning, peer explanations, quizzes and tests.

9. Modification of PAR for engaging online teaching

EngiMath has adapted the PAR (Present, Apply, Review) model for online learning. Instead of a typical beginning-middle-end lesson plan, PAR structures the lesson by presenting new material, allowing the student to apply the learning, and following this with a review of the learning. This structure may be used several times within a lesson to maximize the learning potential. Each phase of the model depends on the others, and attention must be given to each so that they support one another: the PAR model is like a three-legged stool, and if one leg fails, the stool fails. Within EngiMath, the model has been extended with an element of Teacher Presence to support online learners.

It is not possible for learners to concentrate on a specific lesson task 100% of the time; as the lesson progresses, the concentration span decreases. This factor is critical in the design of online learning units, where the teacher is not present. It is necessary to break the lesson into constituent components related to time and complexity of task: simple lesson tasks may require little explanation to support self-discovery, whereas more complex tasks require greater explanation and demonstration of procedures. The EngiMath researchers carefully explored the complexity of tasks and allocated resources accordingly. Learners are supported through feedback from the formative assessment procedures built into the lesson design. The boundaries between formative and summative assessment may not always be fixed, leading to a formative-summative design approach; this depends on the curriculum within which the lessons are situated.

To support the need for authenticity in learning, a specific information lesson was developed. This lesson describes applications of the lessons at easy, medium and hard levels of difficulty, relating to real-world problems.

The lesson begins with an introduction to the application of the theory in practice with supporting links if learners wish to explore further.

The learner is then provided with a choice of application complexity and, if they wish, a domain-specific focus.

Following the introductory lesson, the learner may progress through the material using a linear or non-linear approach. It is necessary, however, to ensure that the learner has the requisite skills, knowledge and competencies to progress to each stage. Hence, a set of formative-summative exercises is presented to help determine the learning and provide valuable feedback and support where necessary.

The first lesson introduces the basic concepts and builds upon them. At the end of the lesson the learner is offered an opportunity to apply and review their learning progress.

A variety of question types are presented to the learner. This process is purely formative and is aimed at providing scaffolding and encouragement.

Additional question blocks are designed to be accessed after 7 or 8 lessons to determine if the learner has gained sufficient skills, knowledge and competency to progress to the next group of lessons.
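The gating implied by these question blocks can be expressed as a simple rule; the sketch below is a hypothetical illustration, since the report does not specify a threshold or an aggregation rule:

    # Hypothetical progression gate: unlock the next group of lessons only
    # if the mean score over the question block reaches a threshold.
    def may_progress(block_scores: list[float], threshold: float = 0.6) -> bool:
        if not block_scores:
            return False
        return sum(block_scores) / len(block_scores) >= threshold

    print(may_progress([0.8, 0.7, 0.5]))  # True  (mean is about 0.67)
    print(may_progress([0.4, 0.5, 0.6]))  # False (mean is 0.50)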

10. Analysis of the model for teaching

This section covers the statistical analysis of the surveys and feedback reports. Feedback was collected for the main project activities: the pilot and main course, the student competition and the multiplier events.

10.1. Student evaluation of the pilot course

The pilot course was created with the specific aim of testing and improving the existing materials. Students took part in the course and, after going through the theoretical and practical materials, were asked about their experience via a feedback form.

The form collected socio-demographic information and assessed the students’ interaction with the theory, practice and assessment materials. Most items in the questionnaire used 6-point or 7-point Likert scales; the items are listed in Table 1. Students were asked to report their level of agreement with certain aspects of the course. Open-ended questions were also included to collect free-form feedback from students.


Table 1. Questionnaire items

  • Country

  • Timestamp

  • Please choose the institution you are from

  • What is the name of the study program you are currently enrolled in?

  • Age

  • How much experience did you have with the course materials before starting the course?

  • Is the theory in the online course related to what has been taught in class?

  • Did you have difficulties in understanding the theoretical materials?

  • If you had difficulties, please describe them briefly below

  • Do you think that going through the theoretical material will help you with your final exam?

  • Do you think that going through the theoretical material will help you in practice (real life)?

  • How would you rate the volume of theoretical material that was provided?

  • Were the practical materials in accordance with the theory being taught (in the materials of the online course, not in class)?

  • Did you have difficulties in understanding the practical materials? (Were the questions clearly formulated?)

  • If you had difficulties, please describe them briefly below

  • Did you have enough time to complete the requested tasks?

  • Do you think that going through the practical material will help you with your final exam?

  • Do you think that going through the practical material will help you in practice (real life)?

  • How do you rate the quality of feedback after each question?

  • How do you rate the amount of feedback after each question?

  • How would you rate the volume of material that was provided?

  • Were the assessment materials in accordance with the theory being taught (in the theoretical materials of the online course, not in class)?

  • Did you have difficulties in understanding the assessment materials? (Were the questions clearly formulated?)

  • If you had difficulties, please describe them briefly below

  • Did you have enough time to complete the requested tasks?

  • Do you think that going through the assessment material will help you with your final exam?

  • Do you think that going through the assessment material will help you in practice (real life)?

  • How do you rate the quality of feedback after each question?

  • How would you rate the volume of material that was provided?

  • How much do you agree with the following aspects? [The theoretical materials were easy to use]

  • How much do you agree with the following aspects? [The practice materials were easy to use]

  • How much do you agree with the following aspects? [The assessment materials were easy to use]

  • How much do you agree with the following aspects? [I liked the graphics (images) in the theoretical materials]

  • How much do you agree with the following aspects? [There was too much text on the slides (theoretical material)]

  • How much do you agree with the following aspects? [The platform was intuitive and easy to use]

  • How much do you agree with the following aspects? [The course material had a logical flow]

  • How much do you agree with the following aspects? [It was easy to go back and forth through the material]

  • Did you understand what you needed to do in order to pass the course? (Was there enough information provided?)

  • How can the course be improved?


The questionnaire was translated into the partners’ native languages and distributed to all partner universities. The pilot courses ran between September 2019 and February 2020. At the time of analysis, only results from Estonia, Poland, Spain, Portugal and Romania were available. A total of 95 students answered the survey.

The distribution of students by study program and country can be found in the figure below. Most students were from technical degree programs, but students from other study programs were also involved.

Most students were young (aged between 18 and 29), with Romania showing the widest age spread. Most students had no prior experience with the course subject (44.2%), although students in Estonia and Romania reported a broader spread of experience levels.

Students were asked whether the theory was related to the material presented in class, and whether the practice exercises and assessments were related to the theory. Most students found the material strongly related to the class material; this was also true for the assessments and practice exercises, with 80% of students expressing high or very high agreement (ratings of 4 and 5).

Most students had no difficulties at all with the material: 83% of respondents said they had either no difficulties or very few (4 and 5 on the scale). Ten students did encounter difficulties, but each difficulty was specific, with no identifiable pattern.

Students considered that they had either just the right amount of time or too much time, both for solving the practice exercises and during the assessment: 60% of respondents said they had just the right amount of time or slightly more (3 and 4 on the scale), while 37% considered they had too much time.

Most respondents (85.6%) considered that the material would help them with the final exam (answering 4 or 5 on the scale), but 81.4% considered the material only moderately helpful for real-life situations (scores 2, 3 and 4 on the scale). Although they found it very useful for passing their exams, students perceived the material’s usefulness in practice more moderately.

The majority of respondents (87.4%) considered they had the right amount of material (values 2, 3 and 4 on the scale), and 74% considered the materials easy to use (values 4 and 5 on the scale).

Regarding the aesthetics of the materials, 71% of students liked the graphics in the theoretical material. The distribution of answers regarding the amount of text was fairly uniform: there was no agreement on whether there was too much text in the material. More than half (54%) found the platform intuitive, and 78% found that it had a logical flow. Most considered the platform easy to navigate (59%) and felt they had enough information (86%). Scores of 4 and 5 were counted as agreement.

Students were asked how the course could be improved. Their answers were analyzed by creating word clouds based on the frequency of the terms students used. Common words that add no value, called stopwords (e.g. the, a, at, of, he, she), were removed prior to the analysis. Most words showed appreciation for the material, but they also indicated a desire for more exercises and examples.
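The frequency analysis behind such word clouds can be sketched as follows; the answer strings and the abbreviated stopword list are illustrative, not the actual survey data:

    # Tokenize open-ended answers, drop stopwords, and count term frequencies;
    # these counts are what a word-cloud renderer visualizes.
    import re
    from collections import Counter

    answers = ["More exercises please", "The material is great",
               "More examples of the exercises"]
    stopwords = {"the", "a", "at", "of", "he", "she", "is", "please"}

    words = [w for a in answers
               for w in re.findall(r"[a-z]+", a.lower())
               if w not in stopwords]
    print(Counter(words).most_common(5))
    # [('more', 2), ('exercises', 2), ('material', 1), ('great', 1), ('examples', 1)]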

The results of the pilot course suggest that the materials are of high quality and that students had a pleasant experience interacting with them. Nevertheless, some lessons were learned about how to improve these materials for future courses.

10.2. Student evaluation of the main course

The main course was split into three modules: Module 1 consisted of lessons 1 to 14 and covered matrices, Module 2 of lessons 15 to 22 and covered determinants, and Module 3 of lessons 23 to 26 and covered systems of equations. Feedback was collected for each module and for the course as a whole. In total, 25 respondents gave feedback on the whole course and 21 on each module.

When asked how they would rate the overall quality of the course, all respondents answered very good or excellent. The theoretical materials were easy to use, with many examples presented in a clear, concise way. The exercises were explained step by step with many graphical representations and provided enough information to allow quick learning. The design was clean, with a logical flow, and the information was presented in a useful way. Students appreciated the ability to learn at their own pace. On the other hand, students would have liked more quizzes and examples, a summary at the end and/or beginning of each lesson, and more multimedia (audio/video) content. The material could be expanded to cover more topics and more complex examples.

Students found the information in the engineering applications interesting and the materials easy to use because of the way they were presented and their animations. They appreciated the practical applications and that the course showed how the information is applied in real life. Nevertheless, there were suggestions for improvement: students would enjoy more examples and a reduction of some of the text, and while some would like more explanations, others would prefer more complex subjects.

In the practical exercises, students appreciated the lack of a time restriction. Some found the exercises easy and well formulated, with a good amount of diversity, but some found that there were too many exercises. Overall, students appreciated the opportunity to practice what they had learned in the theoretical materials. Students found some mistakes in the practice exercises that need to be corrected, and felt that the material could be divided by level of difficulty. Some suggested improvements to the user experience, such as the ability to use the keyboard for navigation or having one task per page. Students found the feedback that provided the solution step by step useful, and suggested using it for all exercises.

The feedback for each module was similar to that for the course as a whole. Students enjoyed the interactive manner in which the theoretical materials were presented and the course navigation, but would like more examples. One thing that stood out was that students found the third module somewhat more difficult and were slightly less satisfied with it.

10.3. Student evaluation of the student competition

Three students from each partner university took part in the student competition. Twelve participating students and three teachers answered a feedback survey that was given to them at the end of the competition. The collected feedback was mostly descriptive and in the form of answers to open ended questions.

All respondents were either satisfied or very satisfied with all aspects of the competition: the event in general, the communication with the organizers, the challenges solved during the competition, and the unfolding of the event (median = 6 on a scale of 1 to 6 for each aspect).
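For transparency, the median statistic reported here can be reproduced as in the sketch below; the rating vector is invented for illustration, since the individual responses are not reproduced in this report:

    # Median of ordinal Likert ratings on a 1..6 scale (illustrative data).
    from statistics import median

    event_ratings = [6, 6, 5, 6, 6, 4, 6, 6, 5, 6, 6, 6]
    print(median(event_ratings))  # 6.0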

Students enjoyed working in an international setting and interacting with students from other universities; some even made new friends during the competition and learned about other universities and countries. The students liked the challenges and the communication during the event, and appreciated the format of the competition, the flexibility of the schedule, and the tools they were allowed to use to solve the challenges. The teachers appreciated the format of the competition and the well-thought-out content, and considered it very professionally organized. Although the participants considered the competition well organized, some improvements were suggested. The most frequent suggestion was that the competition be held face to face. The students would have wanted more information about each other and about the competition, either through the publication of short presentations or through icebreaking sessions. A few had problems with the communication channels, which were eventually resolved.

10.4. Participant evaluation of the Multiplier Event

Feedback was collected from the participants at the multiplier events. In total, 21 participants answered a short online survey after the end of the event. All respondents were teachers, most teaching mathematics.

Most respondents were over 30 years of age (85.7%) and had more than 11 years’ experience in the field of education. Most had an average level of familiarity with online pedagogy (median = 3 on a scale from 1 to 5). Most respondents found the event interesting (median = 4 on a scale from 1 to 5) and considered that they had learned something new (median = 4) and useful (median = 4). Participants liked the contents of the course; some even had ideas for implementing something similar in their own classes, and they found it interesting that a course can be run entirely online. Participants expressed a desire to use the presented materials in their own classes, and wished for longer, more interactive future events that would allow interaction with colleagues from other partner institutions.

11. Discussion and conclusions

Common issues exist for students in each institution. If addressed by institutions, these issues may offer opportunities to explore novel and innovative pedagogies in engineering mathematics. Students’ ICT preparedness for Higher Education is taken for granted by many academic designers, and the assumption of the digital native is one issue that poses a barrier to learning for students.

Cultural, language and social barriers exist that need to be accommodated in any engaging pedagogical paradigm. The common language of mathematics is assumed to be necessary for cross-cultural shared programmes; however, it is not sufficient. It is necessary to ensure that local support measures cater for local issues.

Automated assessment considers only the product of assessment, not the process. Smart assessment systems place significant cognitive loads on assessors, assessment designers and students. Traditional didactic processes dominate the current pedagogies offered by the partner institutions, and this is expected to continue.

The findings provide crucial information regarding priorities for the pedagogical design of the shared program materials. Differences in student expectations and abilities must be accommodated within an engaging paradigm. Academics have different priorities from students in the design and operation of learning and assessment materials. Course materials should offer access in the local language; sharing across language borders is not straightforward.

On-line assessment tools for mathematics exist; however, the cognitive load experienced by academics in setting up such tools is considerable. Syntax-heavy assessment tools place a burden on low- to middle-achieving mathematics students, and the pedagogical components within on-line assessment tools are not always visible. Equivalence to a human assessor is not yet possible in mathematics.

The partners are aware of the current limitations of mathematics-oriented software to provide an immersive interactive experience for students and academics. Working within the boundaries of mathematics and addressing social, language, cultural and national concerns within a shared, collaborative programme, the partners focused on student interaction within the learning environment as the foundation for the pedagogical model. The model of on-line mathematical pedagogy was developed within IO3 using this experience combined with the activities in IO2 to provide a paradigm that may be used by academics in the development of engaging and interactive educational experiences.

References

  1. McCawley, P. F. (2009). Methods for Conducting an Educational Needs Assessment, 24. Retrieved from http://www.edtech2.com/Methods%20for%20Conducting%20an%20Educational%20Needs%20Assessment.pdf

  2. Bigbee, J.L., Rainwater, J., & Butani, L. (2016). Use of a needs assessment in the development of an interprofessional faculty development program. Nurse Educator, Vol 41 (6) pp. 324-327.

  3. (2011). Needs Assessment: Tools and Techniques, A guide to Assessing Needs, Chapter 3. Retrieved from http://www.nwlink.com/~donclark/analysis/analysis.html

  4. (2011). Guide to conducting an educational needs assessment: beyond the literature review. Retrieved from https://www.janssentherapeutics-grants.com/sites/all/themes/ttg/assets/Needs%20Assessment%20Guide.pdf

  5. (2016). Methods of assessing learning needs: Gap analysis and assessing learning needs. Retrieved from http://distribute.cmetoronto.ca.s3.amazonaws.com/QuickTips/How-to-Conduct-a-Gap-Analysis.pdf

  6. Jordan, S. (2013). E-Assessment: Past, present and Future, New Directions Vol 9(1), pp 87-106, DOI 10.11120/ndir.2013.00009

  7. Gikandi, J.W., Morrow, D. & Davis, N.E., (2011). Online formative assessment in higher education: A review of the literature, Computers & Education, Vol 57, pp. 2333-2351

  8. Passmore, T., Brookshaw, L. & Butler, H., (2011). A flexible extensible online testing system for mathematics, Australasian Journal of Educational Technology, Vol 27(6), pp. 896-906

  9. Redecker, C. & Johannessen, O., (2013). Changing Assessment – towards a new assessment paradigm using ICT, European Journal of Education, Research, Development and Policy, Vol 48(1)

  10. Ras, E., Whitelock, D. & Kalz, M., (2015). The promise and potential of e-assessment for learning, In P. Reimann, S. Bull, M. Kickmeier-Rust, R. Vatrapu, & B. Wasson (Eds.), Measuring and Visualizing Learning in the Information-Rich Classroom, Oxford, pp. 21-40

  11. Marshalsey, L., & Sclater, M., (2018). Critical perspectives of technology-enhanced learning in relation to specialist Communication Design studio education within the UK and Australia, Research in Comparative and International Education, Vol 13(1), pp.92-116

  12. Presseisen, B.Z., & Kozulin, A., (1992). Mediated Learning--The Contributions of Vygotsky and Feuerstein in Theory and Practice, Retrieved 31/01/2019, https://eric.ed.gov/?id=ED347202

  13. Black, P., & Wiliam, D., (1998). Assessment and Classroom Learning, Assessment in Education: Principles, Policy & Practice, Vol 5(1), pp.7-74

  14. Morgan, C., & Watson, A., (2002). The Interpretative Nature of Teachers' Assessment of Students' Mathematics: Issues for Equity, Journal for Research in Mathematics Education, vol 33(2), pp.78-110

  15. Dearden, R.F., (1979). The Assessment of Learning, British Journal of Educational Studies, Vol 27(2), pp. 111-124

  16. Robles, M., & Braathen, S., (2002). Online Assessment Techniques, Delta Pi Epsilon Journal, Vol 44(1), pp. 39-50

  17. Butcher, P., Hunt, T., & Sangwin, C., (2013). Embedding and enhancing eAssessment in the leading open source VLE, Higher Education Academy.

  18. Pachler, N., Daly, C., Mor, Y., & Mellar, H., (2010). Formative e-assessment: practitioner cases, Computers & Education, Vol 54(3), pp.715-721, DOI 10.1016/j.compedu.2009.09.032

  19. Boud, D., & Molloy, E., (2013). Rethinking models of feedback for learning: the challenge of design, Assessment & Evaluation in Higher Education, Vol 38(6), pp.698-712, DOI 10.1080/02602938.2012.691462

  20. Brown, K., (2017). Myths, rhetoric and opportunities surrounding new teaching technologies: Engineering mathematics education, EDCrunch Conference, Yekaterinburg, Russia

  21. Brown, K., & Lally, V., (2017). It isn't adding up: The gap between the perceptions of engineering students and those held by lecturers in the first year of study of engineering, ICERI 2017, https://library.iated.org/view/BROWN2017ITI

  22. Brown, K., & Lally, V., (2018). Rhetorical relationships with students: A higher education case study of perceptions of online assessment in mathematics, Research in Comparative and International Education, Vol 13(1), pp.7-26, http://journals.sagepub.com/doi/10.1177/1745499918761938

  23. Brown, K., (2017). Virtual Learning Environments in higher education, Internal report, LYIT VLE working group (unpublished)

  24. Sangwin, C., (2012). Computer Aided Assessment Of mathematics Using STACK, 12th International Congress on Mathematical Education, South Korea, Proceedings of ICME

  25. Chakroun, B., & Keevy, J., (2018). Digital credentialing: implications for the recognition of learning across borders - UNESCO Digital Library, programme document retrieved 8/02/2019: https://unesdoc.unesco.org/ark:/48223/pf0000264428

  26. Hart, C. (2012). Factors associated with student persistence in an online program of study: A review of the literature. Journal of Interactive Online Learning, 11, 19–42. Retrieved from http://www.ncolr.org

  27. Yang,D., Baldwin, S., & Snelson, C. (2017). Persistence factors revealed: students’ reflections on completing a fully online program, Distance Education, 38:1, 23-36. Retrieved from https://doi.org/10.1080/01587919.2017.1299561

  28. Hanauer, D.,Graham, M.,J., Hatfull, G.,F. (2016). A Measure of College Student Persistence in the Sciences (PITS). CBE Life Sci Educ. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5132351/

  29. Bannon, L. J., & Bødker, S. (1989). Beyond the Interface: Encountering Artifacts in Use. DAIMI Report Series. https://doi.org/10.7146/dpb.v18i288.6666

  30. Emin-Martinez, V., Hansen, C., Rodriguez Triana, M. J., Wasson, B., Mor, Y., Dascalu, M., … Pernin, J.-P. (2014). Towards teacher-led design inquiry of learning. ELearning Papers.

  31. Hansen, C. J., & Wasson, B. (2016). Teacher inquiry into student learning: The TISL heart model and method for use in teachers’ professional development. Nordic Journal of Digital Literacy. https://doi.org/10.18261/issn.1891-943x-2016-01-02

  32. Katsamani, M., & Retalis, S. (2011). Making learning designs in layers: The CADMOS approach. In Proceedings of the IADIS International Conference e-Learning 2011, Part of the IADIS Multi Conference on Computer Science and Information Systems 2011, MCCSIS 2011.

  33. Kirschner, P., & van Merrienboer, J. J. G. (2012). Ten Steps to Complex Learning: A New Approach to Instruction and Instructional Design. In 21st Century Education: A Reference Handbook 21st century education: A reference handbook. https://doi.org/10.4135/9781412964012.n26

  34. Lopes, A. P., & Soares, F. (2018). Using Moodle Analytics for Continuous E-Assessment in a Financial Mathematics Course at Polytechnic of Porto. In ICERI2018 Proceedings. https://doi.org/10.21125/iceri.2018.2123

  35. van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component instructional design model for technical training. Englewood Cliffs, NJ: Educational Technology Publications.

  36. Moradmand, N., Datta, A., & Oakley, G. (2014). The Design and Implementation of an Educational Multimedia Mathematics Software: Using ADDIE to Guide Instructional System Design. The Journal of Applied Instructional Design.

  37. Persico, D., & Pozzi, F. (2015). Informing learning design with learning analytics to improve teacher inquiry. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12207

  38. Sørensen, B. H., Selander, S., Wasson, B., & Wennström, S. (2016). Designs for Learning – Taking a Step Forward. Designs for Learning. https://doi.org/10.16993/dfl.71

  39. Toetenel, L., & Rienties, B. (2016). Analysing 157 learning designs using learning analytic approaches as a means to evaluate the impact of pedagogical decision making. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12423

  40. Van Merriënboer, J. J. G., Kirschner, P. A., Paas, F., Sloep, P. B., & Caniëls, M. C. J. (2009). Towards an integrated approach for research on lifelong learning. Educational Technology Magazine.

  41. Villasclaras-Fernández, E. D., Asensio-Pérez, J. I., Hernández-Leo, D., Dimitriadis, Y., de la Fuente-Valentín, L., & Martínez-Monés, A. (2010). Implementing computer-interpretable CSCL scripts with embedded assessment: A pattern based design approach. In Techniques for Fostering Collaboration in Online Learning Communities: Theoretical and Practical Perspectives. https://doi.org/10.4018/978-1-61692-898-8.ch015

  42. Wasson, B. (2007). Design and use of collaborative network learning scenarios: The DoCTA experience. Educational Technology and Society.

  43. Wasson, B., & Kirschner, P. A. (2020). Learning Design: European Approaches. TechTrends. https://doi.org/10.1007/s11528-020-00498-0

  44. Wastiau, P. (2014). From face to face to online teacher professional development: Paving the way for new teacher training models? Nordic Journal of Digital Literacy. https://doi.org/10.18261/issn.1891-943x-2014-01-02

  45. Allen, D., & Tanner, K. (2006). Rubrics: tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE Life Sciences Education, 5(3), 197-203.

  46. Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

  47. Arends, R. (2012). Learning to teach. McGraw-Hill.

  48. ARG (2002). Assessment for Learning: 10 Principles. Available on the Assessment Reform Group website: www.assessment-reform-group.org.uk.

  49. Bailey, J., Little, C., Rigney, R., Thaler, A., Weiderman, K., & Yorkovich, B. (2010). Assessment 101: Assessment made easy for first-year teachers.

  50. Bennett, R. E. (2011). Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25. https://doi.org/10.1080/0969594X.2010.513678

  51. Black, P., and Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-44.

  52. Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.

  53. Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Buckingham: Open University Press.

  54. Boston, C. (2002). The concept of formative assessment. ERIC Digest. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation.

  55. Broadbent, J., Panadero, E., & Boud, D. (2018). Implementing summative assessment with a formative flavour: a case study in a large class. Assessment & Evaluation in Higher Education, 43, 307-322.

  56. Brookhart, S. M. (2008). How to give effective feedback to your students. Alexandria, VA: Association for Supervision and Curriculum Development (ASCD).

  57. Buchanan, E. A. (2004). Online assessment in higher education: Strategies to systematically evaluate student learning. In Distance Learning and University Effectiveness: Changing Educational Paradigms for Online Learning (pp. 163-176). Hershey, PA: Information Science Publishing.

  58. Clark, I. (2011). Formative assessment: Policy, perspectives and practice. Florida Journal of Educational Administration and Policy, 4(2), 158-180.

  59. Dirksen, D. J. (2011). Hitting the reset button: Using formative assessment to guide instruction. Phi Delta Kappan, 92(7), 26-31.

  60. Dixson, D. D., & Worrell, F. C. (2016). Formative and Summative Assessment in the Classroom. Theory Into Practice, 55, 153-159.

  61. Downing, S. M. (2002). Assessment of Knowledge with Written Test Forms. In G. R. Norman et al. (Eds.), International Handbook of Research in Medical Education (Springer International Handbooks of Education, Vol. 7). Dordrecht: Springer. Available: https://doi.org/10.1007/978-94-010-0462-6_25

  62. Fisher, D., and Frey, N. (2014). Checking for understanding: formative assessment techniques for your classroom. Alexandria, VA: ASCD.

  63. Gauntlett, N. (2007). Literature Review on Formative Assessment in Higher Education. Available: http://proiac.sites.uff.br/wp-content/uploads/sites/433/2018/08/feedback_assessment_higher_educ.pdf

  64. Glazer, N. (2014). Formative plus Summative Assessment in Large Undergraduate Courses: Why Both? The International Journal of Teaching and Learning in Higher Education, 26, 276-286.

  65. Greenstein, L. (2010). What teachers really need to know about formative assessment. Moorabbin, Vic.: Hawker Brownlow.

  66. Houston, D., & Thompson, J. (2017). Blending Formative and Summative Assessment in a Capstone Subject: "It's Not Your Tools, It's How You Use Them". Journal of University Teaching and Learning Practice, 14(2).

  67. Huxham, M. (2007). Fast and effective feedback: Are model answers the answer? Assessment and Evaluation in Higher Education, 32(6), 601-611. https://doi.org/10.1080/02602930601116946

  68. Iannone, P., & Jones, I. (2017). Special issue on summative assessment. Research in Mathematics Education, 19(2), 103-107. https://doi.org/10.1080/14794802.2017.1334578

  69. Klute, M., Apthorp, H., Harlacher, J., Reale, M. (2017). Formative assessment and elementary school student academic achievement: A review of the evidence. Regional Educational Laboratory Central, National Center for Education Evaluation and Regional Assistance. Available: https://files.eric.ed.gov/fulltext/ED572929.pdf

  70. Kibble, J. (2017). Best practices in summative assessment. Advances in Physiology Education, 41(1), 110-119.

  71. Knight, P. (2002). Summative Assessment in Higher Education: Practices in disarray. Studies in Higher Education, 27, 275-286.

  72. Looney, J. W. (2011). Integrating Formative and Summative Assessment: Progress Toward a Seamless System? (OECD Education Working Papers, No. 58). OECD Publishing. https://doi.org/10.1787/5kghx3kbl734-en

  73. McManus, S. (2008). Attributes of effective formative assessment. Washington, DC: Council for Chief State School Officers (CCSSO).

  74. Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

  75. Nolen, S. (2011). The role of educational systems in the link between formative assessment and motivation. Theory Into Practice, 50(4), 319-326.

  76. Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy & Practice, 5(1), 77-84.

  77. Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagné & M. Scriven (Eds.), Perspectives of Curriculum Evaluation (Vol. 1, pp. 39-83). Chicago, IL: Rand McNally.

  78. Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189.

  79. Taras, M. (2009). Summative assessment: the missing link for formative assessment. Journal of Further and Higher Education, 33, 57-69.

  80. Theall, M., & Franklin, J. L. (2010). Assessing Teaching Practices and Effectiveness for Formative Purposes. In K. J. Gillespie & D. L. Robertson (Eds.), A Guide to Faculty Development. San Francisco, CA: Jossey-Bass.

  81. Trumbull, E., & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco: WestEd.

  82. van der Vleuten, C. P. M. (1996). The assessment of professional competence: Developments, research and practical implications. Advances in Health Sciences Education, 1(1), 41-67.

  83. Wiliam, D. (2012). Feedback: Part of a System. Educational Leadership, 70(1), 31-34.

  84. Wininger, S. R. (2005). Using your tests to teach: Formative summative assessment. Teaching of Psychology, 32, 164–166.

  85. Wiggins, G. (2012). Seven keys to effective feedback. Educational Leadership, 70(1), 10-16.

  86. Zuiker, S., & Whitaker, J. R. (2014). Refining Inquiry with Multi-Form Assessment: Formative and summative assessment functions for flexible inquiry. International Journal of Science Education, 36, 1037-1059.

  87. Jaworski, B. (2006). Theory and Practice in Mathematics Teaching Development: Critical Inquiry as a Mode of Learning in Teaching. Journal of Mathematics Teacher Education, 9, 187-211. https://doi.org/10.1007/s10857-005-1223-z

  88. Jaworski, B. (2019). Inquiry-based practice in university mathematics teaching development. In D. Potari & O. Chapman (Eds.), International Handbook of Mathematics Teacher Education: Volume 1 - Knowledge, Beliefs, and Identity in Mathematics Teaching and Teaching Development (pp. 275-302). Brill | Sense.

  89. Dermo, J. (2009). e-Assessment and the student learning experience: A survey of student perceptions of e-assessment. British Journal of Educational Technology, 40, 203-214.

  90. Bocanet, V. I., Brown, K., Uukkivi, A., Soares, F., Lopes, A. P., Cellmer, A., Serrat, C., Feniser, C., Serdean, F. M., Safiulina, E., Kelly, G., Cymerman, J., Kierkosz, I., Sushch, V., Latõnina, M., Labanova, O., Bruguera, M. M., Pantazi, C., & Estela, M. R. (2021). Change in Gap Perception within Current Practices in Assessing Students Learning Mathematics. Sustainability, 13(8), 4495. https://doi.org/10.3390/su13084495

  91. Bescherer, C., Herding, D., Kortenkamp, U., Müller, W., & Zimmermann, M. (2012). E-learning tools with intelligent assessment and feedback for mathematics study. In S. Graf, F. Lin, Kinshuk, & R. McGreal (Eds.), Intelligent and adaptive learning systems: Technology enhanced support for learners and teachers (pp. 151-163). Hershey, PA: IGI Global. https://doi.org/10.4018/978-1-60960-842-2.ch010

  92. Toktarova, V. I. (2019). Assessing the efficiency of teaching mathematics in the e-learning environment. In Proceedings of the 6th International Conference on Education and Social Sciences (pp. 428-431).

  93. Challis, D. (2005). Committing to quality learning through adaptive online assessment. Assessment and Evaluation in Higher Education, 30, 519-527.

  94. Prakash, L. S., & Saini, D. K. (2012). E-assessment for e-learning. In 2012 IEEE International Conference on Engineering Education: Innovative Practices and Future Trends (AICERA) (pp. 1-6). https://doi.org/10.1109/AICERA.2012.6306696

  95. Peres, P., Lima, L., & Lima, V. (2014). B-learning quality: dimensions, criteria and pedagogical approach. FormaMente: Rivista internazionale di ricerca sul futuro digitale, (1-2/2014), 117.

  96. Amo, D. (2013, November). MOOCs: experimental approaches for quality in pedagogical and design fundamentals. In Proceedings of the First International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 219-223).

  97. EFQUEL. (2011). UNIQUe guidelines. Retrieved from http://unique.efquel.org/files/2012/09/UNIQUe_guidelines_2011.pdf

  98. EADTU. (2012). Quality Assessment for E-learning: a Benchmarking Approach (2nd ed.). European Association of Distance Teaching Universities.

  99. SEEQUEL. (2004). Quality Guide to the non-formal and informal Learning Processes. Scienter MENON Network.

  100. QM. (2011). Quality Matters Rubric Standards, 2011-2013 edition. Maryland Online, Inc. Retrieved from http://www.qmprogram.org/files/QM_Standards_2011-2013.pdf

  101. Idemudia, E. C., Adeola, O., & Achebo, N. (2019). The online educational model and drivers for online learning. International Journal of Business Information Systems, 32(2), 219-237.

  102. Lee, S. J., Srinivasan, S., Trail, T., Lewis, D., & Lopez, S. (2011). Examining the relationship among student perception of support, course satisfaction, and learning outcomes in online learning. The Internet and Higher Education, 14(3), 158-163. https://doi.org/10.1016/j.iheduc.2011.04.001

  103. Zimmerman, B. J., & Schunk, D. H. (Eds.). (2011). Handbook of Self-regulation of Learning and Performance. New York, NY: Taylor and Francis.

  104. Greene, J. A., & Azevedo, R. (2009). A macro-level analysis of SRL processes and their relations to the acquisition of a sophisticated mental model of a complex system. Contemporary Educational Psychology, 34(1), 18-29. https://doi.org/10.1016/j.cedpsych.2008.05.006

  105. Shen, D., Cho, M.-H., Tsai, C.-L., & Marra, R. (2013). Unpacking online learning experiences: Online learning self-efficacy and learning satisfaction. The Internet and Higher Education, 19, 10-17. https://doi.org/10.1016/j.iheduc.2013.04.001

  106. Ergul, H. (2004). Relationship between student characteristics and academic achievement in distance education and application on students of Anadolu University. Turkish Online Journal of Distance Education (TOJDE), 5(2), 81-90. Retrieved from http://tojde.anadolu.edu.tr/yonetim/icerik/makaleler/137-published.pdf

  107. Petty, G. (2004). Teaching Today (3rd ed.). Nelson Thornes. ISBN 0748785256.