Introduction
After reading this LESSON, YOU MUST be able to:
determine the objectives of an institutional competency evaluation; and
identify the parts of an Institutional Competency Evaluation Tool.
Evaluation is a very significant element of the teaching-learning process. It is done to verify the acquisition of the knowledge, skills and attitudes gained from the training. As a trainer, you must know how to test or verify that the assessment criteria were addressed during the training.
Institutional Competency Evaluation
Institutional Competency Evaluation is the assessment of the knowledge, skills and attitudes acquired from the training. In CBT, evaluation is the systematic collection and analysis of data needed to make decisions whether a trainee is competent or not yet competent.
The Institutional Competency Evaluation is administered by the trainer within the training duration. Trainees should be evaluated after every competency. No trainee should be allowed to move to another competency without having been assessed. In CBT, assessments are usually given for the following purposes:
To validate the current competencies of trainees
To measure how much trainees have learned in the training sessions given
To help diagnose trainees’ problems and guide future instruction
To decide whether trainees are competent or not
The competency evaluation tool should be carefully developed so that it can assess the four dimensions of competency:
Task Skills
Task Management Skills
Job Role and Environment Management Skills
Contingency Management Skills
An analysis of the Modules of Instruction or the Competency Standards is critical in the preparation of the assessment tool. The performance criteria of the competency are the main basis for the competency assessment. Carefully examine your competency standards so that these criteria are included in the evidence to be gathered during assessment.
A good evaluation tool has five characteristics: reliability, validity, objectivity, discrimination, and ease of administration and scoring.
1. Reliability
This refers to the consistency of scores obtained by the same person when re-examined with the same test on a different occasion. A test is reliable if it measures consistently whatever it is intended to measure.
Factors that may affect Reliability
a) Length of the test – the longer the test, the higher the reliability.
b) Difficulty of the test – the bigger the spread of the scores the more reliable the measured difference is likely to be. Items should not be too easy or too difficult.
c) Objectivity – this is achieved if scores are independent of the subjective judgment of individual examiners.
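Factor (a), the link between test length and reliability, is commonly quantified with the Spearman-Brown prophecy formula. The formula is not named in the text, so treat the sketch below as a supplementary illustration:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test's length is multiplied by
    `length_factor` (e.g. 2.0 means twice as many comparable items)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Doubling a test whose reliability is 0.60 raises the predicted reliability.
print(round(spearman_brown(0.60, 2.0), 2))  # 0.75
```

Note that the gain assumes the added items are comparable in quality to the existing ones; padding a test with poor items does not improve reliability.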
To increase the reliability of the written test, we do item analysis, that is, we analyze the degree of difficulty and the index of discrimination of each test item. Well-constructed written test items should be neither too easy nor too difficult, and they should discriminate between those who learned and those who did not.
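The item analysis just described can be sketched as follows, assuming each response is scored 1 (correct) or 0 (incorrect) and using the common upper/lower 27% groups; the data and group fraction are illustrative, not from the text:

```python
def difficulty(item_scores):
    """Difficulty index p: the proportion of examinees who got the item right."""
    return sum(item_scores) / len(item_scores)

def discrimination(item_scores, total_scores, group_frac=0.27):
    """Discrimination index D: proportion correct in the upper group minus
    that in the lower group, the groups being the top/bottom `group_frac`
    of examinees ranked by total test score."""
    n = max(1, round(len(total_scores) * group_frac))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = ranked[:n], ranked[-n:]
    p_lower = sum(item_scores[i] for i in lower) / n
    p_upper = sum(item_scores[i] for i in upper) / n
    return p_upper - p_lower

# Illustrative data: one item's 0/1 scores for 10 examinees, and their totals.
item   = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
totals = [38, 35, 12, 30, 15, 33, 28, 10, 14, 36]
print(difficulty(item))              # 0.6
print(discrimination(item, totals))  # 1.0
```

A common rule of thumb keeps items with difficulty between roughly 0.3 and 0.7 and discrimination above about 0.3; very easy, very hard, or negatively discriminating items are revised or dropped.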
2. Validity
This is the degree to which the test actually measures what it purports to measure. It provides a direct check on how well the test fulfils its functions.
Factors that influence the validity of test:
a) Appropriateness of test items;
b) Directions;
c) Reading vocabulary and sentence structure;
d) Difficulty of items;
e) Construction of test items – no ambiguous or leading items;
f) Length of the test – sufficient length;
g) Arrangement of items – from easy to difficult; and
h) Patterns of answers – no discernible patterns.
To ensure the validity of the evaluation tool, prepare an Evidence Plan based on the CS. To increase the validity of the written test, you should prepare a table of specifications.
3. Objectivity
The test must be fair to all examinees.
4. Discrimination
It must distinguish the good examinees from the poor ones.
5. Ease of Administration and Scoring
The test must have the right length and level of sophistication to do the job.
After reading this LESSON, YOU MUST be able to:
explain the purpose of preparing an evidence plan;
determine the sources of the contents of the evidence plan;
identify methods appropriate for evaluating performance criteria.
One essential part of the Competency-Based Training Delivery is the institutional assessment. Assessment is the process of collecting evidence and making judgments on whether competency has been achieved. The purpose of assessment is to confirm that an individual can perform to the standards expected in the workplace as expressed in the relevant competency standards.
In this lesson you will learn how to prepare the evidence plan of your competency.
In developing evidence gathering tools for an institutional assessment, the first stage is to prepare an evidence plan. Evidence plans are designed to –
Serve as a planning tool
Support the assessment process
Assist with the collection of evidence
Inform the learners of what is expected of them before they begin the assessment
Serve as a guide for the trainer in determining the method of assessment to be used
In making an Evidence Plan you should have the Competency Standards (CS) of the chosen competency and the Evidence Plan Template.
Learning Objective:
After reading this LESSON, YOU MUST be able to:
define table of specification;
discuss the importance of preparing a table of specifications;
determine the parts of the table of specification; and
explain how the table specification is prepared.
The evidence plan is the plan for the institutional evaluation tool. After preparing it, we are ready to develop the other parts of the evaluation tool, such as the written test.
To ensure the validity of your test, you should prepare a table of specifications so that every content area to be tested is represented by questions.
In this lesson, you will learn how the table of specification is prepared.
The table of specifications is a table that shows what will be tested (and what was taught). For our purpose of institutional evaluation, we shall prepare a table of specifications for the written test. It will help us plan how many items we need to prepare to cover all the contents or objectives to be assessed based on the evidence plan you previously prepared.
A table of specifications is a two-way table that matches the objectives or content you have taught with the level at which you expect students to perform. It contains an estimate of the percentage of the test to be allocated to each topic at each level at which it is to be measured. In effect we have established how much emphasis to give to each objective or topic.
Parts of the Table of Specification
1. Objectives/Content/Topic – the contents or objectives to be tested, drawn from the evidence plan
2. Levels of learning – your questions shall be divided into the levels of learning: knowledge, comprehension and application.
o Factual / Knowledge – recognition and recall of facts
Example:
The figure 1 in the symbol E6013 signifies
A. Tensile strength
B. Welding position
C. Material thickness
D. Maximum weld length
o Comprehension - interprets, translates, summarizes or paraphrases given information
Example:
The megger is used to
A. Measure the amount of illumination
B. Determine the speed of electric motor
C. Measure the resistance of a lightning cable
D. Test the insulation resistance of a circuit
o Application - uses information in a situation different from the original learning context
Example
To measure the voltage of a circuit, you connect
A. A voltmeter across the line
B. An ammeter across the line
C. A voltmeter in series with the line
D. An ammeter in series with the line
3. Percentage/number of items
We also have to take into account the type of thinking skills we wish to assess. Whether you use Bloom's taxonomy or another structure, the levels of learning can help you identify the types of questions (or other type of assessment) that are appropriate. For ease of use we have used only three levels: knowledge (recall or recognition), comprehension (or understanding) and application (or skill), and labeled the columns accordingly. The important thing is to use levels of thinking that are relevant for your students and have been incorporated in your instruction. At this stage it can be helpful to mark an "x" or make a check mark in the cells to show the levels at which each objective will be measured, as shown in the example below.
At this point we recognize that 25% of our test is to be on knowledge, 35% on comprehension, and 40% on application. This does not mean that we must have 25 knowledge questions; it does mean that the score on the test will reflect comprehension and application in nearly equal amounts, and knowledge to a lesser degree than comprehension or application.
It may be that at this point you want to compare the test(s) provided by the textbook publisher with your completed table of specifications. If they match and you think the questions are well written, you may decide to use the test (or parts of the test) provided with the text. On the other hand, you may find that it will be necessary for you to create a test to provide an accurate assessment of what the students in your class have learned.
One question frequently asked is how many questions are needed to adequately sample the content representing an objective or topic. Increasing the number of questions increases the probability that we will have a good estimate of what the learner knows and can do. Translated into the number of items per topic, the Table of Specifications for a 40-item test may look like this:
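The arithmetic for translating level percentages into item counts can be sketched as follows; the 25/35/40 split comes from the discussion above, and the 40-item total matches this example:

```python
# Level weights from the table of specifications (the 25/35/40 split above).
levels = {"knowledge": 0.25, "comprehension": 0.35, "application": 0.40}
total_items = 40  # size of the test in this example

# Number of items to prepare per level of learning.
items_per_level = {level: round(total_items * weight)
                   for level, weight in levels.items()}

for level, count in items_per_level.items():
    print(f"{level}: {count} items")
# knowledge: 10 items
# comprehension: 14 items
# application: 16 items
```

The same division is then applied within each topic row, so that every topic is sampled at each level marked in the table.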
For purposes of validating the current competencies of the trainees or identifying mastered contents, item placement may be indicated in the Table of Specifications for easier analysis. At this point you also have to decide how many questions are needed to measure learning, what type of questions will be asked, and whether a written assessment is sufficient to measure the competency. In most cases, for skills training, a performance evaluation with an interview may be more appropriate as an assessment instrument, but the effectiveness of written assessment instruments may be harnessed through the ingenuity and skill of the trainer. If, however, the trainer decides on a performance evaluation, it should be reflected in the evidence plan.
Learning Objectives:
After reading this Lesson, you must be able to:
explain the advantage of preparing a reliable test item;
determine the type of test appropriate for testing knowledge contents;
enumerate guidelines in preparing a written test.
Evaluation of competency should assess knowledge, skills and attitudes. A written test is a method of assessment that can measure the knowledge, skills and attitudes learned in a training program, but trainers sometimes fail to develop questions that test the level of skills and attitudes.
In this lesson, we will discuss some tips and guidelines in preparing the written test. The written test that you will write after this lesson should follow the guidelines in preparing a test item.
In developing test items, always consider the five (5) characteristics of a good test – validity, reliability, objectivity, discrimination, and ease of administration and scoring. As in the construction of a workable and functional project in shop work, test construction should follow definite steps. In constructing a competency assessment instrument, the following steps are recommended:
1. Examine the established Training Regulations and determine your objectives. This will help in the analysis of the basic skills and knowledge requirements of the trade.
2. Construct the table of specifications. This will be your blueprint in constructing individual test items and will serve as a guide in preparing a set of competency assessment instruments for a certain trade.
3. Construct more test items than the number required for a set of Competency Assessment Instruments. This will facilitate item banking and give an allowance for correction when the test items are deliberated, since some items might be deleted.
4. Assemble the items for the test. After grouping the items by type, arrange them so that related items are together. The reason for this is obvious: it saves the examinee's time as the test is taken, and it makes it easier to point out where the examinee failed. In assembling items for the test, the table of specifications should be followed.
5. Write clear and concise directions for each type of question. The directions should tell the examinees what to do, how to do it and where to place their responses. They should also contain an example taken from the subject matter being tested.
6. Study every aspect of the assembled test. After the test is assembled and the directions are written, it is good practice to lay it aside for several days, then pick it up again and review each part critically. Consider each item from the point of view of the workers who will take the competency assessment. Try to spot ambiguous items. Check the grammar and be sure that the words used will be understood by the workers who will take the competency assessment.
The written test that we shall prepare as a part of the institutional assessment will largely measure the acquisition of knowledge. Skills and attitude shall be measured using performance tests with questioning.
Learning Objectives:
After reading this LESSON, YOU MUST be able to:
define performance evaluation;
differentiate the procedures of a Job Sheet from the instructions for demonstration in an institutional competency evaluation.
Evaluation of competency covers knowledge, skills and attitudes. To assess knowledge, we can use a written test, but to effectively assess the skills and attitudes acquired by the trainee in CBT, we should use a performance evaluation, which includes a demonstration of the skill and a follow-up interview.
In this lesson, the format and structure of the prescribed performance test shall be discussed to help you develop your own instructions for demonstration.
Performance evaluation is the formal determination of an individual's job-related competencies and their outcomes. It is accompanied by interview questions that are used during the actual conduct of the test to support the evidence gathered by the facilitator/trainer.
This is the practical portion of the competency assessment instrument. This part measures the skill possessed by the examinee in relation to the occupation. It consists of General and Specific Instructions, the List of Materials, Equipment/Tools and the Marking Sheets.
A. GENERAL INSTRUCTIONS
This refers to the overall conduct of the test (before, during and after), which concerns both the testing officer and the examinee. This part of the competency assessment specifies the dos and don'ts inside the testing area. It also states:
Performance or what must be done
The conditions or what is given
The standard of performance expected of the examinee
B. SPECIFIC INSTRUCTIONS
This provides the instructions which the examinee must follow in the performance of the test.
C. LIST OF MATERIALS, EQUIPMENT
This provides the list of materials, equipment and tools needed in the performance of the skills test, including the complete specifications of each item.
Pointers to follow in the construction/formulation of a good Test of Skills:
1. The test coverage must be consistent with the job description and skills requirements.
2. The test must not take more than 8 hours to complete.
3. The test statement must specify the exact time within which the examinee is expected to finish the task and the tools/equipment that will be issued to the examinee.
4. The work performance/specimen or whatever is being tested must be observable and measurable.
5. The test should be feasible. Do not design tests that make use of rare or overly expensive equipment.
6. Where applicable there must be a working drawing which is clear and accurate.
7. The standard performance outcome should, if possible, be stated, such as surface finish, clearance or tolerance, and the number of allowable errors.
8. Directions must be clear, simple, concise and accurate.
Learning Objectives:
After reading this LESSON, YOU MUST be able to:
determine the purpose of the questioning tool;
enumerate the types of questions that are in the questioning tool; and
explain how corroboration of evidence will be achieved using the questioning tool.
Corroboration of evidence should be achieved when gathering evidence of competency. When the evidence from the written test and the performance test is not enough to decide on the competency of a trainee, the questioning tool should be used.
In this lesson, we shall discuss the structure of the questioning tool so that it helps the trainer gather evidence of the knowledge, skills and attitudes, and the four dimensions of competency, needed for the competency being assessed.
The questioning tool is a must in an institutional competency evaluation package. It is used to verify evidence that was not clearly demonstrated through the other methods of assessment, such as the written test and the performance test.
The questioning tool should be able to evaluate the four dimensions of competency. To do this, your questioning tool should contain questions:
1. to follow-up the demonstration of task skills and task management skills.
All possible questions should be written here. Although the trainer is not required to ask questions about what was already observed during the demonstration of skills, write down every possible question so that they are ready for use.
2. to verify OHS practices.
Safety practices are a very important aspect of the demonstration. List down questions on safety practices related to the competency being assessed.
3. to verify Job Role and Environment management skills.
Questions that verify the responsibility of the worker towards customers, co-employees, the employer and the environment are very important, because this dimension of competency is not demonstrated in most demonstration tests and often needs to be verified through questioning.
4. to gather evidences for contingency management skills.
Infrequent events may arise on the job that require the worker to adjust. Construct contingency management questions to verify this dimension of the competency.
5. on knowledge of laws, rules and regulations.
Knowledge of laws, rules and regulations critical to the job should also be verified. Prepare questions to gather evidence for the specific competency.