PLENARY 2 SYNTHESIS


The Plenary Conversations 2 of the UP System GE Conference on May 11, 2021, began with a presentation on the objectives and design of the UP System Program Evaluation by the Technical Working Group (TWG) on GE Evaluation. The basic objectives of the evaluation are: to determine and monitor how GE courses meet the expected student learning outcomes, to evaluate how the program has met its expected program outcomes, and to assess the implementation of the program.



The framework and preliminary findings of the evaluation, discussed by Dr. Fernando Paragas of the Department of Communication Research at the College of Mass Communication, UP Diliman, draw on the results of the student and faculty GE surveys. Before discussing the results, he laid down an overview of the aims, method, and trajectory of the different aspects of the evaluation (e.g., for students, faculty, and administration) to provide a bigger picture of the TWG’s plans.



The preliminary results, as explained by Dr. Paragas, could be used to further explore significant areas of the respondents’ GE experience:



For the Faculty:

· The difference in the way they teach their GE and non-GE courses;

· The relationship between the GE and non-GE courses they teach or how they perceive the GE curriculum in general;

· The meaning of “GE experience” for them and for their students;

· The support mechanisms needed by GE teachers; and

· The enjoyment they derive from teaching their GE subjects.



For the Students:

· How they perceive the delivery of the program;

· How they feel about the materials and assessment tools in their GE courses;

· How they view the complementarity of the GE and non-GE courses and the number of GE courses in relation to that of the non-GE courses;

· The relevance of the GE program in their lives; and

· The memorable aspects of their GE experience.



After the presentation, there was an open forum moderated by Prof. Farah Cunanan of the Department of Linguistics, College of Social Sciences and Philosophy, UP Diliman. Participants raised several questions and made suggestions during the discussion, all of which generally related to the objectives, framework, and research design of the evaluation.


Below are the questions raised by the participants:

1. Is there sufficiency of data? What about programs with few students?

2. How can a standard evaluation be achieved if the CUs have different enabling environments?

3. How will the evaluation “measure” nationalism and social justice as outcomes of the GE Program when these are not clearly defined?

4. Will the results be used to help students cope with the current learning challenges?

5. What are the implications of the survey for the SET?

· Will there be a possible revision of the SET and other forms of evaluation based on the evaluation results?

· Will the evaluation data be used for PBB?


In response to the question on the SET and its possible linking to the evaluation, the TWG on the SET explained the following:

· The latest version of the SET, rolled out last year, had modifications that were not meant to address issues of online teaching during the pandemic. The committee’s discussions on the SET in relation to “remote learning” were in the context of online learning in the Open University.

· The results of the latest version of the SET may be used by each CU for particular purposes (e.g., promotions, developing training workshops to address issues on communication).

· The validity of the latest SET, however, has not yet been determined, so its results must not be used for purposes other than those identified [by the TWG].



6. How do we mentor students on choosing their GE courses considering issues on mental health, and how do we ensure honor and excellence in the GE program given the setup of remote learning?


7. Could areas of concern be identified or an initial/preliminary conclusion based on the findings be provided?


Below are the suggestions made:

1. There is a need to strengthen and clarify the problem, hypothesis, parameters, and research design of the evaluation.

· The 30% response rate is a “standard,” but different industries have different standards; a review of related literature could help the committee set the standard response rate for the evaluation.

· Is what is being evaluated the GE experience or the students and faculty?

· When and where will the data of the evaluation be used?

· What will be the next step after the data yielded by the evaluation are analyzed?

2. A SWOT analysis could also be conducted to see the weaknesses of the GE program and carry out measures to improve it.

3. A cross-sectional or longitudinal analysis for a rich comparison of data across years and variables may be done.

4. The thematic analysis of the qualitative data must be validated.

5. There is a need to provide a description of the “GE experience” from the point of view of students and faculty across CUs.

6. Instead of an evaluation, a “climate survey” to determine how people experience GE at this point in time may yield better results.

· Results can be more focused;

· Analyses per CU and comparisons between freshmen and senior answers may be done; and

· Areas of concern may be identified from the perceptions of the respondents.



7. SET data on GE and non-GE courses may be mined instead of asking students to keep on filling out survey forms.

· Is it feasible to link the evaluation to student and faculty records?



There is a tendency for an evaluation like the one crafted by the TWG on the GE Program to be disaggregated. Comparisons might be “lost,” so the surveys conducted could be further limited and refined. The challenge is crunching the numbers in the right way. As a “climate evaluation,” the surveys are appreciated, but they should not be used to evaluate teachers because doing so will not lead to an evaluation of the program. An evaluation like this, however, is needed to encourage the faculty to keep thinking about the GE program.



The TWG addressed the questions raised and engaged with the comments and suggestions made by the participants. Below are the points raised by the TWG members.



On how the data of the evaluation/survey will be used

· Comparison of data based on variables will be possible, and a “testing” of the answers against the hypothesis will be done.

On sufficiency of data

· Ideally, there should be a 30% response rate, but we hope that when times are more stable, we will get a higher response rate.

· At the department level, other qualitative modes of evaluation may be deployed to complement the data of the TWG.



On achieving a standard evaluation considering the differences in the enabling environments of the CUs

· The TWG will give the different campuses/colleges/institutes the survey data so these may be used for their particular needs/aims.

· A workshop on how to use the data will be helpful.



On how to measure nationalism and social justice when these are not defined explicitly

· It is acknowledged that there is a problem with any evaluation which assumes that respondents interpret and understand concepts in the same way.

· To address the problem regarding assumptions, a mixed-method approach is needed. FGDs (focus group discussions) and qualitative interviews are also crucial in the evaluation.


On linking the evaluation survey to student and faculty records

· Linking should really be done, but issues of data privacy and data privacy protocols have always been a challenge for the TWG.


On how to maintain honor and excellence in the context of online learning and how to mentor students in their choice of GE courses

· The earlier GE conversations provided the participants with ideas on how to integrate values of honor and excellence in teaching.

· Students may be guided in their choice of GE courses through advising.


On the implications of the evaluation on promotions and benefits (e.g., PBB)

· We ought to be wary of evaluations that are tied to [faculty] benefits because evaluations are never perfect and should not be tied to [the teachers’] livelihood.



On giving a tentative conclusion

· It would be difficult to provide a conclusion, even a tentative one. The data are still raw.

· Some data are “immediately” useful (e.g., the faculty survey, which could make the faculty reflect on their teaching) compared to others (e.g., student survey results, which need to be fleshed out).

· The results/data have to be “cleaned” first for bi-variate and multi-variate analyses.

· We wish to request the different CUs and their colleges to encourage more participation in the surveys among their faculty and students so a more accurate interpretation could be arrived at. This will also result in a better analysis of data and their implications, and the formulation of stronger recommendations to improve the GE program.



The open forum was an opportunity for the TWG to listen to the faculty’s wish list as regards the GE Program. Attitudinal surveys like the KAVS (Knowledge, Attitudes, and Values Survey), which became the basis of the Pahinungod, will help the TWG formulate recommendations to improve the GE Program.



The conversations ended with a few more reminders regarding the nature of the evaluation (e.g., that the data derived from the surveys are not course-specific). We may use the evaluation on different sets of students at different times, and the variations in results will be accounted for by the different contexts (e.g., the pandemic) of the evaluation. A future FGD with the faculty will hopefully enable the TWG to determine what the faculty wish to know about the GE program. Moreover, if we continue conversations among the faculty such as those that took place during the open forum, the TWG will know what questions to ask and how to ask them. Evaluations are iterative and will be useful when related to the implementation of any program such as the GE.




Prepared by Prof. Ruth Jordana L. Pison