The COVID-19 crisis obliged us to postpone the Ethical Forum 2020.
It is now scheduled for Thursday, December 9, 2021.
In the meantime, a new initiative, the "Lunchtime Ethical Forum", was organized online on December 3, 2020:
a shorter discussion on a topical ethical subject.
More information: click here.
Thursday December 9, 2021, 2 - 6 pm (postponed from 2020)
19th Ethical Forum of the University Foundation
Fair exams in a mass university: can technology get us out of trouble?
Evaluating our students is among our most important tasks. It is needed to help them find out how well they have learned, to motivate them to put in sufficient work, and to certify which competences they possess at the end of their studies.
More than ever, however, evaluating our students is also among our most thankless tasks. While student numbers kept swelling, the number of teachers, and hence of evaluators, did not grow accordingly, and the time they could spend evaluating grew even less, as concern for the excellence of their institutions required many of them to make research their first priority.
Combined with the emergence of new technologies and with pressure to make examinations more “objective” in order to avoid time-consuming appeals by disappointed students, this led to a gradual yet massive shift in evaluation methods. With significant variations across disciplines, our universities moved from oral exams to written exams, from open questions to multiple-choice questionnaires, from handwritten tests in class to online exercises from home. Many find such trends regrettable, because they make the evaluation of students’ competences less fair and reliable, and because they gradually replace the collective intelligence of examination juries with the artificial intelligence of anonymous algorithms.
Is it nonetheless possible that a good use of technological innovations may enable our universities to better fulfil the various functions of evaluation — feedback, incentive, certification — despite the massification of their student populations?
Is it even possible that the development of learning analytics — the collection and analysis of large data sets for the sake of understanding and improving the learning process — may enable us to evaluate our students in less time, yet more fairly and more reliably than in the past?
What have we learned from the organization of exams during the COVID-19 crisis?