Towards designing a summative assessment that tests transfer of learning
1- Project background
As detailed in the teaching experience section, I have been working as an associate professor in Arctic marine ecotoxicology for the past two years. In autumn 2016, I ran a new course in ecotoxicology (BIO-2012) for third-year bachelor students for the first time (the course is mandatory for students specializing in environmental management and optional for other biology students). This theoretical course, which consists of 32 hours of lectures and 10 hours of seminars, is validated by a 4-hour written exam. The 2017 cohort consisted of nine students: six Norwegian students enrolled in the environmental management bachelor programme and three Erasmus students. The course was therefore taught in English.
I have experienced this bachelor course as quite challenging for several reasons. The first challenge was undoubtedly that the course had only recently been implemented. The main challenge, however, was that I did not know the students' prior knowledge. Among bachelor students, there is a large variation in theoretical background, motivation and long-term expectations.
When I was asked to choose a development project for the pedagogy competency course, it naturally occurred to me that this course could be the perfect topic. More specifically, I decided to work on improving the written exam. In December 2016 (the first year that the course was run), the exam was very much based on repeating basic knowledge and largely lacked an assessment of the students' deeper understanding and of how they had appropriated the knowledge. In other words, I think that my exam failed to assess their long-term learning. I have therefore aimed my development project at designing a meaningful assessment that could enhance the students' learning and prepare them to apply their knowledge or expertise in different/unfamiliar contexts, also known as transfer of learning. As stated by Kulasegaram and Rangachari (2018), “teachers in physiology and the sciences should view assessments not only as moments to gauge the knowledge base of their students, but as opportunities to engage them in meaningful learning for uncertain futures”.
2- Steps of the project implementation
- Format of the exam
The exam I designed was mainly based on tables and figures, either presented during the lectures or extracted from relevant scientific papers, which the students had to interpret based on their knowledge.
The result was a 12-page exam with 23 individual questions, including two bonus questions at the end. The wording of the questions was in some cases much longer than the expected answers because it included all the information necessary to give a short but to-the-point answer. The length of the exam was also due to the space occupied by the many figures and tables. I knew that the students would not be happy about the length of the exam, but I decided against shortening it, as the length also tested their ability to manage their time, read instructions carefully and give short, focused answers.
The exam was written in English but the candidates were free to answer in Norwegian or another Scandinavian language. Out of the six Norwegian-speaking students, only one chose to answer in Norwegian.
- Preparation for the exam
In consultation with the students, I organized a seminar three weeks before the exam (and three weeks after the last lecture) that served as preparation for the exam. At the time of the seminar, my exam was ready, so I could already communicate some information to the students about the format, the length and the expectations. In addition to warning them about the unusual length of the exam, I clearly stated that the most relevant answers would be short but would require reading the questions/instructions carefully.
- Evaluation of the exam
The grading took the extensive length of the exam into account. The original grading scale was re-adjusted based on the students' answers (for example, where most or all students failed to provide a correct answer, the points were lowered because this might have indicated confusing instructions). I communicated the final grading scale to the external sensor, along with a set of sample answers. The last two bonus questions provided extra points. Before submitting the grades to the administration, I met with the external sensor to compare our marks and we agreed on the final grades (although our grades per question sometimes differed, the final marks were similar).
With 100% attendance and no student failing, the average grade for the class was a C, which I consider a good result. More specifically, the grades were distributed as follows:

3- The challenges
The main challenge in achieving my goal of designing an assessment for transfer of learning has been that I am not the only lecturer in the course, and it is sometimes difficult to evaluate what the specific outcomes of the other lectures were. I could only base the exam on the expected learning outcomes of the different lectures. I did send the exam to the other lecturers to get their opinion on how well my questions matched their lectures, but not all of them took the time to give me feedback, trusting my judgment instead.
Another challenge was that by the time I had to choose a development project, I had already finished teaching all my lectures. This might have created some misalignment between the content of some lectures and the final exam.
I tried to address these weaknesses during the preparation seminar, but most students had clearly not started preparing for the exam yet and therefore did not fully benefit from it. The students were mainly concerned about whether or not they should learn particular concepts and formulas by heart. As expected, they took a rather scholastic approach to the exam and did not seem to care much about the concept of long-term learning. Students in general, and bachelor students in particular, have a much shorter-term perspective, which largely consists of getting an acceptable grade on the exam.
4- Feedback from the students
During the preparation seminar (100% attendance), I informed the students that I would personally send them a questionnaire right after the exam so that they could give their opinion about the course and the exam. I stressed the importance of providing feedback in order to 1/ freely express their opinions, 2/ improve the course, and 3/ help me get a better sense of their expectations.
I prepared the questionnaire using Nettskjema and aimed to make it concise and relevant to both the students and me. I divided it into 5 main parts (1/ general evaluation of the course; 2/ evaluation of the lectures; 3/ evaluation of the colloquiums; 4/ evaluation of the written exam; 5/ general questions) with a total of 21 multiple-choice questions. For each section, the students could also leave comments and/or raise points not covered by my questions (the comment section was optional).
In addition to the expected feedback from the poll, I also voluntarily took part in the exam supervision to answer potential questions (only one student asked me about a typo) and to observe how students reacted to the exam. Despite my best efforts at stressing the expectations of the exam, I suspected that the students felt overwhelmed during the exam. My suspicions were later confirmed by the feedback they provided through the poll, although only 6 out of 9 students answered. Here you can read the results of the poll. Despite the normal distribution of grades, the students did not generally appreciate the format of the exam. As expected, all the students complained about the length of the exam. It was interesting to observe that most students (83.3%) estimated their workload as high, yet only a few of them (16.7%) judged their performance as very good. The section about the written exam is where the students wrote the most comments. I think that this was motivated by the fear of getting a bad grade, but also by the fact that they received the questionnaire right after the exam, so it was still very fresh and they needed to express their frustration. Kulasegaram and Rangachari (2018) observed that “students often tell their teachers that they have worked hard at their learning. The sad part is that they have not learned smartly, so what they have studied does not lend itself to the demands of the assessment.”
5- Feedback from my peers
- Feedback from the external sensor
I asked the external sensor (a researcher in the same field who has also been working as a lecturer in my department) to give me feedback about the exam. She found that the format of the exam provided a good means of testing the transfer of learning and how much the students understood of the curriculum. However, she also acknowledged that the exam was too long, which made it harder for the students to write in-depth answers.
- Feedback from colleagues at my department
On April 6, 2018, my colleague Jasmine Nahrgang, who is also taking the pedagogy competency course, and I invited a few relevant colleagues to a lunch seminar in order to present our development projects and get feedback from them. I found it very constructive to share my experience with them. They also agreed that the exam was probably too long, but they liked the idea behind the format and the intention. During our exchange on how to prepare the students for the exam, they suggested two forms of formative assessment. They first suggested using the same Kahoot quiz before and after each lecture, so that the students would pay more attention to the unfamiliar concepts during the lecture. They also made suggestions for group activities during the exam preparation seminar. The idea would be to provide small groups of students with sample questions that they would answer collectively in order to initiate discussions and/or debate within these groups. A debriefing for the whole class would follow these group activities.
In addition to providing encouragement and constructive comments, the seminar opened a new channel within the department and was a good demonstration of how we can benefit from sharing our teaching experiences in a friendly environment.
6- Lessons learned and issues to be addressed
Based on this experience, I will definitely keep the format of the exam but make it much shorter. In addition, in order to ensure that the assessment is well aligned with the stated objectives, the following issues will be addressed:
- I have already restructured the content of the course for next semester. The course will now include 30 hours of lectures and 20 hours of seminars. The seminars will give the students the opportunity to reflect on what they were taught during the lectures and will simultaneously familiarize them with the scientific skills required to succeed in the exam (e.g. how to interpret a figure, how to summarize results …). The transfer of learning will mainly happen through the formative assessments provided during the seminars. I think that providing more feedback on their learning through the seminars will be especially beneficial for the bachelor students, who need a lot of reassurance.
- I have also written more specific objectives for the course following Bloom's taxonomy (Anderson and Krathwohl, 2001), so that the assessment aligns well with the stated objectives.
- From next semester, I will teach most of the lectures and seminars myself (60% of the lectures and 80% of the seminars), and the rest will be taught by close UiT colleagues. This will definitely help me have better control over the whole course and follow the students' progress. In addition, I will dedicate more time to working on specific learning outcomes at the beginning of each lecture and seminar.
- Finally, in order to reinforce learning, I will offer exam feedback in the form of a seminar providing generic qualitative comments on the questions where students performed well and those where they performed poorly. As pointed out by Kvale (2007), “a common absence of feedback beyond a grade indicates a lack of reinforcement which may foster an attitude of futility of learning, conveying to the students an understanding of learning as something which is merely done for passing exams and obtaining good grades”.
Literature cited
Anderson, L. W. and Krathwohl, D. R., et al. (Eds.) (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Boston, MA: Allyn & Bacon (Pearson Education Group).
Kulasegaram, K. and Rangachari, P. K. (2018). Beyond “formative”: assessments to enrich student learning. Advances in Physiology Education, 42, 5-14.
Kvale, S. (2007). Contradictions of assessment for learning in higher education institutions. In D. Boud and N. Falchikov (Eds.), Rethinking Assessment in Higher Education: Learning for the Longer Term. London: Routledge, p. 65.