11 Findings

Each SAIL Planning Cycle (Figure 1) incorporates opportunities for reflection, collaborative self-study, and debrief, the results of which inform improvements to subsequent SAIL Planning Cycles. Below is a description of the findings from the first two action research cycles, including strengths, challenges, and areas for improvement, as well as how the findings informed subsequent iterations of SAIL.

Pilot #1: 2020-21

In 2020-21, faculty members teaching ILO-approved courses engaged as co-researchers. A total of twelve faculty members chose to join the investigation and formed three ILO Pods: Social Responsibility, Lifelong Learning, and Critical Thinking and Investigation. Six disciplinary perspectives were represented in the first pilot: tourism management, sociology, education, cooperative education, social work, and communication and media. In addition, faculty from the disciplines of biology and nursing participated in the development of the rubric for Lifelong Learning to ensure that a diversity of disciplines was reflected in the development of each rubric. Table 6.1 details a summary of the findings from Pilot #1.

Table 6.1 Summary of Pilot #1 Findings
Findings
Strengths
  • Faculty valued the community of practice (“ILO Pod”) approach.
  • The peer-to-peer feedback was viewed as the greatest strength of SAIL.
  • Faculty enjoyed the collaborative, exploratory adventure with new colleagues.
  • SAIL fits well with the university’s collegial, teaching-focused culture.
  • The assessment ratings were perceived as valuable, with actionable results.
  • Faculty valued the interdisciplinary conversations and insights into different approaches to teaching an institutional learning outcome.
Challenges and Areas for Improvement
  • The pandemic and remote delivery of courses may have created barriers to trust-building between faculty and students, resulting in a low student consent rate.
  • Too few students consented to draw conclusions about student achievement of an institutional learning outcome.
  • The MS Teams platform was cumbersome.
  • There was a need for earlier assignment selection and indication of which categories (rows) applied to the course assignment and which categories were not applicable.
  • Timelines for submitting assessor ratings impacted the assignment that was selected.
  • A standardized institutional rubric may need to be more general or allow adaptation to disciplinary needs.

Two overarching recommendations resulted from Pilot #1:

  1. Postsecondary institutions could benefit from creating faculty development opportunities around interdisciplinary ILO Pods, or communities of practice, for each institutional learning outcome, with support from educational developers. Faculty members who teach ILO-approved courses should be encouraged to participate in the ILO Pods to foster peer-to-peer learning and support student learning.
  2. Faculty participating in an ILO Pod can use an institutional rubric, or an adapted rubric, to measure student achievement of an ILO in an ILO-approved course. Using the institutional rubric, two faculty members peer assess, compare ratings, and reflect on the results and act as appropriate to support student learning.

Prior to implementing these recommendations, we determined that it was necessary to engage in additional action research cycles that addressed the challenges posed in previous cycles.

Pilot #2: 2021-22

The second iteration of SAIL was piloted in 2021-22 and included two faculty-led ILO Pods aimed at assessing student achievement of Lifelong Learning and Social Responsibility during the Winter 2022 semester. Five disciplines were represented in the second study: social work, cooperative education, sociology, geography, and business. Several modifications were made during the second pilot as shown in Table 6.2.

Table 6.2 Adaptations to SAIL Planning Cycle
Adaptation | Pilot #1 | Pilot #2
Student Consent | Opt-in | Opt-out
Student Artifacts (assignments) | Anonymized | Presented as submitted
Platform | MS Teams | Moodle
Assignment Types | Predominantly essays and student reflections | A variety of assignment types, including essays, presentations, video-recorded PowerPoint presentations, round table discussion notes, portfolios, and visual diagrams
Rubrics | Developed and tested | Refined and tested
Timelines | December 2020 – May 2021 | December 2021 – July 2022 (two-month extension)
Course Delivery Mode | Remote (due to restrictions mandated by the Public Health Officer) | Blended and face-to-face (due to the return-to-campus mandated by the Public Health Officer and enforced by the university during the global pandemic)

Pilot #2 affirmed the value of the recommendations that resulted from Pilot #1. However, Pilot #2 also identified the need to delineate between tracking student progression and measuring achievement, and to consider the use of capstone course-embedded assignments as an additional measure, among other key considerations. Table 6.3 details a summary of the findings from Pilot #2.

Table 6.3 Summary of Pilot #2 Findings
Findings
Strengths
  • The institutional rubric can be used to assess multiple types of course assignments (e.g., essays, presentations, round table discussions, portfolios).
  • Faculty were inspired by their peers’ course assignments, which prompted them to consider alternative ways to assess student learning, as well as to challenge students to push their thinking further (e.g., how to move beyond strategy to action in relation to the Social Responsibility ILO).
  • The opt-out consent process was easier and significantly increased the rate of student participation (i.e., from 14.6 to 98.9 percent).
  • The rubric and assessment ratings can be used as a pedagogical tool to demonstrate to students what they have collectively accomplished and areas for further growth.
  • The institutional rubrics can be adopted and adapted in any classroom, by any faculty member.
  • The protected time and space to think more deeply about an assignment design was invaluable.
  • Faculty found it an enjoyable learning experience and are eager to continue conversations with new colleagues.
  • The Assessor Training was a useful professional development opportunity and has positive implications beyond SAIL.
  • The change to Moodle, with downloaded files, was more user-friendly than MS Teams for the assessment stage. However, faculty experienced challenges downloading assignments where there were nested sub-folders.
  • Using course assignments is valuable because the approach is organic, even if it captures only part of a course.
Challenges and Areas for Improvement
  • There was a time lag between the Assessor Training and the assessment of student artifacts. It may be valuable to combine these components into a full-day session, with opportunities to troubleshoot with the SAIL Coordinators.
  • The assessment results showed inconsistencies in how faculty were applying the rubric (i.e., concerns with inter-rater reliability).
  • Faculty craved more structured opportunities to discuss their peers’ assessment ratings and the patterns they saw when assessing students’ assignments.
  • Some faculty noted overlap in the descriptions of, and between, different categories in the rubric. Consider adding a step at the end of the SAIL Planning Cycle to refine the rubric.
  • The pacing of steps in the SAIL Planning Cycle, particularly the delay between the assessor training, assessor rating, and debrief was too long.
  • It is difficult to assess student achievement of an ILO using one course assignment. It may be valuable to assess all course assignments using the rubric.
  • The course report could be improved by including visuals (e.g., bar graphs or heat mapping applied to colour-coded tables).
  • Consider tracking and assessing the impact of SAIL on course redesign and resulting student outcomes.
  • Faculty noted overlap and similarities between some of the criteria (foci), which made it difficult to distinguish between them. Greater clarity between foci is needed to improve assessment and interpretation of the ratings.
  • Faculty perceived occasional alignment issues between the outcome rubrics (foci tools) and the student assignment rubrics.
  • Parts of assignments (e.g., discussion forums) were excluded because they could not be transferred.
  • Key opportunities to grow the SAIL project include more intentional alignment of assignments with ILOs and greater connection among faculty through in-person sessions and debriefs. In particular, faculty craved the opportunity to return to the student work and discuss it with the assessors, which would help them think through how they might improve the assignment.

Considering the resource-intensive nature of SAIL, we advise that the ILO Pods be offered on a staggered basis, with two ILOs assessed per action research cycle. Additional findings from the pilots suggest that the use of a standardized institutional rubric for measuring achievement of an ILO within an ILO-approved course shows promise. However, a standardized approach has limitations; therefore, we emphasize the importance of providing SAIL as an immersive, voluntary professional development opportunity within a suite of educational development programs, and strongly caution against implementing SAIL as a required process.
