3 Student Consent and Artifacts
SAIL explores the extent to which students’ achievement of institutional learning outcomes can be assessed through evaluation of course-embedded assignments. Each faculty member participating in SAIL identifies a relevant assignment from an ILO-approved course that they teach. This use of direct (e.g., Allen, 2008) and authentic task assessments and rubrics reflects established practices in the United States (e.g., NILOA, 2016; Nunley et al., 2011), though it is still relatively uncommon in Canadian contexts outside HEQCO- and OCAV-funded projects in one Canadian province (e.g., Simpler et al., 2018).
Thompson Rivers University’s eight ILOs are embedded in select three-credit academic courses, as well as in fourth-year capstone courses.
Courses are identified as meeting an ILO based on a substantive alignment between the ILO foci and the course description and course learning outcomes. As described under Shared Rubrics (Table 2.2), Foci Tools are used to help faculty determine whether a course meets the requirements for an ILO.
Each bachelor’s degree program has a program curriculum map that includes the ILO courses, a high impact practice course, and a capstone course, as part of the university’s general education model. The process of curriculum mapping provides a visual representation of the program curriculum, including how courses contribute to students’ learning, and facilitates course assignment and assessment design to support achievement of program and institutional learning outcomes. Therefore, course-embedded assignments are a logical source of data for assessing student achievement of ILOs.
Course-Embedded Assignment Selection
In each ILO Pod, faculty identified a relevant course-embedded assignment where students were likely to demonstrate the ILO rubric criteria. A variety of assignments were chosen, including written assignments, round-table discussion notes, video-recorded presentations using PowerPoint, posters, and visual diagrams.
To allow for adequate assessment of an ILO, it is pivotal that the assignment includes sufficient opportunities for students to demonstrate the rubric criteria. In some courses, demonstration of ILO criteria occurred across multiple assignments or assignment components. There were practical limits on the number of assignments that could be assessed; therefore, we relied on each ILO Pod to collectively decide assignment selection and interpretation of results.
To retain the ethical integrity of the SAIL research project and align with the six principles for learning outcomes and assessment (specifically, equitable and learner-centered), it was important to ensure informed consent from the study participants – the students. All efforts were made to avoid coercion of students and to protect them from any potential harm that might result from participating in the pilot.
The SAIL Coordinators consulted with Student Caucus, privacy, ethics, and the faculty co-investigators to collaboratively design the consent request. We also sought the university’s Research Ethics Board approval.
The results of SAIL are intended to inform strategic planning at the institutional, program, and course level and it is our hope that future students will benefit from the impact of SAIL.
Opt-in or Opt-out Student Consent
Within legislative and local privacy and ethical guidelines, students’ assignments can be accessed and assessed by faculty and institutions for program improvement via either an opt-out or opt-in model. Both approaches were trialed during the SAIL pilots: opt-in during Pilot #1 and opt-out during Pilot #2.
Following consultations with student representatives, departments, and privacy and ethics offices, and based on our experience during Pilot #2, we highly recommend the opt-out approach for anyone considering implementing a SAIL initiative.
In 2020-21, we piloted an opt-in consent process. Students enrolled in participating courses were invited to voluntarily consent to have one of their course assignments assessed by two faculty members who were not their course instructor. Student consent was sought within an ethics and privacy reviewed protocol to collect, anonymize, and assess one course assignment for the pilot project.
The opt-in survey was announced in students’ online learning management system (Moodle) and during class. SAIL Coordinators provided a brief presentation, upon request, in some of the classes. The 2021 Student Consent Form, which was distributed online through Survey Monkey, is available here: 2021 Student Consent Survey.
Notably, as courses were taught online during Pilot #1, the option of paper permission slips, which have had higher rates of submission and consent at another institution, was unavailable.
The SAIL Coordinators anonymized the student assignments by removing any identifying information (e.g., gender identity, sexual orientation, racial/ethnic identity, socioeconomic status, citizenship, place of birth) and replacing it with categories in square brackets (e.g., [gender identity] or [sexual orientation]). In addition, the students’ instructors did not know who consented.
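The bracketed-category redaction described above can be sketched in Python. The phrase-to-category mapping below is entirely hypothetical; in the pilot, the SAIL Coordinators identified and replaced identifying details manually.

```python
# Illustrative sketch of bracketed-category redaction, assuming a
# hypothetical mapping from identifying phrases to category labels.
# In SAIL Pilot #1, this step was performed manually by the Coordinators.
import re

REDACTIONS = {
    "she identifies as a woman": "[gender identity]",
    "an international student from Brazil": "[citizenship]",
}

def anonymize(text: str, redactions: dict) -> str:
    """Replace each identifying phrase with its bracketed category label."""
    for phrase, category in redactions.items():
        text = re.sub(re.escape(phrase), category, text, flags=re.IGNORECASE)
    return text

sample = "She identifies as a woman and is an international student from Brazil."
print(anonymize(sample, REDACTIONS))
# -> [gender identity] and is [citizenship].
```

A lookup table like this only catches phrases known in advance, which is one reason manual review by the Coordinators remained necessary.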
Consent Rate: The overall student consent rate was 14.6 percent (46 out of 316 enrolled students). Response rates ranged from 2.4 to 50 percent across the participating courses. Given the low consent rate, we were not able to draw conclusions about the degree of student achievement of an institutional learning outcome. Instead, we focused our attention on the efficacy of the SAIL process, particularly the community of practice approach.
In 2021-22, we piloted an opt-out consent process following consultation with key stakeholders and an amendment to the REB proposal. The opt-out process involved the inclusion of a collection notice in the course syllabus, as well as verbal notice from the course instructor and/or SAIL Coordinators, and an announcement in Moodle. A sample data collection notice is available here: Data Collection Notice (PDF).
To ensure tracking of opt-outs, we created a SAIL email address. The use of a shared SAIL email address also allows for record retention across time and across SAIL Coordinators.
In addition, we sought and received REB approval to remove the requirement to anonymize course assignments. This change was undertaken so that we could expand the type of course assignments for inclusion in the project based on feedback from Pilot #1. Specifically, removing the requirement to anonymize assignments allowed us to include video-recordings of student presentations. As a result, assessors had access to the course assignments as submitted, and thus to student names and other identifying information provided by the student in the assignment. However, students were not identified in any level of reporting (see Institutional Consultation and Reporting) and their course instructor did not know who consented.
Consent Rate: The overall student consent rate was 98.9 percent (196 out of 198 enrolled students). This highly representative sample allowed a random subset to be selected and assessed in all courses with more than 10 students, with reasonable confidence that students were well represented.
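The consent rates reported for the two pilots follow directly from the enrolment figures:

```python
# Consent rates for the two SAIL pilots, computed from the reported counts.
def consent_rate(consented: int, enrolled: int) -> float:
    """Return the consent rate as a percentage."""
    return 100 * consented / enrolled

pilot_1 = consent_rate(46, 316)   # opt-in, 2020-21  (~14.6%)
pilot_2 = consent_rate(196, 198)  # opt-out, 2021-22 (~99%)
print(f"Pilot #1: {pilot_1:.1f}%  Pilot #2: {pilot_2:.1f}%")
```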
Random Selection from Consented Artifacts
In all courses with more than 10 students, a random selection of students’ artifacts was assessed. In courses with fewer than 10 students, or in the case of Pilot #1 where fewer than 10 students opted in, all available student assignments were included in the assessment stage.
The use of random selection is intended to reduce sampling bias and produce a generalizable sample and a reasonable workload for faculty. General trends would be discernible for the cohort based on the random sample, as is done with the AAC&U VALUE Rubrics (e.g., Turbow & Evener, 2016).
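The selection rule above (sample in courses with more than 10 consented students, assess all artifacts otherwise) can be sketched as follows. The artifact identifiers and the per-course sample size are illustrative assumptions; the pilots did not specify a fixed subset size.

```python
# Sketch of the SAIL artifact-selection rule: courses with more than
# 10 consented students contribute a random subset; smaller courses
# contribute all artifacts. The sample_size of 10 is an assumption.
import random

def select_artifacts(artifact_ids: list, sample_size: int = 10) -> list:
    """Return the artifacts to assess for one course."""
    if len(artifact_ids) <= 10:
        return list(artifact_ids)          # assess everything
    return random.sample(artifact_ids, sample_size)  # random subset

random.seed(42)  # fixed seed so the selection is reproducible/auditable
course_a = [f"A-{i:03d}" for i in range(1, 31)]  # 30 consented artifacts
course_b = [f"B-{i:03d}" for i in range(1, 8)]   # 7 consented artifacts
print(len(select_artifacts(course_a)))  # -> 10
print(len(select_artifacts(course_b)))  # -> 7
```

Seeding the random generator is a design choice worth noting: it lets coordinators re-derive exactly which artifacts were sampled.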
Note that inter-rater reliability was not computed for Pilot #1 due to sample size and for Pilot #2 due to qualitative feedback strongly suggesting limitations in the ratings. Future iterations of SAIL are anticipated to consider this limitation. See the section on Institutional Consultation and Reporting for more information regarding inter-rater reliability and potential solutions for producing meaningful and educative aggregate reports of student achievement of institutional learning outcomes.
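Should future iterations compute inter-rater reliability as suggested above, Cohen’s kappa is one standard statistic for two raters scoring the same artifacts against rubric levels. A minimal sketch follows; the rubric scores are invented for illustration and do not come from the pilots.

```python
# Cohen's kappa for two raters: observed agreement corrected for the
# agreement expected by chance given each rater's marginal distribution.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Compute Cohen's kappa for two raters' scores on the same artifacts."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (levels 1-4) from two assessors on 8 artifacts.
a = [3, 2, 4, 3, 1, 2, 3, 4]
b = [3, 2, 3, 3, 1, 2, 2, 4]
print(round(cohens_kappa(a, b), 2))  # -> 0.65
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance, which is the kind of limitation the qualitative feedback in Pilot #2 pointed toward.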
Association of American Colleges and Universities. (2009). Valid Assessment of Learning in Undergraduate Education (VALUE). AAC&U. https://www.aacu.org/initiatives/value
Turbow, D. J., & Evener, J. (2016). Norming a VALUE rubric to assess graduate information literacy. Journal of the Medical Library Association, 104(3), 209–214. https://pubmed.ncbi.nlm.nih.gov/27366121/