12 Strengths and Limitations

In seeking to develop a model for institutional learning outcomes assessment, we aspired to reflect the dual goals of:

  1. faculty engagement in ongoing reflection and curriculum enhancement; and
  2. measurement of student achievement of learning outcomes via a broadly scalable and generalizable process, regardless of program discipline.

The first two iterations of SAIL presented in this Handbook successfully achieved the first goal of engaging faculty in ongoing reflection that led to curriculum enhancement. However, we identified several challenges with our chosen approach and advise that SAIL (in its current form) is not broadly scalable across a diversity of disciplines due to its resource-intensive design and use of a standardized rubric.

We discovered that SAIL is an ideal method for immersive faculty learning and development. We saw evidence that the ILO Pods persist as peer support networks, even after the pilots ended.

Below is a summary of the strengths and limitations of the SAIL pilots, as well as a brief discussion on future pilot possibilities.


Strengths

SAIL methodology has several strengths, including opportunities for immersive professional development, collective learning, and meaningful dialogue about student success. Specifically, SAIL:

  • is aligned with the principles for learning outcomes and assessment;
  • provided deep faculty engagement in structured review through educational development;
  • led to actionable outcomes with implications for course and assignment redesign;
  • fostered ILO Pods that persisted as peer support networks even after the pilot ended;
  • based the measurement of student achievement of institutional learning outcomes on multiple courses representative of multiple disciplines; and
  • resulted in faculty knowledge mobilization related to the Scholarship of Teaching and Learning, including a presentation at the 2021 IUPUI Assessment Institute and multiple publications, thus contributing to the field of educational research.

Evidence gathered from the faculty debrief affirms that the SAIL process is aligned with the principles for learning outcomes and assessment. Research suggests that anchoring the process with principles rather than regulatory requirements can reinforce institutional values and lead to a stronger culture of continuous quality improvement (Wall et al., 2021).

Kinzie et al. (2019) called attention to the central role that faculty professional development has played in advancing learning outcomes assessment, and emphasized “the power of constructive, evidence-informed exchange among faculty… to align and improve student learning in general education and the disciplines” (p. 53). The structured, facilitated approach of SAIL provides this kind of deep faculty engagement, fostering evidence-based discussions about how to improve student learning.


Limitations

SAIL methodology has several limitations that stem from its resource-intensive and immersive design, as well as contextual factors such as the disciplinary diversity and range of course levels included in the pilot. Specifically, SAIL:

  • is resource and time intensive, in part due to the current manual approach to gathering artifacts, distributing artifacts, and compiling reports (additional tools for collection and distribution should be explored);
  • can be challenging when assessors review assignments outside their discipline, particularly when the assignment is not completed in English (e.g., in a Japanese language course), is highly technical (e.g., in a computing science course), or when the disciplinary epistemology is not captured in the assignment description;
  • assumes that one course assignment can sufficiently provide opportunities for students to demonstrate achievement of an ILO; and
  • posed challenges for drawing conclusions at the course, program, and institutional level, due to the variability in assessor ratings and the range of course levels included, without additional training and a focus on inter-rater reliability.

Each SAIL pilot demands approximately 35 hours of participating faculty members’ time, as well as the support of educational developers and quality assurance practitioners. Several factors contribute to the resource-intensive nature of SAIL. First, we provide structured opportunities for faculty to collaboratively engage in learning outcomes assessment, and we embed educational development into the process. Second, rubric-based assessment can be labour intensive, particularly when assessors must interpret assignments outside their discipline.

The pilots included a range of course levels from first through fourth year; as such, we would expect to see a range of skills across course assignments within an ILO Pod. To adequately capture this range when evaluating across years, we must look for appropriate progression, not solely student achievement.

Should a third Research Question be introduced to investigate student progression as students develop competency in the ILOs during their degree?

If we seek to investigate student achievement alone, then consideration should be given to modifying SAIL to use capstone or final-year courses as the source of the defining student artifact (as was piloted in the third iteration). Banta et al. (2009) suggest that capstone courses that include a portfolio better reflect student learning over time than the snapshot provided by a single course assignment; however, this approach is also resource intensive.

Read our commentary on “Cautionary Considerations for Producing Aggregate Reports” for suggestions on improving inter-rater reliability and the utility of the course-specific reports.


References

Austin, L., Dishke Hondzel, D., Dumouchel, E., Hoare, A., Hoessler, C., Kondrashov, O., McDonald, B., Noakes, J., & Reid, R. (2021, July 29). SAILing forth! Faculty-led assessment of institutional learning outcomes [Conference presentation]. 2021 Assessment Institute, IUPUI. https://iu.mediaspace.kaltura.com/media/t/1_p4tvbjjv

Banta, T. W., Jones, E. A., & Black, K. E. (2009). Designing effective assessment: Principles and profiles of good practice. Jossey-Bass.

Hoessler, C., Hoare, A., Austin, E., Dhiman, H., Gibson, S., Huscroft, C., McKay, L., McDonald, B., Noakes, J., & Reid, R. (2024). Strategic assessment of institutional learning outcomes: A faculty-led community of practice approach. Journal of Formative Design in Learning, 7, 171–181. https://link.springer.com/article/10.1007/s41686-023-00084-6

Kinzie, J., Landy, K., Sorcinelli, M. D., & Hutchings, P. (2019). Better together: How faculty development and assessment can join forces to improve student learning. Change: The Magazine of Higher Learning, 51(5), 46–54.

Wall, S., Evans, L. M., & Swentzell, P. (2021). Indigenous assessment: Cultural relevancy in assessment of student learning. In J. M. Souza & T. A. Rose (Eds.), Exemplars of assessment in higher education. Association for the Assessment of Learning in Higher Education (AALHE).



Strategic Assessment of Institutional Learning Copyright © by Carolyn Hoessler and Alana Hoare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
