"

2 My QM Journey

My personal progression and insights

Starting Point: Discovering QM

My introduction to Quality Matters (QM) began in 2022, when I was working on OntarioLearn courses. I needed to revamp existing courses, or develop new ones, to meet QM standards, and it was the first time I had heard of QM. I worked with different organizations and institutions across Ontario and noticed that universities and colleges each had their own quality assurance processes. These processes often resulted in very different approaches to course design. Even though there are provincial guidelines for program-level quality, course-level quality assurance is not standardized across institutions. Some course designs were similar, but many placed greater emphasis on certain areas depending on program content and materials.

In Ontario, institutional quality assurance typically focuses on program-level review rather than course-level standardization (Ontario Universities Council on Quality Assurance, n.d.). Each university and college develops its own internal QA processes, such as the IQAP for universities and the CQAAP for colleges, under oversight bodies like the Quality Council or OCQAS. Because course-level design is largely left to institutions and sometimes individual departments, course designs often vary significantly from one institution to another (Ontario College Quality Assurance Service, n.d.). This variation contributed to the inconsistency I observed.

At that time, I relied mainly on Universal Design for Learning (UDL) to ensure key learning elements were addressed, with a strong focus on interactive design (CAST, 2018). However, UDL did not always help me understand how to improve the overall structure of a course beyond the interactive components. This made me curious about how a standardized framework could improve the learner experience.

UDL has always played an important role in my course design. It helps ensure that learning experiences are accessible, flexible, and engaging for diverse learners. Focusing on UDL alone does not mean a course is lacking in quality. However, I realized that UDL does not address all aspects of course design that QM evaluates, such as alignment, measurable objectives, or the overall structure of the learning pathway. This helped me see that UDL and QM are complementary. UDL supports how learners interact with the content, while QM ensures that the course is clearly organized, aligned, and supported by evidence-based design practices.

When I began working with QM Peer Reviewers, I often received Not Met marks, and it was frustrating at first. Gradually, that frustration became curiosity, and I wanted to learn more about the QM rubric. I discovered that it is a system widely recognized for promoting clarity, consistency, and research-based design in online learning.

Learning the QM Rubric

I enrolled in the Applying the QM Rubric (APPQMR) course with no prior knowledge of QM. At first, I wondered why courses were evaluated and scored in such a detailed and structured way. I also questioned why there was no partial credit, since the rubric rates each standard only as Met or Not Met. The 85 percent rule was introduced during the training, and I began to understand how reviewers determine whether a standard is achieved.

The training also introduced me to key concepts such as alignment, Course Level Learning Outcomes (CLOs), and Module Level Learning Outcomes (MLOs). Although I had completed the OISE Adult Learning and Development program and learned about the ADDIE framework, including how to write measurable learning objectives and the basic concept of alignment, truly understanding the relationship between CLOs and MLOs was still new to me. I knew they needed to align, but I was not yet sure how. It took time to understand how these outcomes connect and how their measurability supports the overall course structure.

These ideas helped me see how learning objectives, assessments, and activities work together to support learner success. While I did not have a clear understanding of this concept during the rubric training, it became much clearer later when I began completing the peer reviewer certification. Even so, the rubric itself served as an important turning point in my understanding of quality. It provided a foundation I could rely on, especially once I learned how to read the annotations for each standard. At first, I found the annotations difficult to interpret because they were long, detailed, and filled with many examples, but over time I realized how valuable they are for guiding course design.

Through this experience, I began to understand that quality is not about visual appearance or perfection. It is about intentional design choices that make learning clear, consistent, and accessible for all learners.

Becoming a Peer Reviewer

After completing the rubric training, I continued working on my course designs and aimed to meet the requirements for QM peer reviews. I began asking myself why I could not become a Peer Reviewer as well. I understood how the rubric worked, but I was not yet confident about how to evaluate other courses. This curiosity led me to pursue certification as a QM Peer Reviewer, and eventually I passed the role application process to become officially certified.

Typically, reviewers spend about eight to ten hours evaluating a course. Since it was my first review, I assumed it might take me twelve to fourteen hours. In reality, I spent about fifteen to eighteen hours completing my first review. I was also unsure about identifying Not Met standards, and my recommendations were not as strong or complete as they could have been. Even so, my first review was eye-opening. I saw how much thought goes into designing courses that meet QM standards and how collaborative the review process is. Reviewing other courses helped me understand practical applications of the rubric and gave me new ideas for supporting faculty in my own institution.

External Reviews and Broader Perspective

As I gained experience and became more confident, I began participating in external reviews for other institutions through QM. These institutions were located across North America, including Indiana University Northwest, the University of Southern Indiana, Polk State College, Truckee Meadows Community College, Missouri Baptist University, and the University of the District of Columbia. These reviews broadened my perspective on online learning quality. I observed differences between Canadian and U.S. practices and learned how institutions adapt QM standards to their unique contexts. This experience reinforced the importance of constructive feedback and collaboration in improving course design.

Achieving Master Reviewer Status

After completing several reviews, I received an invitation from QM indicating that I was eligible to pursue Master Reviewer Certification. I was grateful that my institution supported my professional development, which allowed me to complete the certification. Eventually, I became a QM Certified Master Reviewer. This helped me better understand the responsibilities of the Team Chair, a role I had observed while serving as a Peer Reviewer. It also prepared me to lead review teams and mentor other reviewers.

The course focused heavily on coaching and on how to write strong coaching recommendations. Peer Reviewers learn how to find evidence and write recommendations that are measurable, specific, balanced, constructive, and sensitive. The Master Reviewer role required me to apply these skills at a deeper level. This experience strengthened my understanding of quality assurance and improved my ability to guide faculty through the design process. It also gave me insight into how QM supports a culture of continuous improvement rather than perfection.

One unexpected benefit was the improvement in my writing skills. Learning to write recommendations that are measurable, specific, balanced, constructive, and sensitive also improved my everyday communication with colleagues. These skills help me communicate more effectively, avoid misunderstandings, and ensure that my emails and feedback are always supportive and beneficial to others.

Reflections and Implications

My QM journey has shaped my approach to online learning. As I continued to learn how QM standards were applied in real course reviews, one experience had a particularly strong impact on my development. Early on, I worked with a Peer Reviewer whose feedback often resulted in several Not Met standards. At the time, I felt confused because I believed I had addressed many of the required elements. I did not fully understand how the reviewer was applying the 85 percent rule, and I struggled to interpret the recommendations that were provided.

Later, during my certification training, I learned that reviewers may approach the 85 percent threshold differently, and that even when some elements need improvement, a standard may still be considered Met if the essential requirements are largely present. This helped me look back on that early experience with a new perspective. I realized that my difficulty was not about the reviewer, but about my own need to better understand the rubric, the annotations, and the purpose of constructive recommendations.

Before becoming a Peer Reviewer, completing only the QM Rubric course still left gaps in my understanding because I had not yet used the Course Review Management System (CRMS) on the QM site. I had not seen how the standards, annotations, and evidence entry fields appeared in an actual review. I am grateful that my institution supported my professional development and covered the costs for the additional certifications. Completing all three certifications allowed me not only to understand the process and framework more deeply, but also to help others with greater confidence. It also strengthened my instructional design skills, particularly in course design and the review process. I developed a stronger understanding of alignment among Course Level Outcomes, Module Level Outcomes, assessments, instructional materials, and learning activities.

That experience played an important role in shaping my approach as both a course designer and a reviewer. It taught me how essential clear, specific, and balanced recommendations are for helping faculty improve their courses. It also enhanced my commitment to providing feedback that is supportive, measurable, and sensitive to the needs of instructors. In many ways, that challenging moment became a valuable part of my learning, guiding how I communicate and collaborate with others today.