A Rubric for Evaluating E-Learning

As educational developers supporting the incorporation of technology into teaching, we are often asked by instructors for a tailored recommendation of an e-learning tool to use in a particular course. When they use the phrase e-learning tool, instructors are typically asking for a digital technology, mediated through an internet-connected device, that is designed to support student learning. Such requests tend to be accompanied by statements of frustration over the selection process they’ve undertaken. These frustrations often stem from two factors. First, instructors are typically experts in their course’s subject matter, but they are not necessarily versed in the criteria for evaluating e-learning tools. Second, e-learning tools continue to proliferate in both number and variety. Both factors make it increasingly challenging for faculty members to evaluate and select an e-learning tool that aligns with their course design and meaningfully supports their students’ learning experience.

Yet we firmly believe that instructors should be the ultimate decision-makers in selecting the tools that will work for their courses and their learners. We therefore saw an opportunity to develop a framework to assist with the predictive evaluation of e-learning tools: one that instructors without technical expertise could apply across a variety of learning contexts, and that would draw their attention to the salient aspects of evaluating any e-learning tool. To address this need, we created the Rubric for E-Learning Tool Evaluation.

At our institution, Western University, the Rubric for E-Learning Tool Evaluation is currently used in two ways. First, educational developers use the rubric to review the tools and technologies profiled in the eLearning Toolkit, a university online resource intended to help instructors discover and meaningfully integrate technologies into their teaching. Second, we have shared the rubric with instructors and staff so that they can independently review tools of interest to them. These uses are central to our purpose for the rubric: to guide instructors and staff in assessing and selecting e-learning tools through a multidimensional evaluation of functional, technical, and pedagogical aspects.

Foundations of the Framework

In the 1980s, researchers began creating various models for choosing, adopting, and evaluating technology. Some of these models assessed readiness to adopt technology (be it by instructors, students, or institutions)—for example, the technology acceptance model (TAM) or its many variations. Other models aimed to measure technology integration into teaching or the output quality of specific e-learning software and platforms. Still other researchers combined models to support decision-making throughout the process of integrating technology into teaching, from initial curriculum design to the use of e-learning tools.

However, aside from the SECTIONS model,1 existing models fell short in two key areas:

  • They were not typically intended for ad hoc use by instructors.
  • They did not support the critique of specific tools or technologies to inform instructors’ adoption decisions.

To address these gaps, we integrated, reorganized, and presented existing concepts through an instructor-focused lens, creating an evaluative, predictive model that lets instructors and support staff (including instructional designers and courseware developers) assess a technology’s fit with a course’s learning outcomes and classroom contexts.
