“In the absence of a Bretton Woods-type standard, there is no common definition for what an active learner is, or a definition for a learner who has completed the course. The diversity of objectives, needs and content makes such a definition impossible and irrelevant.”
Written by Antoine Amiel, CEO, LearnAssembly
“Digital learning does not work; the completion rate is dismally low.” “Everyone knows that the completion rate of MOOCs is between 5% and 10%.”
We’ve all heard statements like this before. Yet the subject of retention and completion is more complex than it appears at first glance. The thinking behind this negative observation is riddled with biases, beliefs and approximations. Is it time to rethink how we evaluate the success of digital training?
The completion rate is like a patient who has overused cosmetic surgery: depending on the need, it can be treated with Botox, liposuction, a nose job, and so on. “Completion rate” and “active user” are metrics, and like any metric, they are manufactured. These rates are arbitrarily defined by platforms and content publishers. There is no universally recognized standard; no equivalent of IFRS in accounting, for example. In the absence of a Bretton Woods-type standard, there is no common definition for what an active learner is, or a definition for a learner who has completed the course. The diversity of objectives, needs and content makes such a definition impossible and irrelevant. As Joffre Dumazedier and Philippe Carré showed in their analyses of independent learning practices, the completion rate is not relevant for every learner “persona.”
Thus, edX considers an active user to be someone who has made it into the second week of an educational activity within a MOOC. Other MOOC platforms count anyone who has logged in at least once (not the most ambitious metric, admittedly). As for the completion rate, the lines are just as blurred. For some, it is the ratio of people who obtained the certificate to the number of sign-ups; for others, it is the number of people who viewed every video out of the total number of sign-ups. On certain platforms, passing a final quiz is enough to count as having completed the course. The University of Louvain’s study on MOOC completion is of particular interest here: it distinguishes passive retention, the ratio of certified learners to registered learners, from active retention, the ratio of certified learners to active learners.
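These competing definitions can be made concrete with a little arithmetic. The sketch below uses entirely invented cohort numbers to show how the same course produces very different “completion rates” depending on which denominator and numerator a platform picks:

```python
# Illustrative sketch: one hypothetical MOOC cohort, four different
# "completion rates." All figures are invented for the example.
registered = 10_000          # sign-ups
active_week_2 = 2_500        # edX-style active user: still there in week two
watched_all_videos = 900
passed_final_quiz = 1_100
certified = 700

def rate(part: int, whole: int) -> float:
    """Return a percentage, rounded for display."""
    return round(100 * part / whole, 1)

# Passive retention (Louvain-style): certified vs. registered.
passive_retention = rate(certified, registered)          # 7.0

# Active retention: certified vs. active learners only.
active_retention = rate(certified, active_week_2)        # 28.0

# Other definitions in circulation:
video_completion = rate(watched_all_videos, registered)  # 9.0
quiz_completion = rate(passed_final_quiz, registered)    # 11.0

print(passive_retention, active_retention, video_completion, quiz_completion)
```

The same fictional course can thus report anything from 7% to 28% “completion” without changing a single learner’s behavior, only the definition.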
It’s clear that completion rates can be “hacked” when advantageous and there are no measures to stop such practices.
In our view, reaching consensus on an effective metric is a real challenge. Even if that goal were achieved, access to the raw data would still remain difficult, and this lack of access hobbles the credibility of such analyses. Private online course publishers keep their data to themselves: they collaborate with researchers on a confidential basis or rely on their own in-house teams of analysts. The “Coursera Engineering” blog, devoted to the use of data at Coursera, is therefore a gold mine. Such data eclipses the dated articles and university studies, explored to death and far from the state of the art, that continue to be cited endlessly by lazy journalists, as well as by researchers with less than scrupulous standards of diligence.
The final pitfall in exploring this issue is the quality of the data collected by the LMS. At LearnAssembly, we have worked with some twenty different LMSs across our various projects, ranging from MOOC platforms to the large LMS solutions on the market designed to equip bigger companies through corporate social networks or LEPs. The findings are nearly the same everywhere: with the exception of certain MOOC platforms, the data collected captures only averages and does not allow learners’ learning strategies to be studied in detail.
As the risk-management director of a large company hoping to run a digital training course on the Sapin II law once told me on the phone: “I simply need to demonstrate that the employees in my B.U. have fulfilled the e-learning requirements on the subject, which is an obligation in Europe. Everyone has to go through two hours of training on the subject, and that’s all.” The completion rate is too often a distorted metric: it serves either as a public demonstration, in the case of mandatory compliance, or as an academic knowledge-assessment tool. It is far rarer for the focus to be on skills or practical application.
Such uses of the completion rate do not reflect learners’ needs. Pushing people to follow courses to the very end, even by force-feeding content and harassing them by email to make sure they come back, is a perfect way to leave digital learning participants disgusted with the whole process.
Likewise, forcing participants to take academic courses week after week, when most of them cannot clearly define their true learning objectives or navigate an overabundance of content to find the specific parts of a course they can use, is a strategy doomed to failure. Personally, I sometimes sign up for four or five MOOCs at the same time. Unsure of my needs, I cherry-pick resources from each before narrowing down to one as soon as I have refined what I am after. That makes me one of the very people who corrupt MOOC completion rates, even though I am a staunch defender and ardent user. By twisting the intended use of MOOCs to build my own learning strategy, I am not calling into question the MOOCs themselves, but the concept of completion rate as a performance metric. An excellent illustration is the HarvardX study comparing completion rates across courses according to learners’ stated objectives. It is quite illuminating to see the decisive role that motivation plays, and how it can shift one way or the other over the course of the training.
The completion rate’s time has come and gone. We propose replacing the current completion rate with two other metrics already in full force but not always fully recognized:
1) The engagement rate, already widely used in the corporate learning market, abandoning the quixotic quest that is the completion rate. The engagement rate highlights the interactions between the learner and the content. Analyzing those interactions provides insight into learning strategies, learners’ maturity, and where they stand in relation to a given subject. This metric is rich in potential meanings and interpretations, but inherently more uncertain, and it can lean toward the superficial.
2) A real impact study would be the second option, one applied for years in other fields, notably medicine and public policy. In digital learning, an impact study would be a metric combining longer-term follow-up on behaviors observable by an HR manager or another manager, the acquisition of skills, a 360° evaluation, HR data from the annual performance review, and data from the LMS.
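To make the first proposal concrete, here is a minimal sketch of how an engagement rate might be computed from raw LMS event logs. The event names, weights, and threshold below are all invented for illustration; real platforms (xAPI statements, SCORM tracking, proprietary exports) each use their own vocabulary, and choosing the weights is itself an editorial decision, exactly like the definitions of “active user” discussed above:

```python
# Hedged sketch: deriving an engagement rate from hypothetical LMS event logs.
# Event names and weights are invented; contributing actions (a forum post,
# a quiz attempt) are weighted more heavily than passive viewing.
from collections import defaultdict

WEIGHTS = {"video_view": 1, "resource_download": 1, "quiz_attempt": 3, "forum_post": 5}

def engagement_scores(events):
    """Aggregate a weighted interaction score per learner from (id, event) pairs."""
    scores = defaultdict(int)
    for learner_id, event_type in events:
        scores[learner_id] += WEIGHTS.get(event_type, 0)
    return dict(scores)

def engagement_rate(events, enrolled, threshold=5):
    """Share of enrolled learners whose weighted score reaches the threshold.
    The threshold is arbitrary: it is a manufactured choice, like any metric."""
    scores = engagement_scores(events)
    engaged = sum(1 for lid in enrolled if scores.get(lid, 0) >= threshold)
    return round(100 * engaged / len(enrolled), 1)

events = [("a", "video_view"), ("a", "quiz_attempt"), ("a", "forum_post"),
          ("b", "video_view"), ("b", "video_view"),
          ("c", "resource_download")]
print(engagement_rate(events, enrolled=["a", "b", "c", "d"]))  # 25.0
```

The point of the sketch is not the particular numbers but the shape of the metric: unlike a binary “completed / did not complete,” it exposes the texture of learner behavior, while remaining just as dependent on arbitrary definitional choices.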
Some further reading on the subject: