According to a recent piece, Stanford's Lytics Lab is gleaning learning analytics data from MOOCs to understand how people learn. This fascinating line of research offers insight into what turns students off or engages them, how men might learn differently from women, how online forums can support better learning performance, and to what extent mentoring or tutoring other MOOC-takers can help the helper.

The Lytics Lab is a multi-disciplinary collaborative effort that focuses on “use-driven research and data-driven design” to enhance the effectiveness of online learning environments and expand our knowledge of how they function. As an early adopter and leader in online course delivery, Stanford, with its renowned MOOCs, is “where it's at” for conducting this research on a large scale and then applying the results.

Stanford's Learning Analytics group of grad students, professors, researchers, and visiting thought leaders, representing disciplines from education to computer science to sociology, meets weekly in “the data cauldron” to brainstorm. Research projects include development of a dashboard to help MOOC instructors monitor student engagement; a study of peer assessment in MOOCs based on 63,000 peer grades in a single MOOC (wow!); research into which data points gathered during a MOOC can help predict student performance outcomes; and ways to provide automated feedback on computer programming assignments.
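To make one of those projects concrete: peer assessment means that each submission ends up with several grades that must somehow be combined into one score. Here is a minimal sketch of one robust way to do that; the data layout and the median-based scoring are illustrative assumptions, not the actual method used in the Stanford study.

```python
# Illustrative sketch: aggregating peer grades for one assignment.
# The data layout and median-based scoring are assumptions for
# demonstration, not the Stanford study's actual method.
from collections import defaultdict
from statistics import median

# Each tuple: (submission_id, grader_id, score out of 10)
peer_grades = [
    ("s1", "g1", 8), ("s1", "g2", 9), ("s1", "g3", 4),
    ("s2", "g1", 6), ("s2", "g4", 7), ("s2", "g5", 6),
]

scores_by_submission = defaultdict(list)
for submission, _grader, score in peer_grades:
    scores_by_submission[submission].append(score)

# The median is robust to a single outlier grader (e.g. g3 above),
# which matters when graders are untrained peers.
final_scores = {s: median(v) for s, v in scores_by_submission.items()}
print(final_scores)  # {'s1': 8, 's2': 6}
```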

Across Stanford MOOCs, data are collected when students take tests, hand in homework, participate in forums, watch videos, and do peer grading. The more data are collected and the more analytics are applied to them, the more refined and expansive our understanding of online learning dynamics becomes.
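As a hedged illustration of how such data might feed a predictive model, the sketch below turns a few per-student engagement counts into a completion prediction. The features, the toy data, and the logistic-regression choice are invented for demonstration; they are not Stanford's actual pipeline.

```python
# Hypothetical sketch: predicting course completion from simple
# engagement counts. Features and data are invented for illustration;
# real MOOC analytics pipelines are far richer.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-student features: [videos_watched, forum_posts, assignments_submitted]
X = np.array([
    [30, 5, 8],
    [2, 0, 1],
    [25, 0, 7],
    [5, 1, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = completed the course

model = LogisticRegression().fit(X, y)
print(model.predict([[20, 2, 6]]))  # a highly engaged student: likely 1
```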

One of the key reasons for studying MOOCs, of course, is to help make them more effective. Why is the dropout rate in MOOCs so high? What motivates people to take them in the first place? Why are some elements of the current MOOC format popular, while others are much less so?

In one study entitled “Analyzing Learner Subpopulations in MOOCs,” three Stanford doctoral students questioned the questions themselves. For example, they found that learners take or leave MOOCs for different reasons, so referring to “dropouts” generically is misleading. Similarly, some learners take a MOOC for certification or skills acquisition, while others participate for more casual reasons, like intellectual stimulation or enjoyment. So “completion” is likewise not a universally applicable criterion.
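One common way to surface learner subpopulations like these is to cluster students by their engagement trajectories over the weeks of a course. The sketch below illustrates the idea with k-means; the data and the choice of three clusters are assumptions for demonstration, not the study's actual dataset or parameters.

```python
# Hedged sketch: finding learner subpopulations by clustering weekly
# engagement. Data and k=3 are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

# Rows = learners, columns = activity count in each of 4 course weeks.
weekly_activity = np.array([
    [9, 8, 9, 7],   # steady, engaged throughout
    [8, 3, 1, 0],   # starts strong, then disengages
    [2, 0, 0, 0],   # samples the first week only
    [9, 9, 8, 9],
    [7, 2, 0, 0],
    [1, 1, 0, 0],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(weekly_activity)
print(labels)  # learners with similar trajectories share a cluster id
```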

Of course, Stanford's academic community is not the only group looking to apply learning analytics to MOOCs. In this slideshare on “Emerging and Potential Learning Analytics from MOOCs,” presented at the London Knowledge Lab in March 2013, Katy Jordan makes a wide range of observations about which factors may contribute to higher or lower completion rates. But Ms. Jordan stresses that more data are needed and that the questions themselves need to be questioned. Is completion of a MOOC even a valid way to judge achievement or measure learning success?

What do you think are the key questions to ask about the effectiveness of MOOCs, and how should it be measured?

Featured image courtesy of Cikgu Brian.

