In the past decade or so, universities have jumped with both feet into learning analytics. These analytics might be used to identify individual students in need of support, prompt changes to the curriculum, evaluate teaching quality, or some combination of all three.
Some institutions have invested in sophisticated platforms, while others rely on a collection of home-grown solutions. Analytics may still be a work in progress at your institution, or perhaps the platform is reserved for other teams. So, what can you do if you don’t have access to a suite of learning analytics tools?
You might find yourself DIY-ing your own versions: scrambling through spreadsheets and tracking interactions to see whether the changes you’re making to your practice are improving outcomes for students.
Let’s leave the sophisticated platforms to the whole-of-institution level and talk about how you can use small-scale data analysis to improve your own practice and help your students. So, what kinds of situations are we talking about here?
Sometimes you have a specific one-off question. Is it better if I do things this way or that way?
For example, several years ago I was teaching a service unit (a large unit with students from many programmes) with an assessment task that used peer feedback. I wasn’t sure whether I should match students up to get feedback from other students in the same discipline or if they’d get more useful feedback from someone with a different perspective. So, I split the cohort, tried it both ways and looked at their results and feedback from students on the task itself. The whole process of setting it up and looking at the data afterwards took a few hours before I was reasonably confident I knew which approach was best.
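As an illustration of the kind of comparison involved (not the exact analysis described above), a split-cohort check like this can be done in a few lines of Python. The file name and columns here are hypothetical:

```python
import pandas as pd
from scipy import stats

# Hypothetical export: one row per student, with the matching condition
# ("same_discipline" or "cross_discipline") and their mark on the task.
results = pd.read_csv("peer_feedback_results.csv")

# Compare average marks between the two matching approaches.
summary = results.groupby("condition")["mark"].agg(["count", "mean", "std"])
print(summary)

# A simple two-sample t-test gives a rough sense of whether any
# difference is more than noise (small cohorts warrant caution).
same = results.loc[results["condition"] == "same_discipline", "mark"]
cross = results.loc[results["condition"] == "cross_discipline", "mark"]
t_stat, p_value = stats.ttest_ind(same, cross, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```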
In other situations, you want to keep on track over time. For example, when I was overseeing a self-paced graduate programme with lots of electives for which we ran weekly online tutorials, we wanted to track the tutorials week to week to help us keep them useful to the students and improve them over time.
So, we tracked a range of measures – attendance, student questions, post-session feedback, feedback from guest speakers on student engagement during the session, etc. Every so often we’d have a standout session or one that didn’t work, and the data helped us work out why and show that over time students were engaging more with these sessions even though they were an optional part of the programme.
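As a sketch of what tracking over time can look like in practice, here is a short pass over a hypothetical weekly tracking sheet (the column names are invented for illustration). A rolling average smooths week-to-week noise so a genuine trend, or a session that underperformed it, stands out:

```python
import pandas as pd

# Hypothetical weekly tracking sheet: one row per session.
# Columns assumed: week, attendance, questions_asked, feedback_score.
sessions = pd.read_csv("tutorial_tracking.csv")

# A four-week rolling average smooths week-to-week noise.
sessions["attendance_trend"] = (
    sessions["attendance"].rolling(window=4, min_periods=1).mean()
)

# Flag sessions that sit well below their local trend for follow-up.
outliers = sessions[sessions["attendance"] < 0.7 * sessions["attendance_trend"]]
print(outliers[["week", "attendance", "attendance_trend"]])
```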
Below, I’ve included some useful things to think about when starting your own DIY learning analytics project:
1. Start with a problem or question
“If I change this, will it help students?” Questions like this are the gateway drug for analytics. You may have one in mind already or perhaps you have a sense of: “This part of my programme isn’t working as well as I’d like” or even: “This part is working really well, but I don’t know why so I can’t replicate that magic.”
2. Go exploring
I think of this stage as spelunking for useful data sources. Trawl through the options menus or administrative settings of whatever technology you might be using to figure out what data you can get out of it. What data can you export from your learning management system? Does your lecture-capture software track views and interactions? Do your students scan their ID cards to get into the building where your class is? Before you home in on exactly what you want to measure, go exploring so you don’t miss useful information.
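If an export lands as a CSV, the first look can be as simple as the sketch below (the file name and fields are hypothetical): list what columns exist, eyeball a few rows and check which fields are mostly empty and therefore unusable.

```python
import pandas as pd

# First pass over a hypothetical LMS activity export: before deciding
# what to measure, see what the system actually gives you.
log = pd.read_csv("lms_activity_export.csv")

print(log.columns.tolist())   # what fields exist?
print(log.head())             # what does a row look like?
print(log.isna().mean())      # which fields are mostly empty?
```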
3. Track something tangible
Student engagement is an elusive thing that we can’t measure directly. We can see whether a student has logged in to the class webpage, but we can’t tell whether they’re excited by it or bored. Even students who are in the room with us can be icebergs – the flat expression on their face today might have more to do with something in their personal life than with you. You can, though, track whether they ask questions or revisit that section of a video again and again.
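For instance, if your lecture-capture system exports per-view events, counting revisits is a short exercise. The log format below is an assumption for illustration, not any particular product’s export:

```python
import pandas as pd

# Hypothetical lecture-capture log: one row per viewing event,
# with columns assumed to be student_id, video_id, timestamp.
views = pd.read_csv("video_views.csv")

# "Revisits" is a tangible proxy: how many times each student
# returned to the same video beyond their first view.
view_counts = views.groupby(["student_id", "video_id"]).size().rename("views")
revisits = view_counts[view_counts > 1]
print(revisits.sort_values(ascending=False).head(10))
```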
4. Use a consistent process to gather your data and avoid manual steps
If you’re exporting data from a system and the timing of the export matters, create a routine so you always export at the same point each week. If you’re recording attendance, record it directly into your spreadsheet rather than writing it down by hand and transferring it later.
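As a small illustration of cutting out the manual step, here is a sketch that appends attendance straight to a CSV tracking file during class (the file name and ID format are made up):

```python
import csv
from datetime import date

def record_attendance(filename: str, student_ids: list[str]) -> None:
    """Append today's attendance straight to the tracking file,
    so there's no handwritten list to transcribe later."""
    with open(filename, "a", newline="") as f:
        writer = csv.writer(f)
        for student_id in student_ids:
            writer.writerow([date.today().isoformat(), student_id, "present"])

# Example: tick students off directly as they arrive.
record_attendance("attendance.csv", ["s1001", "s1002", "s1005"])
```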
5. Make friends with the systems team
The specific team will depend on how your institution is structured. Maybe the IT department looks after all your learning technologies; maybe there’s a centralised teaching and learning group that oversees this; maybe it’s the student admin team. Reach out to the people who look after the systems you’re using to see what kinds of data are available. They may be able to introduce you to features you didn’t know about, pull a custom export or even connect you with other people who are asking the same questions you are.
6. Avoid the temptation to turn it into a full research study
Sometimes this is valuable and useful, but the work involved in doing this can be a barrier and it’s better to do something small than stop because you can’t do something big. A small tracking exercise you start now might even lead to a much better research study down the track.
Finally, you don’t have to be the lecturer. Maybe you’re a tutor, lab technician, educational designer, learning technologist or administrative assistant whose work affects students. Perhaps you need to demonstrate your practice as part of your own career progression, or maybe you want to know whether one of your ideas worked. These same tricks apply even if you aren’t the academic in charge of a subject or programme.
Jennifer Lawrence is programme director for digital education at the University of New England.