We shouldn’t confuse online engagement with logging in

23 July, 2021
When assessing the all-important ‘engagement’ metric, the sector often defaults to the crude measurement of attendance, which is clearly flawed, says Chris Headleand

In physical spaces, educators can often pick up on how engaged their students are in their learning. This quality of “student motivation” is intuitively simple to recognise but challenging to measure or formally report.

In the classroom, we can look into the faces of the people we’re teaching and assess how closely they’re paying attention. Targeting questions or engaging individuals in dialogue can further help to gauge the attentiveness of your learners. Of course, doing this requires being able to see the people you’re teaching.

As has been widely discussed, many students learning online will not want to turn their cameras on, for a range of justifiable reasons. It would also be a mistake to assume that all students own a webcam.

But let us assume for the moment that every student owned a camera and was willing to turn it on; would this actually help? The simple answer is: unlikely.

First, with a large class, most videoconferencing software will show only a subset of learners on screen at any one time. These “visible learners” will also typically be the students who are already interacting proactively, as they are prioritised on the visual interface. Second, being able to see someone’s face in an online space doesn’t inherently mean that they are learning effectively. Put simply, a webcam’s viewpoint doesn’t carry the same bandwidth of information that we receive in face-to-face interactions. Slow internet speeds, pixelation or poor-quality cameras can all make a face harder to read.

At the extreme end of the spectrum, I recently demonstrated to colleagues how easy it is to loop a webcam feed, replacing my live video with a pre-recorded loop of my face without anyone noticing. While this may sound overly Mission: Impossible, it takes only two minutes to set up and works with almost all videoconferencing software. I’m not suggesting that anyone would actually bother doing this; it’s simply an extreme example to highlight that webcams are no panacea when it comes to monitoring attention.
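
For the technically curious, here is roughly what that looks like: a minimal Python sketch, assuming the open-source pyvirtualcam and OpenCV packages and a hypothetical pre-recorded clip named face_loop.mp4 (the article doesn’t prescribe any particular tooling). It replays the clip through a virtual camera that most videoconferencing software treats as ordinary hardware.

```python
# Minimal sketch of the webcam-loop trick described above. Assumptions:
# pyvirtualcam + OpenCV installed (pip install pyvirtualcam opencv-python)
# and a virtual-camera backend available (e.g. OBS Virtual Camera on
# Windows/macOS, v4l2loopback on Linux). "face_loop.mp4" is hypothetical.
import cv2
import pyvirtualcam

clip = cv2.VideoCapture("face_loop.mp4")
width = int(clip.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(clip.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = clip.get(cv2.CAP_PROP_FPS) or 30  # fall back if the file reports 0

with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
    print(f"Streaming looped clip to virtual camera: {cam.device}")
    while True:  # run until interrupted
        ok, frame = clip.read()
        if not ok:
            # End of clip: rewind to the first frame and keep looping
            clip.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        # OpenCV decodes frames as BGR; pyvirtualcam expects RGB
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()  # pace output at the clip's frame rate
```

Select the resulting virtual camera in Zoom, Teams or similar, and the looped clip stands in for a live feed.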

Another problem is how engagement is typically measured in education. This is challenging even in physical spaces. There isn’t a simple quantitative measure of a room’s mood (although people are investigating using emotion recognition to do precisely that). As such, the sector and the related literature often default to the crude measurement of attendance, which is clearly flawed. The simple fact that someone has attended does not inherently indicate that they have engaged.
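
To make that parenthetical concrete, here is a toy sketch of what such emotion-recognition “room mood” estimation might look like, using the open-source DeepFace library (my choice of tool, not one the article names) over a set of hypothetical webcam snapshots.

```python
# Toy sketch of emotion-recognition "room mood" estimation. Assumptions:
# the DeepFace library (pip install deepface); the snapshot filenames
# below are hypothetical stand-ins for frames grabbed from a video call.
from collections import Counter

from deepface import DeepFace

snapshots = ["student_01.png", "student_02.png", "student_03.png"]

mood = Counter()
for path in snapshots:
    # enforce_detection=False avoids an exception when no face is found;
    # recent DeepFace versions return a list of per-face result dicts
    for face in DeepFace.analyze(img_path=path, actions=["emotion"],
                                 enforce_detection=False):
        mood[face["dominant_emotion"]] += 1

print("Crude room mood:", mood.most_common())
```

Even this illustrates the point: a tally of “happy” or “neutral” faces says nothing about whether anyone is actually learning.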

Consider two students: Sam and Chris. Chris attends the lecture, signs the register, walks to the back of the room and has a nap, sleeping through all the content. Sam is unable to attend the class but spends the day at home, reading the material in detail, then watching the lecture recording and making comprehensive notes. If we use attendance as our measure of commitment, motivation or participation, Chris is recorded as engaged; Sam is not.

We often see this across the literature, with improvements in attendance used to measure the success of an engagement intervention. But this is like trying to calculate your productivity by counting the number of meetings in your diary. The two things may be related, but they aren’t inherently causal.

Online, this becomes even more challenging. For a start, the internet is built to be available on demand. The behaviours we develop around websites, social media and online interaction typically assume asynchronicity.

Imagine if a web store were open only from 9am until 5pm, as a physical shop is. Social media, especially, has been built around the idea that everyone engages at different times of the day. Concepts such as timelines and notifications would be meaningless if we all agreed to meet on Twitter at the same time each evening.

One of the great things about on-campus teaching is that we have some control over the environment, so (we hope) everyone has a similar experience. Suppose somebody is loudly mowing a lawn outside your seminar: at least you know it is (theoretically) affecting all your students equally. This is not true of the virtual classroom, where everyone’s home environment can uniquely affect their synchronous attendance.

I’m not suggesting that attendance measures aren’t valuable. In the physical classroom, attendance monitoring can be helpful. A drop in attendance can often be an early sign that someone needs support. Combined with the intuitive ability to observe motivation and attention in a class, it was often an adequate measure of student commitment. But moving online has highlighted the severe limitations of this rather blunt tool. As it looks likely that the sector will embrace more blended learning moving forward (a good thing), we will need to reckon with this challenge of measuring engagement.

As Linda Kaye noted in her excellent article, “focusing on specific online behaviours and using analytics to measure them only really relates to behavioural engagement and may in fact signal students’ engagement only with the technology itself”. Other tools such as formative assessment, tutor engagement and participation in asynchronous activities could better indicate students’ commitment to their learning.

Understanding our students is an essential tool in driving improvement, and arguably few things are as intrinsically disruptive to education as metrics. The measures we use will inherently drive strategy across the sector as they always have. 

As such, deciding what we want to measure and how we want to measure it will help to shape the vision of the evolving sector. But we must be careful about assuming parity between how we consider online and physical spaces, as user behaviours and the barriers to access are inherently different. Let’s make sure we don’t confuse engagement with logging in.

Chris Headleand is director of teaching and learning for the School of Computer Science at the University of Lincoln.

Comment

basdenleco1, 2 years 8 months ago:
Fascinating article. Reminiscent of last year, when I undertook emergency remote teaching with my electrical trade apprentices: by interjecting Kahoot quizzes, I quickly found out who was actually engaged.