The answer to the question posed in the headline is: no one knows precisely. Imagine an industry that had invested more than $1 billion in strategic initiatives but had never conducted any coordinated or comprehensive evaluation of their overall effectiveness or impact. This is exactly what has occurred in Australia's higher education equity field, which, since 2010, has seen an overall investment of nearly A$1.5 billion (£792 million) from the federal government to improve the participation rates of equity groups in universities.
To address this issue, the Department of Education, Skills and Employment commissioned the creation of the Student Equity in Higher Education Evaluation Framework (SEHEEF), which will be rolled out across the higher education sector this year. Creating this framework is a much-needed first step in the evaluation of equity programmes and associated expenditure across Australia.
However, for it to be successful, implementation of this framework needs careful consideration. We have spent the past five-plus years creating a universal framework for equity programme evaluation at the University of Wollongong, including designing and implementing staff training, developing an online action-planning tool, and supporting the planning and execution of equity programme evaluation. On the basis of what we've learned, we offer the following five messages for policymakers, university equity staff and academics involved in the implementation of the SEHEEF.
Message #1: Fear factor
Never underestimate the apprehension that the word “evaluation” evokes in people. In interviews with programme developers who were tasked with (or involved in) evaluation, our team was repeatedly struck by the “imposter syndrome” that staff described when it came to conducting evaluation.
Building the capacity and confidence of those implementing evaluation must be part of any evaluation strategy, and it needs to build on what staff know best. Framing can help here: staff may intuitively know the impact their programmes are having, but they need the tools and capacity to demonstrate this.
To ensure staff feel empowered and confident to design and implement evaluation, bespoke and targeted training for equity staff must be prioritised. Staff should also be reassured that evaluation results will be used to help them show evidence of the good work they do and identify areas for growth, rather than for accountability. This will help allay fears that could lead programmes to narrow their focus and simply “teach to the outcomes” (for example, admit more students from equity groups to show an increase in admissions among students from equity groups).
Message #2: Create community
If it takes a village to raise a child, it takes a community of equity practitioners to create a coherent, whole-student, whole-journey, national perspective on what is (and is not) working in the current suite of higher education equity offerings. National and institutional leadership is key, but creating a community of practice between institutions is equally necessary, one that learns from the “voices” of equity group members, so that lessons and effective practices are shared.
By sharing practice across institutions, equity practitioners can build towards further ideas and innovations while harnessing support and professional relationships across institutions for a shared purpose and alignment under the forthcoming SEHEEF.
Message #3: Evaluation for equity
Evaluation should not focus solely on whether and to what extent a programme worked upon its completion. While that gives us important insights about what does and does not work, it fails to help students as they engage in that programme. Evaluation should also have a formative function – identifying the opportunities for improvement while there is still time to address them.
To achieve this, evaluation should occur at regular points in any programme, with rapid feedback that gives time for developing possible solutions and/or actions arising from these data. However, do not underestimate the time, resources and intentionality needed to achieve this – doing “good” formative evaluation can be challenging, requiring meaningful integration into a programme and alignment with the end-of-programme summative evaluation.
Message #4: Expanding the evidence
We know that increasing numbers of equity students are entering higher education, but is this enough to say the funding has been worth it? Or that the funding has been used to best effect? Our team argues that there is a need to measure how effective engagement in equity programmes is for the individual student. Just as we would want to know that any educational programme achieves better outcomes than non-participation, we need to be able to use that same benchmark to support ongoing funding.
This requires answers to questions such as: what are the outcomes for students who can and do engage in equity programmes in higher education, compared with similar students who can’t or don’t? What are the optimal targets, sequences and timing of programmes? Is there a cumulative impact of engaging in more than one programme? To simply say that more students are entering higher education, and graduating at a rate of X per cent, is limiting. If we can do even better, we must. If there are other areas in which we are not doing as well, we need to know.
Message #5: Local to global to local
For a national evaluation to work, there needs to be a common core of outcomes that are assessed by all programmes and institutions – things such as admissions, grades and graduations. At the same time, equity staff and stakeholders need to have local control over their evaluation process, as they will have specific institutional/regional concerns and considerations that programmes need to address.
We argue that, to be effective, equity evaluation needs to be derived from the bottom up, but equally should align with a top-down framework that ensures articulation between the evaluations. To support this fluency and flexibility between the local and the global, we advocate a shared model of evaluation that is informed not only by funders (for example, government departments) but also by those who know programmes and community needs best: those who deliver the programmes, their students and their communities.
Steven Howard is a researcher at the University of Wollongong (UoW), Australia, with particular expertise in programme evaluation. He co-led the creation of UoW’s Equity Evaluation Framework and Evaluation Action Plan.
Sarah O’Shea is director of the National Centre for Student Equity in Higher Education. She has also evaluated numerous equity programmes and, alongside Steven, led the design and implementation of the UoW framework.
Kylie Lipscombe has more than 20 years’ experience teaching and researching in the areas of school-university partnerships, evaluation and educational leadership. Currently employed at UoW as an educational researcher, Kylie has contributed to the design and delivery of national, state and institution evaluation frameworks.
Kellie Buckley-Walker is an educational researcher at UoW with more than 20 years’ experience in teaching and researching in schools, with a particular focus on assessment and evaluation. She has contributed to the design and implementation of the UoW framework.