
Starting student success initiatives on the right foot

As universities prepare to welcome students back on campus, John McMillian sets out the principles of how to identify the data you need to measure the impact of student success initiatives

John McMillian is a Senior Director at EAB.

As the dust settles from Clearing and universities prepare to welcome the incoming cohort of students, most will be considering how to maximise those students’ chances of future success from their very first day.

Much of induction into higher education is rightly focused on preparation for academic study, forging connections between students, and between students and university staff, and encouraging students to get involved and find their feet in university life. What’s sometimes missing is a systematic plan to capture the relevant data that would enable universities to assess whether interventions in those students’ higher education journey – not least induction itself – are having any measurable effect on their future success.

“Measuring student success” is now a top query on EAB’s website. Student success leaders want to link measurable retention gains and grade improvements to specific interventions and initiatives. They want to know what works and what doesn’t, so they can refine and improve over time. Positive results also help justify departments’ work and make the case for additional resources. But amid the flurry of student success activity taking place on campus, it can be difficult to isolate and define the impact of each individual initiative.

The key is designing student success interventions strategically, so that it is possible to attribute certain outcomes to them. What follows takes our work with universities in the United States as an example, but we’re finding that the principles translate readily to the UK, even if the way students accumulate credit and progress towards a degree is different.

Identify a student population you want to focus on

While some student success initiatives reach the entire student population, the first step towards better measurement is to identify a specific student population to focus on. Narrowing the scope of your efforts makes it more likely that your intervention will be meaningful, and makes its impact easier to isolate. The definition of the targeted student population should relate to the success outcome you are trying to measure – to give an obvious example, achievement probably correlates with progression. The student population should also be facing a specific challenge: for example, students who are not submitting assessed work, whose grades are persistently just below the threshold for progression in their first term, or who are not attending meetings with personal tutors.
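
In practice, a focus population like this can usually be pulled straight from existing records. Below is a minimal sketch in Python of that kind of selection, assuming a hypothetical export of first-term records; the file name, column names and thresholds are illustrative, not a real system’s schema:

```python
import pandas as pd

# Hypothetical export of first-term student records; the file name,
# column names and thresholds below are illustrative assumptions.
students = pd.read_csv("first_term_records.csv")

PROGRESSION_THRESHOLD = 40  # assumed pass mark, for illustration only

# Focus population: students missing assessed work, sitting just below the
# progression threshold, or not attending personal tutor meetings.
focus = students[
    (students["assessments_submitted"] < students["assessments_due"])
    | (students["average_grade"].between(PROGRESSION_THRESHOLD - 5,
                                         PROGRESSION_THRESHOLD - 0.1))
    | (students["tutor_meetings_attended"] == 0)
]

print(f"{len(focus)} of {len(students)} students fall in the focus population")
```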

East Tennessee State University (ETSU) approaches this by asking each college to select a focus population for targeted campaigns. Each college is asked, “Which students do you want to reach? Where do you want to see an impact?” For example, the College of Business and Technology decided to focus on second-year students in good academic standing who had not yet declared a major. They sent an email campaign asking these students to meet with their advisor to develop a graduation plan. After the campaign was sent, the college saw a 10.5 per cent persistence improvement for second- to third-year students.

Develop a theory of change

To develop a theory of change, you need to map backwards from your long-term goals, determine the necessary pre-conditions for that result, and describe how the intervention you have designed would produce that change. Without a theory of change, positive (or negative) outcomes can easily be misattributed to your intervention.

ETSU’s leadership ensures that each department articulates this theory of change by having it submit a statement of purpose with its proposed campaign. The College of Business and Technology realised that sophomores who remained undeclared into their third year wouldn’t be able to select the upper-level courses they would need to complete in order to graduate, so the theory of change posited that encouraging those students to meet with advisors to put a plan in place would give them the information and support they needed to graduate on time.

Assign process and outcome metrics

Based on the theory of change you’ve articulated for your student population, you can determine what intermediate process metrics and ultimate outcomes you’ll be measuring.

A process metric is one that you can measure in real time, or at least during the term, and that is aligned with an intermediate outcome of your intervention strategy. Process metrics help you determine whether your intervention is contributing to your overall student success goals, which you measure with outcome metrics.

Because outcome metrics tend to update only at the end of the term, year or student cycle, interim checks on your process metrics allow you to course-correct. Examples of process metrics are course attendance, submission of work, concern flags raised by lecturers, or attendance at academic support or personal tutor appointments. Examples of outcome metrics are end-of-year academic performance, student retention and progression. Make sure your process metric aligns with your theory of change and can be gathered in time to adjust plans and reinforce the intervention before the event that produces the outcome metric (eg a final exam).
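
As a rough illustration of an interim check, the sketch below flags students whose attendance (a process metric) has dipped mid-term, so the intervention can be reinforced before the outcome metric is determined. The file name, columns and 75 per cent threshold are assumptions for the example:

```python
import pandas as pd

# Hypothetical weekly attendance log for the focus population;
# columns: student_id, week, attended (0/1). Names are assumed.
attendance = pd.read_csv("weekly_attendance.csv")

# Process metric: proportion of sessions attended so far this term.
attendance_rate = attendance.groupby("student_id")["attended"].mean()

# Interim check: flag students who have dipped below 75 per cent attendance
# so the intervention can be reinforced before end-of-term assessments.
at_risk = attendance_rate[attendance_rate < 0.75].sort_values()
print(at_risk.head(10))
```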

Assess results and iterate

The ultimate goal of impact assessment is to determine whether your intervention benefited your students. For a meaningful analysis it helps to compare the results tracked for the focus population against one of the following (a simple numerical sketch follows the list):

  • A control group of comparable students that is not receiving the intervention
  • A past group of students who meet the same criteria
  • Earlier results for the same population before and after the intervention
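
To illustrate the first option, the sketch below compares retention rates between an intervention group and a comparable control group and runs a two-proportion test; the counts are invented for the example, and a past cohort or pre/post comparison could be handled the same way:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts, for illustration only: students retained out of each group.
retained = [412, 388]        # [intervention group, comparison group]
group_sizes = [500, 500]

rate_diff = retained[0] / group_sizes[0] - retained[1] / group_sizes[1]
stat, p_value = proportions_ztest(retained, group_sizes)

print(f"Retention difference: {rate_diff:+.1%} (p = {p_value:.3f})")
```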

That comparison data allows you to judge the success of the intervention and determine whether the benefit gained was worth the investment of resource. Though measuring the impact of interventions is arduous, it is far more effective in the long run than simply increasing the volume and scale of student support, some of which may not be having the effect originally predicted.

This article is published in association with EAB. EAB partners with more than 1500 education providers in the United States, Canada, the UK and Ireland to address critical challenges, accelerate progress, and drive results in enrolment, student success and institutional strategy. Find out more about EAB’s work in the UK.

2 responses to “Starting student success initiatives on the right foot”

  1. Insightful article – in more targeted interventions, peer-assisted learning for example, how do you solve for self-selection and/or non-participation bias?

  2. Nathan raises an excellent question.

    Initiative efficacy measurement opens a new can of worms in higher education. Historically, cohort-to-cohort comparisons were deemed adequate to determine initiative efficacy; however, I cannot see how those practices should continue, given all the dynamics in play among post-secondary institutions.

    Directly comparing two cohorts with assumptions of homogeneity is poor design from a research perspective, and potentially harmful from a practitioner perspective.

    It seems to me that the data should be used to formulate proper comparison pools, and that student characteristics predictive of the outcome should, at a minimum, feed a propensity score analysis for adequate matching. Where possible, student-adjusted risk scoring should also be used, so that a multi-dimensional matching approach ensures students are compared who are both academically risk-adjusted (e.g., some kind of success prediction) and adjusted for self-selection and non-participation bias (i.e., using a propensity score with appropriate IVs in the model).
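
    To make the matching idea concrete, here is a minimal propensity-score sketch in Python; the column names, the logistic model and the one-to-one nearest-neighbour matching are all illustrative assumptions rather than a prescription:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical dataset: one row per student, with pre-intervention
# characteristics, a participation flag and an outcome; all names are assumed.
df = pd.read_csv("students.csv")
covariates = ["entry_points", "first_term_average", "attendance_rate"]

# 1. Estimate propensity scores: probability of participating given covariates.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["participated"])
df["propensity"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each participant to the non-participant with the closest score.
treated = df[df["participated"] == 1]
control = df[df["participated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["propensity"]])
_, idx = nn.kneighbors(treated[["propensity"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare outcomes across the matched groups.
effect = treated["retained"].mean() - matched_control["retained"].mean()
print(f"Estimated retention effect after matching: {effect:+.1%}")
```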

    Cohort-to-cohort comparison of outcomes is insufficient to inform policy and funding; although it may serve as a barometer for trend analyses, it would not be ethically wise to rely on it and assume causality.

    Among higher education institutions, we should determine what minimum criteria for matching will be used to assume high-confidence correlation and/or to infer causation and efficacy.

    How do we control for incoming student characteristics? (How the funnel is shaped in the enrolment and admissions process becomes important here.)

    How are we taking time into account as a key component of the student’s experience across a term?

    How are we controlling for other factors, such as exposure to multiple interventions, or life circumstances facing the student that are not directly represented in the data?

    These are just a few of the key questions that are extremely difficult to address in a simple analysis and are not controlled for in a cohort-to-cohort comparative analysis.

    The work is worthwhile and important. So are the assumptions made when handling the data that precedes such initiative efficacy work.