An apple a day: is ‘dosage’ a useful concept in understanding the impact of youth work?
2022-09-13
For this month’s Our Thoughts, our CEO Bethia McNeil considers the concept of 'dosage', a core component within the 'What Works' approach, as a means to better understand the impact of youth work.
I’ve spent a lot of time recently thinking about ‘dosage’ in youth work and provision for young people. Dosage – a measure of ‘exposure’, or the amount of something you take and the frequency with which you take it – is a core component within ‘What Works’ thinking. The ‘What Works’ approach has many similarities to medicine (including the models for evaluating the impact of medical treatments), and we will all be familiar with the idea of dosage when we are prescribed antibiotics, for example, or referred for a course of physio. The dosage of these medical interventions represents the amount, frequency and intensity of something that evidence suggests we need in order for it to have the intended impact for us as individuals – curing our throat infection, for example, or improving our mobility post-surgery. Quality matters too, of course. Dosage is defined based on exposure to high-quality, ‘active’ ingredients, whether this is a skilled physio or the right strength of penicillin.
All evidence-based programmes have a defined dosage: how much, how often and with what intensity is a ‘beneficiary’ intended to engage with or experience a project or service? As in medicine, dosage in social programmes is closely related to hypotheses about impact. Too little and it’s probably insufficient to create lasting positive change (and indeed may even do harm); too much might have adverse or unintended consequences. A good example of where dosage really matters is mentoring. Multiple mixed-method evaluations of mentoring suggest that a ‘low dosage’ is not enough to create a measurable difference to outcomes for young people, and indeed may be actively unhelpful. At the same time, a dosage that is too high may undermine the benefit, or create poorer outcomes. Thinking about this in terms of relationships makes sense: a low dosage means there’s a risk that the relationship between the mentor and mentee hasn’t been established, and that trust has not had time to be developed and demonstrated. Exploring deeply personal or potentially traumatic experiences in this context risks de-stabilising or even re-traumatising a young person: a higher ‘dosage’ of mentoring is needed to get to the place where this feels safe, and potentially transformational. At the same time, a mentoring relationship that doesn’t have a clear and supportive ending may create dependency, and undermine the development of agency and self-belief.
And even beyond mentoring, the concept of dosage can make good sense. Thinking about provision as high, medium or low dosage can help us to envisage important elements of an ‘offer’ for young people. It can also help in clarifying the intention behind provision. Is it intended to be light touch, or a sustained, regular and deep commitment in a young person’s life? It can help us create a shared language that differentiates between one-off or short-term activities (like summer events or running an assembly in a school setting) and consistent, sustained activities – and the relationships that emerge from them (like support groups, weekly clubs or mentoring).
But the potential misalignment between the ‘What Works’ approach and informal/non-formal youth provision is well documented. This includes how we define and understand dosage. Dosage is relatively easy to measure or monitor in structured, programmatic provision. Did Young Person A attend the weekly sessions each week for the full 12 weeks? Did they stay for the full hour? Did they participate in the intended activity each session? But what if provision is not programmatic, but ‘rolling’ or drop-in? What if provision doesn’t have a fixed start and end time, and young people are able to participate freely, whether for 15 minutes or two hours? And even more challenging, what if the ‘offer’ to young people includes a range of activities, and young people can move freely between them – gym, PlayStation, cookery, and one-to-one support from a youth worker? This makes dosage challenging both from a measurement perspective – how can we understand which young people were ‘exposed’ to what, for how long and with what frequency? – and from a design perspective – what’s the ‘intended dose’ for each of these activities, and why? In ‘demonstrating impact’, how do we understand and explain the difference between the young person who attends once and the young person who attends every week, for months at a time? The young person who attends twice weekly but mainly to play PlayStation with mates, and the young person who attends every month, but always for a one-to-one conversation with a youth worker? And how do we (or even should we) ‘diagnose’ which young person would benefit from which dosage, and of what provision?
And then there’s the challenge of quality. The correct ‘dose’ of penicillin is much easier to define and monitor than the correct ‘dose’ of youth work. The youth sector has adopted multiple proxies for quality, including levels of training and compliance with policies and legal frameworks (like safeguarding or the UNCRC). Sometimes, cruder measures of ‘outputs’ (the number of young people participating in particular sessions) are used as an indicator of quality. However, none of these things have been tested or evaluated in terms of their relationship with impact. Dosage – as a measure of hours or frequency – is potentially meaningless without a clear, shared understanding of quality, as it relates to the experience of young people. Add to this notions of equity – that young people with the most to gain should gain the most – and we find ourselves in even more undefined territory.
Finally, there is the issue of analysis. Dosage in medicine is determined in large part by the response of the ‘average person’ – or the average of all the people treated. This is often defined by things like age and weight. But is there an ‘average young person’ in youth work? Is this more determined by social factors than by age and weight? And aren’t we all learning that ‘averages’ are too often blind to issues of race, ethnicity and disability? Averages are one way to understand impact when most things are constant (like taking a seven-day course of penicillin), but what if all the variables are different? Does it make any sense to focus on dosage at all?
It’s perhaps no surprise that all these questions remain very ‘live’ – the new Youth Investment Fund, with its strong bias towards buildings, is an excellent example of why they continue to matter. These questions are complex and nuanced, and in many cases have no one right answer. But neither does it make sense to talk of the ‘impact of youth work’ through one narrative frame. The ‘input’ differs widely – in terms of quality, frequency, intensity, duration and activity – and so, of course, does the outcome. Outcomes emerge in response to all of these elements, experienced through the reality of one young person.
But… I still find the concept of dosage compelling. I feel instinctively that it offers us a collective concept for understanding both intention and design in relation to provision for young people, and a frame for how individual young people engage in this provision – we all know that this is often not ‘as intended’! It also encourages us to think hard about quality, instead of taking for granted that our offer to young people is consistently as good as we’d like it to be. And, because I’m fascinated by measurement, I continue to think hard about how we might record dosage in the most human and relational way possible. Watch this space – I fully intend to progress these ideas over the course of this year, in the hope that I will soon have more than just a set of open questions.