During my first year as commissioner of education in Tennessee, we had a meeting to review the state Teacher of the Year process. The scoring rubric had ten criteria. One of the criteria was “advancing student achievement.” I asked, “How much does student achievement count in the scoring?” The answer: 10%, same as every other factor.
We changed this, because of course we did. If we evaluate a teacher by saying that advancing student achievement is equal to nine other things, we have lost the plot. School starts and ends with student learning. It’s not that other things (engaging in the school community, communicating with parents, etc.) don’t matter. But we should never pretend that student learning is just one small sliver of the work.
Across the conversations at this year’s Accelerate convening, and across three years of research from Accelerate’s Call to Effective Action, one lesson keeps surfacing in our high-dosage tutoring work: dosage matters. It matters more than any other factor in tutoring.
That may sound obvious. It may even sound boring. But it is the central implementation challenge for tutoring, and for many other education interventions.
The research is remarkably consistent. When programs successfully deliver tutoring, students tend to learn more. When they don’t, the results aren’t there.
If you are a district that intends to implement, manage, and monitor tutoring dosage, it’s a remarkable investment, with a stronger return than any other highly researched intervention. If you can’t or won’t, save your money.
Across the most recent Accelerate CEA cohort, 5 of 7 tutoring programs evaluated with rigorous research designs produced statistically significant improvements in student learning – often a lot of learning (1.5 to 15 additional months). In the programs that did not show an effect, low dosage was the most likely explanation.
The previous CEA cohort showed the same pattern. Programs that produced measurable gains generally delivered roughly an hour or more of tutoring per week. Programs that fell short on dosage rarely produced meaningful impacts. Even before that, the first CEA synthesis highlighted the same problem.
We spend a lot of time talking about tutor-student ratio, who the tutor is, the training involved, the materials, the tutor-teacher pipeline. These are all important in the realm of “good to great.” But in the end, the difference between tutoring that works and tutoring that doesn’t is often whether students actually receive enough of it.
Driving towards dosage
Improving dosage should be the primary driver of continuous improvement efforts by providers and schools. Protecting dosage, even at scale, is ultimately about execution. Knowing that, by itself, does not offer useful, scalable insights into how everyone can beat the implementation gap. Nonetheless, based on our collective experience working in school systems, on the research, and on our own grantee partners, there are several hypotheses about how to drive up dosage that don’t require a finely tuned choreography of many people doing exactly the right thing at exactly the right time.
- Across grantees who achieved strong dosage in our most recent grant cycle, there was a common commitment to having school-based champions. These champions were able to problem solve in real time and keep tutoring dosage a priority within the building. What were these school champions doing to solve problems? We don’t really know. All we know is that implementation challenges are often multi-factorial and multi-faceted, and the solutions therefore often require local creativity.
- Finding time in the schedule for tutoring is often a major challenge, especially for students who may receive multiple supports. One way to address this is to fully integrate tutoring into pre-existing intervention blocks. There are already indications that tutoring outperforms status quo tier 2 interventions. This approach also moves tutoring closer to full instructional coherence with the rest of the academic strategy, which can also increase learning.
- It seems obvious that schools need to know whether the right students are getting enough tutoring; that visibility is what makes problem solving, outcome monitoring, and accountability possible. But setting up the data infrastructure necessary for real-time implementation visibility remains an ongoing challenge.
Having a school-based champion with the implementation visibility to solve problems, and the time during the day to execute, should add up to more consistent dosage and more reliable results, even at scale.
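To make “implementation visibility” concrete, here is a minimal sketch of the kind of check a dosage dashboard would run. The session-log schema, names, and sample data are all illustrative assumptions; the 60-minute weekly target echoes the roughly-an-hour-a-week pattern described above, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical session log: (student_id, week, minutes) rows. In practice this
# would come from an SIS or a tutoring provider's attendance export.
sessions = [
    ("s1", 1, 30), ("s1", 1, 30), ("s2", 1, 20),
    ("s1", 2, 30), ("s2", 2, 25), ("s2", 2, 10),
]

TARGET_MINUTES_PER_WEEK = 60  # illustrative threshold

def weekly_dosage(sessions):
    """Sum tutoring minutes per (student, week)."""
    totals = defaultdict(int)
    for student, week, minutes in sessions:
        totals[(student, week)] += minutes
    return totals

def flag_below_target(sessions, target=TARGET_MINUTES_PER_WEEK):
    """List the (student, week) pairs whose dosage fell short of the target."""
    return sorted(k for k, v in weekly_dosage(sessions).items() if v < target)

print(flag_below_target(sessions))  # → [('s1', 2), ('s2', 1), ('s2', 2)]
```

The point of the sketch is the feedback loop, not the code: a school-based champion who sees this list every week can intervene while the missed dosage can still be made up.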
The role of states
Schools and districts control the daily implementation. But states shape the conditions that make dosage possible.
States can build the data infrastructure that makes tutoring visible. They can set the expectation that if districts are going to use state dollars on tutoring, they need the data to prove it’s happening.
States can shape the policy environment, encouraging tutoring during the school day and integrating it into broader intervention systems.
And they can bring transparency to the tutoring market by reporting publicly not just where tutoring exists and who the providers are, but who is delivering the highest dosage, who is testing themselves with rigorous research, and who is doing it all in a cost-effective manner.
This requires a commitment to two things:
- Accountability. States need to drive the simple principle that districts and schools are accountable for delivering services, and that the state will publicly report results, including recognizing systems and schools with excellent delivery and calling out, or even defunding, weak performers.
- Student-level dosage reporting. Too often, state education agencies worry about political pushback when they require reporting. In this instance, the tight link between one key data point and student learning outcomes means that states must lean in, regardless of local pushback.
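Once student-level dosage is reported, the state-level rollup is straightforward. A minimal sketch, with hypothetical district names, field names, and sample numbers, of how a state might turn student-level records into a publicly reportable summary:

```python
from statistics import mean

# Hypothetical student-level reports: (district, student_id, avg_minutes_per_week).
records = [
    ("District A", "s1", 65), ("District A", "s2", 40),
    ("District B", "s3", 70), ("District B", "s4", 75),
]

def district_report(records, target=60):
    """Per district: tutored-student count, average weekly minutes,
    and the share of students meeting the weekly dosage target."""
    by_district = {}
    for district, _, minutes in records:
        by_district.setdefault(district, []).append(minutes)
    return {
        d: {
            "students": len(mins),
            "avg_minutes": round(mean(mins), 1),
            "pct_at_target": round(100 * sum(m >= target for m in mins) / len(mins)),
        }
        for d, mins in by_district.items()
    }

for district, stats in sorted(district_report(records).items()):
    print(district, stats)
```

A table like this, published alongside provider names and any rigorous evaluation results, is the transparency the tutoring market currently lacks.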
What we still need to learn
The next phase is not about whether tutoring works. It is about execution. That means scheduling. Attendance. Data systems. Implementation discipline.
But it also means learning. It means directly testing our hypotheses about data visibility, intervention-system integration, and school-based champions. It means looking across the existing evidence base to understand how much tutoring is enough. It means continuing to replicate evaluations of existing programs, in new contexts and new states. It means collecting rigorous cost data to better assess tradeoffs across tutoring options, and across interventions more broadly.
Despite the challenges inherent in growing dosage, I view this as a positive story. It’s a hell of a lot easier to manage one key driving factor than most things we do in education. In many ways, this is about will more than skill. If we can beat the implementation gap at scale, we can turn a good bet into a great bet for millions of students.