MAT benchmarking is a complex beast. The DfE releases MAT performance tables in January, but until then there is nothing official to work from, even though provisional KS4 data is released in October. Furthermore, once the DfE does release data, they only count schools if they’ve been with a MAT for three years. There is a decent rationale for this: from an accountability perspective, it may be harsh to include a school in a MAT-wide measure if they’ve only recently joined, and possibly in difficult circumstances. However, this does mean that it’s hard to know the broader context of a MAT across all its schools without delving deep into the data for yourself...
So to help, Assembly has updated the MAT part of its free, public benchmarking tool to include provisional KS4 data. Doing so was more of a mission than you might think, as there are multiple places you can look on the DfE’s website to establish which schools are in which MAT, and the various sources don’t always reconcile. We also had to do some work to combine the DfE’s “Trust” and “Sponsor” fields, which can spell a MAT’s name differently. (See the bottom of this page for a summary of the main technical choices we made in putting this tool together.)
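To give a flavour of what reconciling the "Trust" and "Sponsor" fields involves, here is a minimal sketch of the kind of name normalisation that lets two spellings of the same trust collapse to a single key. The function name and the filler-word list are illustrative assumptions, not our exact production logic.

```python
import re

def normalise_mat_name(name: str) -> str:
    """Collapse trivial spelling differences between the DfE's
    'Trust' and 'Sponsor' fields (an illustrative sketch only)."""
    name = name.upper().strip()
    name = re.sub(r"[^A-Z0-9 ]", "", name)             # drop punctuation
    name = re.sub(r"\b(THE|LTD|LIMITED)\b", "", name)  # drop filler words
    return re.sub(r"\s+", " ", name).strip()

# Two spellings of the same (made-up) trust resolve to one key
assert normalise_mat_name("The Example Academies Trust Ltd") == \
       normalise_mat_name("EXAMPLE ACADEMIES TRUST")
```

In practice the matching needs more care than this (and some manual checking), but a canonical key of this sort is the usual starting point.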
The tool can be found here. Here is a brief recap of what you’ll see:
We contextualise a MAT’s overall performance as if it were a school. It’s easy to compare the results of MATs against each other, but that doesn’t tell you whether any of those results are good in a national context. So we weight the results of all the schools that have been in the MAT for at least a year by pupil number to produce an overall MAT average. We then show the percentile these results would sit at if the MAT’s results were those of an individual school.
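The weighting works along these lines. All the figures below are made up for illustration; the real calculation uses the DfE’s published pupil numbers and the full national distribution of school results.

```python
# Hypothetical (pupils, Attainment 8 score) pairs for a MAT's schools
schools = [(180, 52.0), (120, 44.5), (200, 47.8)]

# Pupil-weighted MAT average
pupils = sum(n for n, _ in schools)
mat_avg = sum(n * score for n, score in schools) / pupils  # 48.52

# Illustrative national distribution of school-level scores (not real data)
national = sorted([38.2, 41.0, 44.5, 46.1, 47.9, 49.3, 51.0, 53.4, 56.0, 60.1])

# Percentile the MAT would sit at if it were a single school
percentile = 100 * sum(s <= mat_avg for s in national) / len(national)
```

The larger schools pull the average towards their own scores, which is the point: the MAT average reflects the experience of a typical pupil in the trust, not a typical school.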
We show you all schools’ results side-by-side. This makes it easy to delve into the MAT average to look at the performance of individual schools, and how that contributes to the overall average.
We allow you to filter and compare your overall MAT performance. Comparing MATs against each other meaningfully is hard: a MAT is not a uniform ‘thing’, so you need to think about how best to compare like with like. To help, we’ve included a ‘Comparative View’ tab on our dashboard, which lets you filter MATs by two of the key criteria that distinguish them. Firstly, you can limit the list to MATs with a certain number of schools at the given phase. We’ve started by including all MATs with 3 or more schools with results for the key stage, but if you’re a large multi-academy trust you’re likely to find comparison most helpful against similarly large groups. Secondly, you can filter the list to show MATs’ results only for schools that have been with the trust for a certain length of time. This lets you focus on the schools each MAT has had the most time to influence.
More broadly, our general principles on how to compare and visualise data remain the same. The MAT benchmarking tool uses our house style, along with percentiles and deciles to contextualise results in more granularity.
If you like what you see, feel free to get in touch to talk to us about Assembly Analytics, our market-leading data analysis tool for MATs, which extracts data from a wide range of your key internal systems to show you all your key performance metrics in one place.
Technical choices we made
- The underlying data is a combination of the KS4 provisional data, the Establishment fields file and the Academy sponsor and trust links file, all made available by the DfE.
- We grouped schools together using URN (the school-level identifier) and UID (the trust-level identifier) in order to identify all MATs with 3 or more secondary schools that have results in the provisional KS4 release. We excluded special schools, which left us with 140 MATs.
- We created weighted averages for all key measures, using the relevant pupil numbers for each measure. (The DfE helpfully publishes a different pupil number for attainment and progress measures, since there are often fewer children included in the progress calculation.)
- We used the national averages for state-funded schools (RECTYPE=7 on the provisional file), rather than the all school figure (RECTYPE=5), since we think this is the right comparator for the task in hand. However, MATs sometimes refer to the all-school figures, which tend to be a little lower.
- We calculated the number of years each school has been in its MAT ourselves, by comparing the school’s join date with 01.09.2018.
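The years-in-trust calculation can be sketched as follows. This is an illustration of the approach rather than our exact code; the function name is our own, and we assume whole years completed by the reference date is what counts.

```python
from datetime import date

# Reference date used in the text: the start of the 2018/19 academic year
REFERENCE = date(2018, 9, 1)

def years_in_trust(join_date: date, reference: date = REFERENCE) -> int:
    """Whole years between a school's join date and the reference date
    (an illustrative sketch of the approach)."""
    years = reference.year - join_date.year
    # Knock one off if the join-date anniversary falls after the reference
    if (join_date.month, join_date.day) > (reference.month, reference.day):
        years -= 1
    return max(years, 0)

years_in_trust(date(2015, 4, 1))   # joined April 2015: 3 full years
years_in_trust(date(2016, 10, 1))  # joined October 2016: 1 full year
```

Note how a school that joined in October 2016 counts only one full year by September 2018, which is exactly the sort of school the DfE’s three-year rule would exclude from its own MAT measures.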