Now you can benchmark the performance of your Multi-Academy Trust too!

Last academic year we launched our free benchmarking tool for key stage 4, and then for key stage 2. Today we’re excited to announce that we’ve extended our suite of free tools to cover multi-academy trusts (MATs). You can now benchmark your MAT’s overall performance, as well as break it down by individual school, in our familiar house style of analytics.

For an individual school, the task of wading through mountains of public data to contextualise your performance is daunting (not to mention more time-consuming than is feasible for many schools). For a multi-academy trust this difficulty is multiplied several times over. Furthermore, once you have the data, it’s not immediately obvious exactly how you should use it to benchmark your MAT. But never fear, Assembly to the rescue: as well as doing the necessary structural work on the data, we’ve also thought about how it is best interpreted and understood. Here are the key points:

  1. We contextualise your MAT’s overall performance as if it were a school. It’s easy to compare the results of MATs against each other, but that doesn’t tell you whether any or all of those results are good in a national context. We weight the results of the schools within the MAT by pupil numbers to produce the overall MAT average, and then show the percentile those results would fall into if they were the results of an individual school (there’s a short sketch of this calculation after this list).

  2. We show you your schools’ results side by side. Following our principle that analytics work best when you drill down from the highest level to greater granularity, you can delve into your MAT average to look at the performance of individual schools. Much like the overall MAT score, their performance is contextualised in the house style you will already be familiar with from our single school benchmarking tool.

  3. We show your overall MAT performance against other MATs, allowing you to filter by the things that make them different. Comparing MATs against each other meaningfully is hard: a MAT is not a uniform ‘thing’, so comparing them is not necessarily comparing like with like. To begin to address this problem, we have added a ‘Comparative View’ tab to our dashboard, allowing you to filter MATs by two of the key criteria that make them different. Firstly, you can limit the list to MATs with a certain number of schools at the given phase. We’ve started by including all MATs with 2 or more schools with results for the selected key stage, but if you’re a large multi-academy trust you are likely to find comparison most helpful with similarly large groups. You can then also filter the list so that each MAT’s results cover only schools that have been with the trust for a certain length of time. This lets you focus on the schools each MAT has had the most time to influence (the second sketch after this list shows both filters).

  4. In addition to this, our general principles on how to compare and visualise remain the same. The MAT benchmarking tool keeps the house style we originally launched, so it requires minimal interpretation for those already familiar with our single school tool (and is designed to be intuitive for those who aren’t). We still use percentiles and deciles to contextualise results at a finer granularity. And, of course, we still advise looking initially at progress rather than attainment measures. The problem of differing school baselines applies at least as much to MATs as it does to individual schools, so once again progress measures are the fairer ones.
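
To make point 1 concrete, here is a minimal sketch of the weighting idea. The function names and numbers are illustrative assumptions, not our production code: we assume you have a headline score and a pupil count for each school in the trust, plus a sorted national distribution of school-level scores.

```python
from bisect import bisect_left

def mat_average(schools):
    """Pupil-weighted average of school results.

    `schools` is a list of (score, pupil_count) pairs, so larger
    schools contribute proportionally more to the MAT figure.
    """
    total_pupils = sum(pupils for _, pupils in schools)
    return sum(score * pupils for score, pupils in schools) / total_pupils

def national_percentile(mat_score, national_scores):
    """The percentile the MAT average would sit at if it were the
    result of a single school. `national_scores` must be sorted.
    """
    position = bisect_left(national_scores, mat_score)
    return 100 * position / len(national_scores)

# Illustrative numbers only: three schools' scores and cohort
# sizes, plus a made-up national distribution of school scores.
trust = [(0.12, 180), (-0.05, 95), (0.30, 210)]
national = sorted([-0.6, -0.3, -0.1, 0.0, 0.05, 0.1, 0.2, 0.4, 0.5, 0.7])

score = mat_average(trust)
print(f"MAT average: {score:.2f}")                                        # 0.16
print(f"National percentile: {national_percentile(score, national):.0f}")  # 60
```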
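
Similarly, a sketch of the two ‘Comparative View’ filters from point 3. The field names are again assumptions for illustration rather than our actual schema: schools below the minimum time in trust are dropped first, then any MAT left with too few schools at the selected key stage is dropped.

```python
def comparable_mats(mats, min_schools=2, min_years_in_trust=0):
    """Keep only schools that have been in their trust long enough,
    then keep only MATs with enough such schools at this phase.

    Each MAT is a dict like:
      {"name": ..., "schools": [{"score": ..., "years_in_trust": ...}]}
    """
    filtered = []
    for mat in mats:
        schools = [s for s in mat["schools"]
                   if s["years_in_trust"] >= min_years_in_trust]
        if len(schools) >= min_schools:
            filtered.append({"name": mat["name"], "schools": schools})
    return filtered
```

Note that the order matters: a MAT with three schools but only one long-standing member drops out of a “two or more schools, three or more years” comparison entirely, rather than appearing with a single-school average.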

We are always keen for feedback, and that applies more than ever to a tool based on a new concept. Whether you love it or hate it, we want to know what you think. Email rachel@assembly.education to share your thoughts. We’d love to hear from you!

Finally, some caveats about the data:

Linking schools with MATs is not particularly easy in the national data set, and the dates on which schools joined a MAT are particularly difficult to pin down. We’ve tried our best to work around this: we feel it is better to do something with the potential to be useful than to discard the idea entirely because of data issues. However, please be aware that we can’t guarantee there won’t be errors in assigning schools to MATs, or in how long they’ve been in them. We’re more than happy for you to let us know if you spot an issue!

In addition, there is no way to retrospectively build a history of a school's MAT membership. Some schools have been in more than one MAT, and as far as we are aware there is no public data that allows us to track this. This means it's not possible for us to build a tool that looks at MATs over time.