MAT Analytics is hard. At primary, we think we've cracked it.

Multi Academy Trust analytics is fiendishly hard to do well. Gathering and interpreting data from EYFS to Key Stage 5 across diverse schools is a huge challenge in two ways:

  • Conceptually, you want to compare and analyse the performance of all your schools in a reliable fashion. That requires a common set of measures across all your schools. Academic data is particularly tricky: schools collect plenty of it, but most in-school assessment data is not fully standardised, so comparisons between schools may not be accurate. You’ll also need to define a bunch of other Key Performance Indicators (KPIs) to cover things like attendance, exclusions, finance and so on. Agreeing on what to track - and persuading all schools to use the same system - is no mean feat.

  • Technically, you’ll face huge hurdles. Once you’ve defined your KPIs, you’ll find that they sit in several different systems which don’t talk to each other. And if you somehow manage to pull them together into one big spreadsheet or database, you’ll almost certainly find the process incredibly time-consuming, with the underlying data quickly becoming out-of-date.

In this blog, I’m going to focus on how to handle primary analytics in Multi Academy Trusts. That’s because at Assembly, working with MAT partners such as Astrea Academy Trust and Ark Schools, we think we’ve cracked it. (We’re working with those same groups on secondary analytics too - more on that to follow at a later date…)

Before we tell you how we’re approaching the issue, it’s worth taking a moment to describe the systems most commonly used today for primary analytics - and why none of them yet solves the whole problem:

  1. KS1 and KS2 SATs reporting tools are widely available, including the DfE’s Analyse School Performance (formerly RAISEonline). However, since they only deal with national exams, these can only ever tell you how two year groups out of seven fared. And, in the case of the KS2 children, they give you that information at precisely the point when you can no longer do anything about it, because those pupils have already left you for secondary school. So you’re not really looking at management data; you’re conducting a post-mortem. What’s more, when it comes to KS1, you have to take into account that the raw data comes from Teacher Assessments, which, as we’ve blogged about previously, Education Datalab and others have shown to be problematic from a reliability perspective.

  2. Most primaries use an Assessment Tracker to gather formative data on pupil progress, usually by asking teachers to record judgments against curriculum statements or progression frameworks. Many teachers and schools swear by these as in-class tools for keeping track of performance and planning interventions day-to-day (and Assembly partners with a great one called Balance). However, in our view, you encounter serious problems when you start aggregating such data to serve as your summative assessment model for comparing schools to each other.¹ This is not just because Teacher Assessments are sometimes unreliable, as mentioned in (1) above; it’s also because you skew the incentives for staff. In other words, if a classroom teacher knows that a Director of Education will see aggregated data from their classroom tracker, and possibly use it for performance management purposes, there will be an inevitable tendency to present those results in the best possible light, even if that means the data becomes less accurate for its original formative purpose. And of course, two teachers at different schools in a MAT may have very different interpretations of when a criterion is “met” or “mastered”. That may not matter so much in their respective classrooms - they’re professionals who know their own internal judging scale, after all - but it is a big problem when you try to contort those judgments into good-quality comparative data. And to cap it all, even if you do manage to gather somewhat reliable data, it’s almost impossible to set it in a truly national context, since you don’t have an easy way of benchmarking your trust against other schools.

  3. The school MIS contains a large amount of valuable data, including attendance and exclusions, but the MIS alone will often struggle to cover all your MAT analytics needs. For a start, many primaries record assessment data elsewhere (in a primary tracker, for instance). Then, many MATs run several different MIS across their schools, so even if a vendor has a nice MAT data portal (and certainly, some do), it may well be structurally impossible to see all schools’ data within one MIS ecosystem. Oh, and the MIS isn’t your finance system, so that’s another thing you’d be missing.

  4. The finance system will of course contain finance data (durr)... but it won’t have access to all your other school data, like assessment, attendance and exclusions.

In our experience, what this means is that primary MATs create a complex central spreadsheet that attempts to aggregate all the relevant KPIs from multiple data sources into a single set of dashboards. Sooner or later, this becomes a high-maintenance, glitchy, clunky file, with questionable-quality data. Surely there has to be a better way?

Well, the good news is that at Assembly, working with our MAT and industry partners, we think we’ve cracked it. Welcome to Assembly Analytics.

Assembly Analytics online demo

Here are the standout features:

  1. Assembly Analytics combines data automatically from multiple sources. Because we’re an analytics platform, we don’t try to be the data source. Instead, we connect to a range of MIS, finance and assessment systems. We then offer dashboards to MATs that present this data in a simple and intuitive format. (There’s a minimal sketch of this joining step after this list.)

  2. We add detailed benchmarks to contextualise the data. Without benchmarks, it’s very hard to make sense of MAT data. So we build contextual scales for all our headline measures, letting you see how your school or MAT fares against all schools nationally. Sometimes we use public datasets; other times we submit Freedom of Information (FOI) requests for national benchmarks; on occasion we create them ourselves. But the end goal is always the same: to give a school a percentile that reliably reflects its performance in a national context. (The second sketch below shows the percentile arithmetic.)

  3. We work with sector-leading standardised assessment partners such as RS Assessment from Hodder Education (via their PUMA and PiRA assessments for mathematics and reading respectively) and No More Marking (for writing assessment). We can’t overstate what a difference this makes. Because these assessments give a standardised score multiple times per year for years 1-6, we can display performance measures across multiple schools and year groups within a MAT. And because the results sit on the same standardised scale, we can also measure progress over time. (The third sketch below shows how simple that calculation becomes.)
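
To make the joining step in (1) concrete, here is a minimal sketch of the kind of merge involved. Everything in it is illustrative: the file names, column names and the idea of flat CSV extracts are assumptions for the example, not a description of how our integrations actually work.

```python
import pandas as pd

# Illustrative extracts from three separate systems, keyed on the school's
# URN (the DfE's unique reference number). File and column names are
# hypothetical; real integrations pull from each vendor's system directly.
attendance = pd.read_csv("mis_attendance.csv")     # urn, attendance_pct
finance = pd.read_csv("finance_summary.csv")       # urn, spend_per_pupil
assessment = pd.read_csv("assessment_scores.csv")  # urn, mean_standardised_score

# One row per school with every KPI side by side - the view a central
# spreadsheet tries, and struggles, to maintain by hand.
kpis = (
    attendance
    .merge(finance, on="urn", how="outer")
    .merge(assessment, on="urn", how="outer")
)
print(kpis.head())
```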
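
The percentile idea in (2) is simple arithmetic once you hold a national distribution for a measure. A rough sketch, with an entirely made-up attendance distribution standing in for a real benchmark dataset:

```python
import numpy as np

def national_percentile(school_value: float, national_values: np.ndarray) -> float:
    """Percentage of schools nationally at or below this school's value."""
    return float(np.mean(national_values <= school_value) * 100)

# Made-up national attendance figures purely for illustration; a real
# benchmark would come from a public dataset or an FOI response.
rng = np.random.default_rng(seed=1)
national_attendance = rng.normal(loc=95.5, scale=1.2, size=16_000)

print(round(national_percentile(96.2, national_attendance)))  # roughly the 72nd percentile
```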
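
And the progress measure in (3) falls out naturally once scores share a scale. A sketch with invented pupil-level scores (typical standardised scales centre on 100 with a standard deviation of around 15):

```python
import pandas as pd

# Invented pupil-level standardised scores from two assessment windows.
# On a common scale, a stable score already means a pupil has kept pace
# with the national cohort; a rising score means faster-than-average progress.
scores = pd.DataFrame({
    "school": ["A", "A", "B", "B"],
    "pupil": [1, 2, 3, 4],
    "autumn": [98, 104, 91, 110],
    "summer": [103, 105, 96, 108],
})
scores["progress"] = scores["summer"] - scores["autumn"]

# Aggregate to school level for a MAT-wide dashboard view.
print(scores.groupby("school")["progress"].mean())
```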

We’ve been working with Ark and Astrea to refine this approach over the last year, and we’re now also working with Windsor Academy Trust and Samuel Ward Academy Trust. By treating these relationships as partnerships, we get hands-on feedback from those trusts, which in turn helps us to improve, expand and refine the product. Assembly is a non-profit linked to Ark, so it helps in those conversations that our mission is not to sell more dashboards but to improve the quality of decision-making in schools.

If you’d like to know more, head to the demo on our website and have a play with Assembly Analytics for yourself. And if you’d like to have a chat about how Assembly Analytics could work for you, drop me an email at jperry@assembly.education.

¹ For an in-depth exploration of the complexities of assessment, including the differences between formative and summative assessment, I strongly recommend Making Good Progress by Daisy Christodoulou (formerly Head of Assessment at Ark, and now Director of Education at No More Marking).