The Assembly Analytics demo gives you a glimpse of our Multi-Academy Trust Analytics product

We’ve recently launched Assembly Analytics - a multi-academy trust dashboard, developed in partnership with Ark & Reach4. We wanted to give you a chance to experience the product for yourselves, so we’ve created an online demo to do just that. See for yourself how we harness the power of multi-school, multi-source data by transforming it into great analytics.

In our launch blog we outlined the six education data problems that Assembly plans to solve. We’ve been busy working on all of them, and have now released a demo of our multi-academy trust analytics product to show what we’re doing about problem #5: “Academy groups were created, but software for them was not”.

The Problem

Multi-academy trusts (MATs) need data from across multiple schools. This data gives them the oversight they need: knowing which schools are underperforming, and in which areas, lets them allocate resources where they will best improve pupil outcomes. Getting hold of this data brings with it a world of pain: schools may be using different management information systems (MIS), and some crucial management data is likely to be stored elsewhere.

Even if you do manage to overcome these formidable barriers, you’re still left with the problem of what to actually do with all this data: how do you build dashboards that distil this mass of data into key metrics and meaningful insight?

Our Solution

Enter Assembly Analytics. Not only do we bring everything together in one automated set of dashboards, but we also put great thought, care and research into education best practice and user-friendliness. For this we are lucky to have worked with some brilliant partners: our approach is rooted in the ethos of our co-founder and friend Ark Schools, and Reach4 has worked alongside us on the development of this growing suite of tools.

To those who have already used our benchmarking tools, the look and feel should be very familiar. And deliberately so: consistency of design is an important principle of ours, and it helps keep the dashboards simple and intuitive.

Our Approach

  1. We use standards-based comparison. We focus on metrics that can be tied to a standardised scale in order to create reliable national benchmarks. A key challenge for MATs until now has been finding a way to reliably compare academic performance across multiple schools outside of the national end-of-key-stage exams. Using standardised assessment data, our dashboards allow them to do so for every year group, multiple times per year. Our recent press release explains our approach in more detail.

  2. Data exploration should be interactive and intuitive. We hate it when dashboards throw out lots of information with no clear message. That’s why, when you load any of our dashboards, it will initially show you the most meaningful metrics at the highest level of aggregation, giving you the overall picture first. We structure our dashboards to make it easy to then drill down from this high-level summary into the more detailed picture.

  3. We ❤ percentiles. Comparison to the average is not enough. We do include averages in all of our dashboards, but simply knowing you’re above or below one doesn’t tell you much about a school’s performance. There’s a big difference between being just above average and being in the top 5% of schools in the country, for example, and it’s not possible to deduce this from the average alone: it depends on the size of the scale and the distribution of schools along it. A KS1-2 maths progress score of 3 might sound like quite a small number, but it would put you in approximately the top 10% of schools (whereas a KS4 value added score of 1,003 would put you at around average). It is for this reason that we use percentiles. A lot. And it’s standards-based comparison that makes this possible: it lets you compare across measures in the same language, placing every aspect of your performance in the national context at a granular level. (There’s a minimal sketch of this calculation after the list.)

  4. Data should be contextualised in an intuitive manner. You shouldn’t have to read a load of numbers to know how you’re doing. So we have included explanations and colour coding to make it easier to interpret percentiles quickly and visually.

  5. We separate attainment and progress measures. By doing this we make it immediately obvious if there is a difference between the two. We prioritise progress, putting it on the left where your eyes are naturally drawn first. If you’re interested you can read our full argument in a previous blog post.
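To make the percentile and colour-coding ideas (points 3 and 4) concrete, here is a minimal sketch in Python of how a score on a standardised scale might be turned into a national percentile and then a colour band. The function names, the colour thresholds and the tiny made-up “national” distribution are all illustrative assumptions for this sketch, not our production methodology.

```python
from bisect import bisect_left

def percentile_rank(score, national_scores):
    """Return the percentage of schools nationally scoring below `score`.

    `national_scores` must be sorted ascending. This is an illustrative
    calculation only, not our actual benchmarking methodology.
    """
    below = bisect_left(national_scores, score)
    return 100.0 * below / len(national_scores)

def colour_band(percentile):
    """Map a percentile to a traffic-light colour band.

    These thresholds are assumptions made for this sketch; a real
    dashboard would choose its bands with research and user testing.
    """
    if percentile >= 75:
        return "green"   # top quartile nationally
    if percentile >= 25:
        return "amber"   # broadly in line with the middle half
    return "red"         # bottom quartile nationally

# Hypothetical national distribution on a KS4-style value added scale
# centred around 1,000 (a tiny sample purely for illustration).
national_scores = sorted([994, 997, 999, 1000, 1001, 1002, 1004, 1008, 1012, 1020])

p = percentile_rank(1003, national_scores)
print(f"{p:.0f}th percentile -> {colour_band(p)}")  # prints: 60th percentile -> amber
```

Because every metric is expressed in the same percentile language, the same colour logic can be applied across measures, which is what makes the comparison granular in the sense described above.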

This tool is only a glimpse of the dashboards we’re building. If you are interested in what we’re up to, feel free to drop us an email. We’re also keen to hear your thoughts and feedback, as ever.