What to consider when analysing your school data

At Assembly, our approach is centred on data as a tool for school improvement. We don't believe in data for data's sake; instead, we focus on making data a valuable input into a school's decision-making process. We're particularly conscious that it can be tempting to jump to conclusions when analysing data dashboards, so in this article we focus on things to bear in mind before you act on your school data.

  1. Consider the school's context
    Every school has different socio-economic circumstances, demographics and levels of prior attainment. A school may have a higher-than-average proportion of pupils with English as an additional language (EAL), pupils eligible for the pupil premium, or pupils with special educational needs. All of these should be considered when looking at key performance indicators like attainment and progress.
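
    As a minimal sketch (the data and column names here are hypothetical), contextual breakdowns can be as simple as grouping a cohort's scores by pupil characteristic before comparing headline figures:

```python
import pandas as pd

# Hypothetical pupil-level data: each row is one pupil's standardised score
# plus the contextual flags discussed above.
pupils = pd.DataFrame({
    "score":         [92, 104, 110, 87, 99, 101, 95, 118],
    "pupil_premium": [True, False, False, True, True, False, False, False],
    "eal":           [False, False, True, True, False, False, True, False],
    "sen":           [False, False, False, False, True, False, False, False],
})

# Read each headline average alongside the group averages that sit behind it.
print("Cohort mean:", round(pupils["score"].mean(), 1))
for flag in ["pupil_premium", "eal", "sen"]:
    summary = pupils.groupby(flag)["score"].agg(["mean", "count"]).round(1)
    print(f"\nBy {flag}:\n{summary}")
```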

  2. Check how your curriculum aligns with your assessments
    In an ideal world, assessments would map perfectly to the relevant curriculum. In practice, with standardised assessments, a school may find that it is teaching topics in a slightly different order from the one the assessment assumes. It's therefore wise to check how closely the two align before setting a test. Of course, an assessment should never drive curriculum choices, but mapping the two can usefully prompt schools to reflect on the order in which they choose to teach. It needn't be the end of the world if the two do not align, but understanding where any gaps might be is an essential part of interpreting the results. This is particularly important when using standardised tests to compare classes, year groups or schools.
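
    One lightweight way to check alignment in advance is a simple set comparison between the topics taught by the test date and the topics the assessment assumes; the topic lists below are invented for illustration:

```python
# Hypothetical topic lists: what will have been taught by the test date,
# versus what the standardised assessment assumes has been covered.
taught = {"place value", "addition", "subtraction", "shape", "fractions"}
assessed = {"place value", "addition", "subtraction", "fractions", "measurement"}

not_yet_taught = assessed - taught       # likely gaps when interpreting results
taught_not_assessed = taught - assessed  # covered content the test won't reflect

print("Assessed but not yet taught:", sorted(not_yet_taught))
print("Taught but not assessed:", sorted(taught_not_assessed))
```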

  3. Bear in mind summer learning loss
    There is research to support summer learning loss in maths and English skills (particularly in spelling and grammar), and it is most evident in primary-aged children (Galton 1999). Spaced repetition and recall are widely documented as among the most effective ways to embed new learning in long-term memory (Bjork 2010, Custers 2011), so a long gap without recall can lead to the forgetting of unrehearsed knowledge, and this forgetting follows a predictable pattern. With this in mind, it is not surprising to see a dip in assessment scores between the summer and autumn terms, and the effect is more pronounced in children from disadvantaged backgrounds (Galton 1999). A similar dip can be seen in children transferring from one school to another (Hargreaves and Galton 1999), as they adjust to a new environment and often a new curriculum, which makes summer learning loss a particular factor in the transition from year 6 into secondary school.

    In short, it might not be a cause for alarm if your autumn summative assessment data shows a level of performance slightly below the previous summer's. One way of establishing what's happening is to dive into the detail of the assessments (for example, strand or question-level analysis). You may also find it helpful to look at year-on-year performance, if you used the same assessment in the previous year.
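
    To help separate an expected summer dip from a genuine problem, it can be useful to look below the headline figure. Here is a hypothetical sketch (strand names and scores are invented) comparing autumn cohort averages with the previous summer's, strand by strand:

```python
import pandas as pd

# Hypothetical cohort averages from the same standardised assessment series,
# sat in the summer term and again in the autumn term.
scores = pd.DataFrame({
    "strand": ["number", "geometry", "statistics", "grammar", "spelling"],
    "summer": [104.0, 101.0, 99.0, 103.0, 100.0],
    "autumn": [102.0, 100.0, 98.0, 99.0, 94.0],
})

scores["change"] = scores["autumn"] - scores["summer"]

# A broad, shallow dip is consistent with summer learning loss; a sharp drop
# concentrated in one strand (spelling here) is worth investigating further.
print(scores.sort_values("change"))
```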

  4. Consider the reliability of the assessment
    There is no such thing as a perfectly reliable assessment, but some assessments are more reliable than others. For example, teacher-assessed grades are inherently less reliable than those produced by a standardised assessment, and measures of progress derived from teacher-assessed data are particularly vulnerable to reliability issues. Therefore, it's worth reflecting on the likely reliability of any assessment you use before inferring too much from the results. At Assembly, we're finding that schools are increasingly keen to use standardised tests to measure progress and make comparisons between schools (for example, within a MAT). To quote primary data expert James Pembroke: "MATs - and other organisations - have a choice: either use standardised assessment to compare schools or don't compare schools. In short, if you really want to compare things, make sure the things you're comparing are comparable."
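
    Reliability can be quantified in several ways. As one illustration (not something Assembly's dashboards necessarily compute), internal consistency can be estimated from question-level marks using Cronbach's alpha; the marks below are hypothetical:

```python
import pandas as pd

# Hypothetical question-level marks: one row per pupil, one column per question.
marks = pd.DataFrame({
    "q1": [1, 0, 1, 1, 0, 1, 1, 0],
    "q2": [1, 1, 1, 0, 0, 1, 1, 1],
    "q3": [0, 0, 1, 1, 0, 1, 0, 0],
    "q4": [1, 0, 1, 1, 1, 1, 1, 0],
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency estimate: values closer to 1 suggest a more reliable test."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-question variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of pupils' total marks
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(marks):.2f}")
```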

    If you’re creating your own assessments, it can help enormously to agree on some common principles to be used by all teachers (and schools, if the policy covers a whole MAT). For example, does the assessment cover content for the term only, or for the whole curriculum to date? And if using teacher judgments, to what extent are judgments expected to be based on work in books, coursework, or assessments?

  5. Decide whether to focus on attainment, progress, or both
    Where good data exists, it’s sensible to consider progress and attainment together. Attainment tells you how an individual performed in a given assessment in relation to a specific standard, which helps to demonstrate how well pupils have understood a particular concept or collection of topics. Progress, however, gives a more nuanced view of how a pupil’s knowledge and skills are improving over time. If you’re looking at progress, the first step is to decide exactly what progress means in your setting.

    Demonstrating that pupils are making progress can be very difficult, and trying to define what represents good or expected progress is fraught with difficulty. The old measure of a set number of levels or sub-levels per term or year was abandoned in June 2013, but no universally accepted replacement has emerged. That said, we should all be able to agree that to measure progress you need reliable, comparable data for both the start point and the end point. In our view, the best way to achieve this is with standardised assessments. Assembly’s dashboards typically measure the change between two standardised assessments; if a common scale is used across year groups, it becomes possible to analyse the change in performance over time, as sketched below.
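
    As a concrete sketch of that idea (pupil identifiers and scores are hypothetical), progress on a common standardised scale is simply the change in each pupil's score between two sittings:

```python
import pandas as pd

# Hypothetical standardised scores for the same pupils at two points in time,
# both reported on a common age-standardised scale (mean 100).
start = pd.DataFrame({"pupil": ["A", "B", "C", "D"], "score": [96, 103, 110, 88]})
end = pd.DataFrame({"pupil": ["A", "B", "C", "D"], "score": [101, 102, 112, 95]})

progress = start.merge(end, on="pupil", suffixes=("_start", "_end"))
progress["change"] = progress["score_end"] - progress["score_start"]

# Because both scores sit on the same standardised scale, a positive change
# means improvement relative to the national cohort, not just getting older.
print(progress)
```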

    Of course, progress itself is subjective. Progress for one learner might be a consolidation of prior learning to acquire a deeper understanding, while for another it might be an increase in complexity and knowledge related to a topic, so it’s important not to overinterpret progress data. There is no 'one size fits all' measure that reflects the organic, non-linear nature of learning. Pupils develop at different rates in different subjects, and progress in maths may look very different to progress in English, especially for EAL pupils. That said, in aggregate, and if grounded in good quality assessments, progress data ultimately offers a good yardstick of the change in relative performance over time.

At Assembly, we strive to present your school data in a way that allows you to make data-driven school improvement decisions. Data should be a starting point for asking questions, never an end in itself.