Using NAPLAN responsibly
This week the media was full of articles about the annual NAPLAN tests. NAPLAN stands for the National Assessment Program – Literacy and Numeracy, a series of tests undertaken annually across Australia by students in Years 3, 5, 7 and 9.
NAPLAN has been a highly contested space in the Australian school system since its introduction in 2008. Proponents argue that it provides an objective data set that sheds light on the performance of schools; opponents argue that the political atmosphere surrounding NAPLAN places undue focus on a moment-in-time test, leading schools to manipulate their programming to maximise performance rather than offering the rich curriculum that students need.
At the most extreme level, there have been incidents of schools pushing aside regular classes in favour of daily practice tests, and even breaches of test protocols designed to artificially improve a school's standing. At the other end of the spectrum, some schools have conducted campaigns to discourage participation and have asked parents to opt out of the testing program.
Like many debates in our society, the controversies around NAPLAN tend to push us into polar positions that either condemn or celebrate the test. In my view, while it would be a mistake to overemphasise the significance of NAPLAN, any data available to us is useful if used judiciously to improve student performance and as a possible indicator that a student may require closer attention.
NAPLAN is just one of the many datasets available to us as educators to help monitor individual, cohort and school performance. We subscribe to Allwell testing, which we use to track our students' progress. We also rely upon teaching teams cross-marking and sampling one another's classes' assignments to ensure the comparability of assessments across a year level. Teacher observations of students in class, along with monitoring of their various tasks, activities and assessments, help to build a picture of each student's capacity and performance, guiding teachers to adjust their approach so as to maximise individual student outcomes.
Any use of test data – whether from NAPLAN, Allwell or other sources – needs to come with the caveat that it is a snapshot of a moment in time, and that many factors can render a student's performance anomalous to their actual ability. As a broad rule, however, at King David we use the multiple data sources available to us in a number of ways to responsibly analyse our performance and approach.
Firstly, we use such data as one possible indicator of a student's potential or performance: if a student's outcomes are surprising, we treat that as a flag to follow up with further examination. Secondly, we can use this objective data at a class level. If one class's responses to a particular type of question are stronger than another's, we can ask the teaching team to liaise and share best practice with each other. If a gap seems evident across the cohort at large, this can prompt introspection about the curriculum and whether we need to devote more time to particular areas.
This represents a proportionate and sensible approach that makes use of whatever tools we have available to support our quest for continual improvement. What it avoids is an undue focus on the political atmosphere that can accompany high-stakes testing.
I hope that our community continues to trust that our school's strong academic performance is not a consequence of teaching to the test, but of our continued focus on supporting our teachers to develop and use every tool available to nurture each individual student to achieve their best.
Shabbat Shalom,
Marc