Inspired Education Services is a Gold Coast dyslexia tutoring and consultancy service provider led by Sarah Mitchell which diagnoses and supports learners with dyslexia and other specific learning difficulties. Inspired Education Services also offers staff training and development opportunities for schools.

Standardised testing – the results are in!

The Australian government recently released a statement expressing disappointment that the hundreds of millions of dollars spent on school improvement initiatives – a key component being standardised testing – have made virtually no impact on student achievement levels. “A plateau in performance is not good enough at a time when we’re putting record levels of funding into Australian schools, which has grown by some 23 per cent over the last three years.” – Simon Birmingham, Minister for Education and Training, 3rd August 2016. This is a sad and shocking result, given such a significant increase in funding. So why is this the case?

There is a belief that we can measure teacher performance based on student achievement. This seems perfectly reasonable when put in such simple terms: input should be reflected in outcomes. But like all good stories, student achievement is multifaceted, complex, and affected by a huge range of issues. The types of skills we can measure on such tests are extremely limited, and test writing itself is far from a perfect process. We must also stop to understand that student achievement is not the same as student progress. Students at a school with a high proportion of English-as-an-additional-language learners from disadvantaged backgrounds cannot be expected to reach the same levels as children from middle-class, English-speaking families. The backgrounds of these two groups of children are vastly different, and the gap between their starting points when they enter school is wide (and tends to keep widening). But teachers may be able to get those same disadvantaged students to make more progress than their peers, and this is what should be applauded and rewarded. What we should be measuring is each individual teacher’s ability to progress students well. That is all we can ask of them.

A friend of mine recently told me of her slight embarrassment that her son, who was gifted at maths, was receiving additional maths tutoring to help push the school’s NAPLAN scores up. She rightly questioned why he was getting such support when there were likely children who needed it much more. Such is the effect of grading schools based on exam results. Parents, families, and wider society are entitled to a compulsory education system that serves all the students who attend it, not just the average to above-average students to whom instruction is normally geared.

Another friend answered my objections about standardised testing with, ‘Yes, but it gives us a baseline of achievement over time.’ No, it doesn’t. As the saying goes, ‘There are lies, damned lies, and then there’s statistics.’ Each year group can only be compared with other students in the same year group, because results are bell curved. For example, if a year group does particularly well, the bell curve is simply moved up so that the majority of students fall in the average range. Likewise, in a poorly performing year, the curve is moved down. So, while it is widely believed that the marks are comparable year on year, they really aren’t.

Test difficulty can vary widely too. Despite the best intentions and lengthy efforts of test designers, there are often huge differences in overall difficulty level. This is allowed for in the bell curving; however, students will quickly lose confidence in a more difficult test (particularly students with specific learning difficulties), get tired, and miss questions they could have answered. This affects their overall score dramatically, placing learners with difficulties at an even greater disadvantage.

In most cases, as an experienced teacher, I can predict the score my students will achieve on these blanket tests before they take them. No specific feedback is given on what the students achieved, so nothing more detailed than what I already knew can be gained from these tests. This is hardly surprising, given they are not designed to inform teaching. They are designed to check up on teachers – we shifty, lazy people who, if not closely monitored, will not do the job we’re being paid for. The problem is, they don’t even achieve this aim. Can we blame the year 2 teacher for the results that year? Or is it the year 1 teacher’s fault too, or the reception teacher’s? Or the ineffective teacher aide’s? Or the dysfunctional school’s? The head’s? The parents who did not read or speak to their child enough? Societal disadvantage? These tests do not place accountability in targeted places, so it remains all too easy for every stakeholder to shrug their shoulders and say, ‘What can I do?’

Much of school life is now spent teaching to the tests and taking practice tests. This places huge, unnecessary pressure on children, particularly children with specific learning difficulties and other disabilities, who already spend much of their school life in a state of anxiety. Teaching to the tests wastes precious learning time that could be filled with far more effective and engaging lessons and programs that drive student achievement forward much faster. There is more to English than can be measured on the NAPLAN.

I am not at all claiming data collection is not a useful tool for schools. It absolutely is. I love collecting and analysing my data as much as the next school leader – but it has to be good data: useful data, data that informs teaching programs and tells teachers something they didn’t already know. It allows teachers to deepen their understanding of each student’s specific strengths and needs in particular areas so that instruction and goal setting can be more targeted. It also allows them to deepen their understanding of their own strengths and needs.
So how do I collect data? Well, with standardised tests, of course. But I want to draw a clear distinction here. Most of us are familiar with school-wide standardised tests such as NAPLAN. These tests take hours to complete and assess a very broad range of academic skills – grammar, spelling, punctuation, vocabulary and so on – all in one test. The student is given an overall score, which is intended to reflect their achievement in English.

The tests used by professional assessors like myself are also standardised tests. ‘Standardised’ means that a large, appropriately selected sample of the population has been tested and the results placed on a bell curve, so that those taking the test in the future can be compared with peers in the same age group to determine whether they are low, average or high performing in these areas. The difference with the tests we use is that they are highly specific to each skill set. This gives us benchmarks for the particular cognitive and academic skills a student may have, and helps us build a profile to determine how the student learns best, as well as their current academic attainment.
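The arithmetic behind this kind of norm referencing can be sketched in a few lines. The Python snippet below is purely illustrative – real test norms are built from large, stratified, age-banded samples and published conversion tables, and the raw scores and cut-offs here are hypothetical – but it shows how a raw score is converted to a standard score (mean 100, standard deviation 15) and then to a low/average/high band relative to same-age peers.

```python
import statistics

# Illustrative sketch only: real norms come from large stratified samples
# with age-banded conversion tables, not a single mean and SD like this.

def standard_score(raw, norm_sample, mean_ss=100, sd_ss=15):
    """Convert a raw score to a standard score (mean 100, SD 15)
    relative to a norming sample of same-age peers."""
    mu = statistics.mean(norm_sample)
    sigma = statistics.stdev(norm_sample)
    z = (raw - mu) / sigma          # distance from the mean in SD units
    return round(mean_ss + z * sd_ss)

def band(ss):
    """Label performance relative to the bell curve (common convention:
    within one SD of the mean counts as 'average')."""
    if ss < 85:
        return "low"
    if ss > 115:
        return "high"
    return "average"

# Hypothetical raw scores from a norming sample of same-age peers
norms = [12, 15, 18, 20, 21, 22, 23, 25, 27, 30]
ss = standard_score(24, norms)
print(ss, band(ss))  # a raw score of 24 lands just above the sample mean
```

Because every future examinee is mapped onto the same peer-referenced curve, the resulting standard scores are comparable across students of the same age, which is exactly what a highly specific test of, say, single-word reading needs to deliver.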

The time it takes to administer a test is a key factor in selection (most of the tests in the battery I use last around 5–10 minutes each). The types of tests we use cannot, and should not, be practised. They provide a clear ‘snapshot’ of where the student is at that time, and tell us a great deal about what and how we should be teaching our students. They are most often administered individually, giving the examiner a chance to observe the student and make recommendations for teaching. They also tell us a great deal about the specific skills teachers have been successful in teaching, and the skills they need more training and support to teach better. In other words, they tell us something new and specific about the students, and something new and specific about the teachers’ skill sets. Because they are quick and specific, it is possible to measure an individual teacher’s performance across a set time period, such as every six months, which gives them time to further tweak the way they are teaching their classes.

So how does this kind of testing look?

I am not suggesting every child undergoes a full diagnostic assessment; that would be a huge waste of time and money. But it is possible to have whole-school, targeted standardised testing, with careful test selection that obtains the necessary data without over-testing or wasting time. For example, a child’s single-word reading, vocabulary, and specific aspects of reading comprehension can be tested quickly (within an hour), providing information about where reading breakdown occurs and what needs to be done to help the student (or teacher) improve. Testing working memory, processing speed and phonological awareness (the ability to isolate and manipulate speech sounds within words) can tell us how to alter the modalities we use to deliver information to suit the learning styles of our students. Using short, specific standardised tests gives far richer data than school-wide, generalised tests based on subject areas. This method of testing is also much more efficient, and has a far greater chance of impacting teaching and learning.

The current school-wide standardised testing does not achieve its aim – how could it? So how can the government be surprised by the results? Letting teachers know you are monitoring them closely does little to change the end game for students. It just demoralises teachers and makes them feel they are being treated like naughty children. Teaching is a fairly self-weeding profession. Students can sniff out an unmotivated, uncaring teacher from a distance, and will give them all kinds of grief. To keep our own stress levels low, we teachers put huge amounts of time and preparation into making lessons interesting, knowing our subject content and managing students the best we can (unless standardised tests are coming up, and then we have the dreaded task of trying to force students through hours of dull exam practice). If we don’t provide well-structured, exciting learning experiences, we suffer the consequences in the classroom. Most teachers do the best they know how on any given day, with the knowledge, resources and time available to them. We should work on the assumption that they did the best they could, and if that falls short, we should find ways to help them improve. In any case, can a largely multiple-choice test really tell us whether we are preparing our students for the 21st century? The vast majority of the skills they will need cannot be measured in this way. Simply looking over teachers’ shoulders will not help, but informing, training and investing in them most definitely will.