It’s time to tell the Emperor that he’s naked…
by David Price
Ask any teacher, anywhere in the world, which question their students ask most often and they’ll almost certainly reply, “Why do we have to learn this?” The teacher then responds with a mildly exasperated sigh. Personally, I had no problem during my fifteen-year teaching career with my students asking this. If they’ve given up over a third of their waking hours to be in your company, I figure they deserve an answer.
However, the good news for exasperated teachers is that “Why do we have to learn this?” may now have been knocked off the top spot. The bad news is that its successor tends to provoke even more exasperation. When assigning tasks, today’s teacher, in most developed countries, is now likely to be met with “Does it count?”
Loosely translated, this means “Are there marks for this coursework?”, or “Is it going to be on the test, because if it’s not, I’m not doing it.” This isn’t what most educators came into teaching for. But, once again, I don’t think we can blame the students.
I was once in a bar in London. The young barman turned out to be a post-graduate, and I asked him what else he did to earn a living. “I write degree dissertations for Chinese students – five hundred pounds for five thousand words.” Trying to stifle my holier-than-thou-ness, I mumbled, “Tell me, do you not feel, well, morally compromised by doing that?” His response was unforgettable. He shrugged, and said: “It’s the end-result of a market-driven system.”
And that’s why we can’t get too upset by our classroom students’ question – they’re merely the end-result of a market-driven system. As I write, the big news story on TV concerns the uncovering of the practice of excluding students at St Olave’s Grammar School in South-East London (founded 1571), because their interim tests predicted they would fail to get a B, or above, in their A-level examinations. The school has since relented, but, according to the Times Educational Supplement, around twenty thousand students per year quit before taking their terminal exams. It’s impossible to say how many of them left voluntarily and how many were pushed out, but you don’t have to look very far to see the driving force behind the St Olave’s exclusions. The school’s website homepage boasts:
“In a record year at St Olave’s Grammar School, students achieved a stunning 96% A*/B grades. A total of 75% of all grades were at A*/A, 3 percentage points up on last year’s. 32 students gained straight A* grades in at least 3 subjects. We did this by kicking out the dross that would have besmirched our reputation by getting a ‘C’ in their A-levels.”
OK, so I made that last sentence up. But whichever way you look at it, it’s pretty reprehensible behaviour. The pressure placed on schools by the publication of school league tables is potentially corrupting – whether you are St Olave’s, or, especially, an inner-city high school dealing with a host of social problems in a ‘no-excuses’ culture. Is it so surprising that some feel obliged to game the system?
Like the London barman who helped Chinese students buy their degrees, St Olave’s and the rest are simply proving that W. E. Deming, the management guru, was right when he said, “It’s human nature – give me a target, and I’ll find a way to hit it.”
The frequently unasked question, however, buried at the heart of the ‘Does it count?’ dilemma is this: does our apparent obsession with standardised testing count? Let’s take a look at this from a number of perspectives:
Does it count towards improving our education system?
At first glance the answer to this question would be an obvious ‘yes’. I mean, how else are we going to know if any given instructional strategy works, except by rigorously gathering evidence of its impact? The ‘datafication of education’ is rampant. While working in Australia, I was shown a report card for a student at a school in South-East England. It was simply a print-out of the mass of data collected on the student, including interim test scores, targets and predicted grades, for each subject studied. No comments. No data on the student’s contribution to class discussions or well-being. Just a bunch of letters and numbers. The horrified Australian teacher was sharing it with colleagues as a warning that they could be heading down the same data-obsessed road.
Of course, data is important. It can inform changes of practice and policy. But a slavish adherence to data can be depersonalising, deskilling and destabilising for parents, teachers and students. Perhaps, though, these are sacrifices worth making if the end goal is unambiguous proof of effective teaching practice?
In the US, UK and Australia, various versions of ‘What Works’ are being touted as the key to objective ‘evidence-based improvement’. I’m not arguing that skilled practitioners shouldn’t gather data on their students’ progress, but as we’ve already seen, when that evidence is gathered under high-stakes conditions, it can be subject to distortion. And when it’s the only evidence gathered, as is the case in most educational trials, then things can get dangerously prescriptive. In the overwhelming majority of trials, evaluations and pilot initiatives, the sole yardstick is: did the students’ test scores improve? This is an overly narrow arbiter of success and, as Prof. Yong Zhao and others have pointed out, it ignores the side-effects of any given intervention. We wouldn’t approve a cancer treatment, however successful, if the patient suffered a total loss of appetite and subsequent starvation. So, why do we approve literacy interventions without checking whether the side-effects include, say, the loss of the student’s desire to read?
Unless we want to regard kids as a set of automated widgets in a factory, shouldn’t we be coming to conclusions on what works by drawing on a far wider set of indicators?
Does it count towards national prosperity?
Categorically not. The PISA international assessments of performance in literacy, numeracy and science have become the holy grail of evidence, when in fact they are the Emperor’s New Clothes of education. In fairness, the OECD, who administer the tests, never intended them to be the triennial judgement on whether we’re all going to hell in a handcart – it’s politicians and journalists who have turned them into the ultimate high-stakes test. National education strategies all over the world are formulated with the stated intent to ‘make our nation globally competitive’, invariably citing run-of-the-mill performances in PISA league tables as warning signs that, in the race-to-the-top, we are falling behind.
The ridiculous over-simplification of these pronouncements can be exposed by looking at just one country’s correlation with PISA across a range of measures: America. Out of the 65-plus countries assessed by PISA, the United States has consistently ranked mid-table. Could do better. Cue US Education Secretaries, over nearly 20 years, calling for more standardised testing to improve student performance (which is akin to growing healthy plants by pulling them up regularly to see how they’re doing). So, let’s see how this has affected national performance:
Institute for Management Development’s Index of Global Competitiveness 1996-2015 #1: USA
Thomson Reuters Analysis of Scientific Papers Published 2001-2011 #1: USA
Number of Mathematics Papers published #1: USA
Global Creativity Index 2015 #2: USA
Innovation Index (as judged by patents produced) #1: USA
Global Entrepreneurship Index #1: USA
(Note: the target populations for most of the above were adults aged 25 to 35. The PISA performance of this age cohort during 2000-2009 – when they were 15-year-olds – in reading, maths and science was on, or close to, OECD mean scores. Mid-table obscurity, in other words.)
‘PISA hysteria’ isn’t based upon any sensible correlation between a country’s ranking and a range of prosperity measures. So, why does it matter? Because the drive for more standardised testing, across a range of countries, is fuelled primarily by official responses to PISA results. Roll on the day when an Education Secretary of State responds to PISA by saying “We’ve looked at the data and decided that it doesn’t really tell us anything, so we’ll keep doing what we think is best for the well-being and future prospects of our children.” Maybe the Finns already did it. Here in the UK, if the aim of successive governments’ education strategies has been to significantly improve our performance in global rankings, then we have succeeded in making our kids miserable (as Madeleine Holt and John Rees will attest in the following chapters), but failed miserably in the stated objective. Lose-lose. Rather than accept that the ‘exam factories’ our schools have become don’t work, for anyone, some government ministers and, shamefully, some educators, blame the students, labelling them ‘generation snowflake’.
Does it count for our children’s life chances?
Apparently not. Counter-intuitive as it may seem, test scores do not indicate who’s going to succeed in getting into university, or into well-paid employment. Research suggests a range of significantly better indicators: the obvious one of economic status; levels of self-belief; the ability to build relationships and networks; resilience… all better predictors of future success than how well a student did in exams.
This collective self-delusion – that performance in academic tests predicts future success in life – is at the heart of what Guy Claxton described at the start of this book: swathes of children feeling inadequate as a result of the false elevation of the intellectual over the practical. The consequence of this denigration of vocational skills, from successive governments, couldn’t be more ironic: the very jobs that are hard for machines to replace are of the ‘non-routine manual’ variety (electricians, plasterers, plumbers). Yet our current system is churning out students who can do ‘routine cognitive’ tasks (office admin workers who process information) that robots can do miles better than humans.
We all want our children to be secure later in life. So, what does indicate their future life chances? Over the past couple of years, a number of studies have pointed to two highly correlated indicators: reading for pleasure and student engagement. Numerous studies have linked the so-called ‘reading quotient’ (the amount of leisure reading a child engages in) with academic performance and career/college readiness. Equally, a twenty-year longitudinal study found that children’s interest and engagement in school influences their prospects of educational and occupational success twenty years later, over and above their academic attainment and socioeconomic background. In case you missed that: it found that exams were a poor proxy for kids’ life chances, and that an engaged child from a low socio-economic group would fare better, twenty years on, than a disengaged child from a middle-class background.
In the context of seemingly ever-widening social inequality, doesn’t it make sense to focus our energies upon the things that we currently don’t measure but that clearly make a difference to kids’ long-term future prospects, like reading for pleasure and being engaged in school? Ah, but there’s the rub – long-term. Everything about our education system has to be judged in the short-term. Education ministers get four to five years (if they’re lucky) to make their mark and be judged accordingly; schools are judged by their ability to push their students one more rung up the ladder (high school or college), and colleges are judged by short-term employability rates. How different would schools’ priorities look if they were judged by their students’ life prospects 10, 15 or even 20 years after they left, rather than by last year’s exam results?
Does it count for employers and colleges?
Currently, the only truthful answer is yes. Exam grades still open or close doors. But talk to any college admissions officer, or any head of human resources, and they’ll tell you that it’s a question of convenience, not preference. Faced with thousands of applicants, some filters have to be applied – but no-one is very happy with the current system of grade cut-offs. That dissatisfaction is only going to grow, as the cost of making the wrong selections outweighs the convenience.
Some of the world’s biggest corporations, including Google, Ernst & Young, Apple, Costco and IBM, are no longer interested in whether an applicant has a degree, arguing that it’s not what you know that matters, but what you can do with what you know. So, the jobs of the future will increasingly ask for an applicant’s portfolio, or their networks, or their LinkedIn recommendations, rather than a qualification. Firms like Entelo help companies such as Cisco, Sony, Netflix, United Airlines and Tesla overcome this dissatisfaction with the traditional CV/degree selection process, by using incredibly sophisticated software that ‘mines every social network on the internet to identify hundreds of millions of potential candidates, then uses predictive analytics to identify the best fit according to criteria set by the client’.
When the number of companies using big data, rather than student grades, to identify talent reaches a tipping point, then the whole edifice of standardised testing ceases to have relevance. Employers no longer rely on qualifications, so colleges and schools have to re-think assessment requirements. The ability to recall and regurgitate in a timed exam disappears – acquiring skills, learning dispositions and building a portfolio replaces test prep. Some visionary schools and universities already do this, but they’re still the beautiful exceptions. In time, it will become the norm.
Does it count for schools and teachers?
Absolutely. With no imminent relaxation of high-stakes accountability on the horizon, it takes a brave school leader to speak out. One who did was Rachel Tomlinson, headteacher of Barrowford Primary School in Lancashire, England. Writing to parents following their Standard Attainment Tests, Rachel praised the children’s efforts, but made it clear that the school did not see their scores as representative of their true talents:
“We are concerned that these tests do not always assess all of what it is that make each of you special and unique. The people who create these tests and score them do not know each of you – the way your teachers do, the way I hope to, and certainly not the way your families do. They do not know that many of you speak two languages. They do not know that you can play a musical instrument or that you can dance or paint a picture.”
The letter was picked up on Twitter and went viral. Rachel, astonished by the reaction, soon found out that public disagreement with ‘what counts’ has consequences. Within a year, the schools inspection agency, OFSTED, had visited the school and judged it ‘inadequate’, commenting that “The headteacher’s leadership has emphasised developing pupils’ emotional and social well-being more than the attainment of high standards.” Curiously, the school’s previous inspection, in 2012, had judged the school ‘good’ – the same conclusion as the most recent inspection, in 2016. Good in 2012. Good in 2016. Inadequate in 2015, the year after speaking out against high-stakes testing. Coincidence?
Another leader who has tried to balance the development of future-focused skills with the need to achieve acceptable standardised test scores is Mark Moorhouse, headteacher of Matthew Moss High School (MMHS) in Rochdale, England. Serving a neighbourhood of high diversity and low socioeconomic status, the school has prioritised the development of self-directed learning skills. An independent report from the University of Bristol on how its students fared beyond school found that such a focus meant that MMHS students who went on to further study ‘performed at a higher level than comparable cohorts of students from other schools’, and that ‘this capability stays with them in their onward destinations in formal education’. This commitment to students’ long-term prospects gains no credit and shows up on no report cards. But schools like Matthew Moss do it anyway – because it’s the right thing to do. And if the system could end its fixation with such a narrow set of measures, as Mark Moorhouse argues, we’d liberate our schools and our young people:
“It is entirely possible for schools to deliver academic excellence within a developmental experience which equips young people to thrive in the shifting world of the 21st century. And with a green light from those in power, they would, producing huge social, economic and personal benefit.”
With more UK teachers leaving the profession than joining it, there’s an urgent need to ask why it’s become such an undesirable occupation. A Guardian survey last year found that 43% of teachers in England’s state schools were planning to leave within the next five years. So, a crisis looms. When asked why they’re leaving, teachers almost always cite workload as the primary reason. But it’s not simply the amount of work; it’s what they see as the pointlessness of it: the data-collection, the target-setting, the form-filling. All of it created to ensure that OFSTED inspections are passed, evidence can be produced, results can be defended, and students can be adequately prepared for tests that meet no need other than politicians’ need to ‘make schools accountable’.
Zoe Brown, one of those who left the profession in 2016, told The Independent newspaper: “In some ways I don’t feel like a teacher at all anymore. I prepare children for tests and, if I’m honest, I do it quite well. It’s not something I’m particularly proud of, as it’s not as if I have provided my class with any transferable, real-life skills during the process. They’ve not enjoyed it, I’ve not enjoyed it, but we’ve done it: one thing my children know how to do is answer test questions.”
Does it count towards a future-ready society?
The point of intense frustration for many is that it doesn’t have to be like this. As we’ve seen, technology now enables employers to harness the incredible power and sophistication of social and predictive analytics to hire just the right employee, instead of the one with the best grades, or resume. Those same technologies could be used by schools to provide a complete, individualised picture of a student’s growth – academic, vocational, creative, personal and social, among many other datasets – instead of pigeonholing them by their grades. In no way should this make schools any less accountable. Indeed, their accountability could only widen as a broader range of stakeholders have a profile of the student’s progress. Parents, potential employers, college admissions staff and not least students themselves, would all have the information they need to have rich conversations about students’ unique talents, their ability to work in teams, what they’ve made and the networks they’ve built, their resilience when handling setbacks, their commitment to learning outside, and beyond, school – and their capacity, as Valerie Hannon said earlier, to thrive.
Some of the following essays will argue that the impact of testing is damaging to the well-being and mental health of our children. I’ve tried to argue that what we’re measuring increasingly counts for less and less. That it tells us next-to-nothing about: what employers are looking for; where students’ talents lie; their long-term prospects; how effective schools are in growing them as engaged, culturally aware citizens; our national prosperity; and their readiness for the workplace or further study. The high-stakes accountability attached to what we’re measuring is a corrosive, corrupting force that is ‘de-moral-ising’ too many educators into gaming the system, simply to survive.
Yet still we persist. The emperor has no clothes, and it’s time someone told him. Does it count? You bet it does.
David Price is an author, speaker, consultant and trainer. He specialises in helping organisations – schools, colleges and commercial organisations alike – prepare for a complex future. His book, OPEN: How We’ll Work, Live and Learn In The Future, has been an Amazon bestseller since its publication in 2013. In 2009 he was awarded the O.B.E. by Her Majesty the Queen for services to education. He works mainly in the UK, Australia, the USA and across Europe, training teachers, working strategically with education departments, and speaking at international conferences. He is a board member of the Canadian Educators Association and of VEGA Schools in India.