There are few things more disheartening in my work life than having to spend precious time unpicking and rebutting the destructive work of high-status academics at elite institutions, in the hope that it won’t undo years of hard-won progress toward better reading instruction and outcomes.
By Jennifer Buckingham
The latest example is a paper by Professor Dominic Wyse and Professor Alice Bradbury of the Institute of Education, University College London. Their paper, ‘Reading wars or reading reconciliation: A critical examination of robust research’, was published in Review of Education (2021) and described in a report in The Guardian as a ‘landmark study’.
It is not a landmark study. It’s Groundhog Day – another paper in a long line of studies and reports that try to prove that synthetic phonics is ineffective.
This is not the first time that I have written about work of a questionable standard from UCL’s Institute of Education (IoE). In 2019, researchers from the IoE published a study purporting to show extremely large, long-term benefits of participation in Reading Recovery. In reality, the study deliberately excluded an entire inconvenient group of students whose results undermined this conclusion, without declaring the omission in the published reports. When the methodological parlour trick was revealed, the people involved did not deny it. What happened to them and the report in the aftermath? Nothing. Everyone just carried on as though it had never happened, and Reading Recovery continues unscathed.
It is therefore with a sense of resignation that I nevertheless go to the effort of pointing out the critical problems with Wyse and Bradbury (2021). A number of others (Greg Ashman, Julia Carroll, Kathy Rastle, Michael Tidd, Rhona Johnston) have written excellent critiques that pick up similar issues, as well as others.
These are the main flaws in Wyse and Bradbury (2021) as I see them.
One: The selective review of literature
First, it is hard to imagine how the authors can justify not referring to three highly relevant papers: Machin et al. (2018), Double et al. (2019) and Stainthorp (2020).
There are probably some others that I have temporarily forgotten, but these three outstanding papers are directly relevant to the topic of Wyse and Bradbury’s paper.
Stainthorp (2020) is literally about the impact of literacy policies in England over the time period in question. It is published in the same issue of the same journal as another paper cited by Wyse and Bradbury (Solity, 2020 – which is also very good, by the way). However, Stainthorp (2020), Machin et al. (2018) and Double et al. (2019) all come to the conclusion that synthetic phonics has had an overall positive impact on reading outcomes in England.
To add insult to injury, Wyse and Bradbury give great credence to the work of Jeffrey Bowers, whose position on phonics instruction is in complete opposition to the rest of the scientific reading research community, and who admits he “is not so familiar with PA [phonemic awareness] research or practice”. Bowers is not Galileo; he just gets it wrong on phonics. Wyse and Bradbury mention the critique of Bowers’ work by Fletcher et al. (2020) but disregard it. I also wrote an article published in the same journal as Stainthorp (2020). You guessed it: I came to the conclusion that the evidence supports systematic, synthetic phonics.
Second, the selection of studies for the ‘systematic qualitative meta-synthesis’ needs to be brought to light. The studies deemed worthy of providing useful evidence about synthetic phonics came down to just eight in the final selection. The studies were drawn only from reviews by Bowers (2020) and/or Torgerson (2019), putting a lot of faith in these authors. Wyse and Bradbury further refined the list by excluding any study that did not include a measure of reading comprehension. Their rationale is that the ultimate goal of reading instruction is comprehension, so it is the only measure worth knowing. However, this ignores two important points. First, distal measures will always be weaker than proximal measures: yes, if students can decode, they are more likely to be able to comprehend, but other factors mediate the relationship and these variables are often omitted in analyses. Second, reading comprehension measures are enormously variable and unreliable, especially among young children. Depending entirely on reading comprehension measures is not a sound decision but, even so, many studies of reading programs that include phonics find improvements in comprehension.
Due to the very narrow (and, dare I say, not very systematic) method of selecting studies to review, one of the most important, and certainly most influential, studies of synthetic phonics instruction was left out – the ‘Clackmannanshire’ study in Scotland. It meets all the criteria set by Wyse and Bradbury: “longitudinal design, sample of typically developing readers, and reading comprehension measure” (p. 30). You guessed it again: the Clackmannanshire study found resounding positive results in favour of synthetic phonics instruction.
Two: The inconsistencies in the arguments
It is naïve to think that if something is in a national education policy document, that is what all teachers do.
Policy does not equal practice. We know this from the Year 1 Phonics Screening Check. Despite synthetic phonics having been in the literacy policy since 2007, in the first national implementation of the Year 1 Phonics Check in 2012, only 58% of students achieved the expected score. In subsequent years, when more teachers actually started teaching phonics effectively, the percentages of children achieving at or above the benchmark Year 1 phonics score increased steadily.
Wyse and Bradbury’s own survey proves that policy does not equal practice. Even though synthetic phonics is mandated policy, and the Wyse and Bradbury paper seems to make the case that synthetic phonics is the scourge of English society, only 66% of Reception and Year 1 teachers said that synthetic phonics is the main approach they use to teach phonics.
The paper says the 634 survey participants were recruited “via the network of affiliates of the authors’ research centre, and the networks of the affiliates, and via social media” (p. 31), but it does not attempt to demonstrate that they are a representative sample, so it is hard to know how much confidence to put in these findings. Even so, the fact remains that Wyse and Bradbury’s own data do not support their contention.
Further weakening the findings, Wyse and Bradbury change the survey question in their conclusions to be all-encompassing. In the body of the paper, the survey question is given as “How would you describe your main approach to teaching phonics?”. In the conclusion, they state that “The findings from the survey reported in this paper showed that synthetic phonics first and foremost is the dominant approach to teaching reading in England”. (My emphasis.) If one in three teachers say they are not even using synthetic phonics as their main approach to teaching phonics, it’s a giant leap to say it’s the dominant approach to teaching reading.
Three: They don’t seem to know what synthetic phonics is
There are numerous points throughout Wyse and Bradbury (2021) where I could take issue with the characterisation of synthetic phonics. Skipping to the point, the main problem is that they don’t acknowledge that it has never been advocated anywhere, in any policy document, or in any report or research paper, that synthetic phonics should be done in a meaning vacuum. Everyone who advocates for the use of synthetic phonics based on scientific research takes great pains to emphasise this.
The Rose report, which kickstarted the synthetic phonics implementation in England, could not have been clearer, saying:
In sum, distinguishing the key features associated with word recognition and focusing upon what this means for the teaching of phonic work does not diminish the equal, and eventually greater, importance of developing language comprehension. This is because phonic work should be time limited, whereas work on comprehension continues throughout life. Language comprehension, developed, for example, through discourse and a wide range of good fiction and non-fiction, discussing characters, story content, and interesting events, is wholly compatible with and dependent upon introducing a systematic programme of high quality phonic work. (Rose, 2006, p. 39)
Sir Jim Rose, with the patience and civility of a saint, has repeated and expanded on this in various eloquent ways on countless occasions.
Yet, throughout the paper, synthetic phonics is portrayed as being about something other than reading, as though being able to read words accurately gets in the way of real reading. Elsewhere in the paper, though, Wyse and Bradbury say “there remains no doubt that phonics teaching in general is one important component in the teaching of reading” (p. 41), but then, confusingly, that “the research certainly does not suggest the complete exclusion of whole language teaching”.
They seem to think that these two approaches are reconcilable, whereas phonics instruction is anathema to the philosophy and practice of whole language. Whole language does not simply mean including a variety of texts and literature in reading instruction; everyone agrees that is good. Whole language is an ideology and philosophy that unambiguously eschews explicit teaching of the alphabetic code. You can’t just take a little from Column A and a little from Column B, call it ‘contextualised teaching of reading’, and claim that it’s evidence-based (p. 42). That’s the sort of thing that has led to our current rates of entrenched illiteracy.
Perhaps the strongest indication that Wyse and Bradbury don’t have a good understanding of synthetic phonics is the way they describe the intervention used in studies by Vadasy and Sanders (2012):
Students assigned to treatment received individual systematic and explicit phonics tutoring instruction in English, which included letter-sound correspondences, phonemic decoding, spelling, and assisted oral reading practice in decodable texts. … In a typical tutoring session, paraeducators spent 20 min on phonics activities and 10 min scaffolding students’ oral reading practice in decodable texts. (Vadasy and Sanders, 2012, p. 990)
This description of instruction is straight-down-the-line synthetic phonics. However, according to Wyse and Bradbury, “These interventions are best described as balanced instruction orientation” (p. 36). This misconstrual of the paper’s central plank puts a big crack in its credibility.
Four: The muddled analysis of international assessments and curricula
A few key points:
Comparisons of PISA and PIRLS rankings are meaningless. The number of countries participating in these assessments changes with each cycle, so a country’s ranking can go down even if its scores stay the same or improve. Nonetheless, research by Double et al. (2019) (not cited in the Wyse and Bradbury paper) found that performance on the Year 1 Phonics Check is a strong predictor of PIRLS performance.
Attempts to draw a straight line between the introduction of early reading policies and national average scores on international assessments are inevitably tenuous. Wyse and Bradbury admit that there are positive correlations between PIRLS performance and periods in which there was a policy emphasis on phonics (p. 25). But they argue that PISA is a more valid source for their purposes because it has a longer time span, which is debatable. Phonics instruction policies affecting Reception and Year 1 will only have a discernible flow-on effect to PISA scores ten years later if a) phonics instruction is high quality, and b) the broader program of literacy teaching, both in Reception and Year 1 and in subsequent years, is also of high quality. Good synthetic phonics instruction will get more children out of the blocks than would otherwise have been the case (in Kareem Weaver’s great metaphor) but it can’t guarantee they’ll finish the race, especially if it’s a marathon. Even if we did think PISA scores at age 15 were a fair test of synthetic phonics instruction at age 5, we would have to wait until at least PISA 2024, because that will be the first cohort of students who performed well in the Year 1 Phonics Check and who we can more reasonably assume have benefited from good synthetic phonics instruction.
Wyse and Bradbury provide inconsistent interpretations of the research. In the discussion and conclusions of the paper, they say: “Our analyses of the PISA data suggest that teaching reading in England has been less successful since the introduction of more emphasis on synthetic phonics” (p. 43), but in the body of the paper they state “The PISA assessments and their reports provide an important international context for the reading debates, and a wealth of data for further analyses and, as we have shown, some correlations suggest an advantage for whole language orientation to the teaching of reading, but in the end they are not a sufficient way of determining which approaches to the teaching of phonics and reading are most effective in a curriculum” (my emphasis) (p. 28). Which is it?
Trivial but irksome mistake: “Australia has not reported state level outcomes in PISA or PIRLS.” (p. 13). Not true: see results from PISA 2018 and PIRLS 2016.
For a much better analysis of the relationship between phonics instruction and England’s national and international test scores, see Stainthorp (2020), some of which is summarised here if you can’t access it. See also the insightful policy analysis by Tim Mills.
Hopefully, the Wyse and Bradbury paper will not cause too much damage and disruption to the growing adoption of synthetic phonics as part of evidence-based reading instruction that is leading to better reading outcomes in England, Australia and elsewhere.
Dr Jennifer Buckingham (@buckingham_j on Twitter) is Director of Strategy and Senior Research Fellow at MultiLit.
This article appeared in the Aug 2022 edition of Nomanis.