Introduction
Three of our staff spent 6 months holding fortnightly meetings in a group called 'Easy Read: is there a better way?'. Our goal was to find ways of improving the quality of our Easy Read material. We began by acknowledging that Easy Read in its current form may not be going far enough, or performing well enough, to make information easy for people with learning disabilities to understand.
Our group had three stated aims:
To look at existing Easy Read guidance, to identify alternative styles and methods that we could adopt.
To assess our own Easy Read style and methods to see what we can do differently.
To test all our ideas with people with learning disabilities.
As our work progressed, so too did our aims. However, the core principles - to look at the ideas of other parties and to analyse our work - remained the same. In the following chapters of this report, we will discuss our progress on each of our aims.
Aim 1 - Looking at existing Easy Read guidance
There is no shortage of Easy Read guidance available online, produced by a variety of sources including charities, healthcare organisations and Accessible Information organisations. Much of the available guidance makes suggestions that align with our current practice. For instance, most guidance advised Easy Read producers to use: wide margins; left-justified text; pictures to the left of writing; a large, clear, sans-serif font; numerals for numbers; short sentences of no more than 15 to 20 words; bullet points for lists; and definitions for difficult words.
However, some guidance made suggestions that differed from our current ways of working. For instance, some guidance suggested:
Using real-life equivalents for weights and measures - for example, 1 litre is about 3 cans of Coke.
Repeating the section heading on every page of that section.
Cutting sentences that run over multiple lines at natural pause points in the sentence. For example:
The way this sentence is cut
is easy to read. (good - the line breaks at a natural pause)
The way this sentence is
cut is easy to read. (bad - the line breaks mid-phrase)
Formatting page numbers as '1 of 24', '5 of 16' and so on, so that people reading a printout know they have all the pages.
Setting text at size 14 to 16, whereas we use size 18.
We also found that there is some disagreement over the use of bold words. It is not uncommon for readers to interpret bold words as 'important' rather than 'difficult'. Happily, most difficult words that make it into an Easy Read document are also important. Nevertheless, it is clear that there is some confusion over exactly what role bold words play.
Some guides recommend a page at the beginning of each document that suggests how a reader can read it - e.g. not all in one go, or with the help of a support worker. We felt that this could be a useful addition to our documents, and worked on developing an opening page that explained: that the document is Easy Read; what Easy Read is for; how a reader might use an Easy Read document; and what bold words and links are.
In addition to the guidance created by organisations that create Easy Read material, there exists a corpus of academic research into Easy Read. Having completed our survey of online guidance, we undertook a form of literature review of this research, to better understand what we might learn from it. One aim was to find evidence for how different aspects of Easy Read help people with learning disabilities to better understand information.
Looking at the available research, the first thing we noticed was how little evidence there was for the benefit of the individual elements of Easy Read. For instance, while some research found that pictures helped clarify text, other papers suggested that images could overwhelm readers by causing 'memory overload'. Nevertheless, some consistent themes emerged:
Easy Read tends to work best for people with mild or moderate learning disabilities.
Users of Easy Read tend to understand more when they have support to read it.
Many of the facets of guidance used by Easy Read Online and other producers of Easy Read information are not rooted in academically tested evidence. They may have been developed in conjunction with people with learning disabilities, but there is no way of knowing how rigorous this development process was.
Users of Easy Read consistently self-report positive feelings towards Easy Read options. This is perhaps at odds with the lack of evidence backing up its usefulness but nonetheless points to the real-world value associated with the format.
Unfortunately, many of our key questions were not satisfactorily answered by the research. When researchers provided examples of the Easy Read documents they used to test the format, the quality of these documents fell well short of what we would consider to be good Easy Read. Only the broadest elements of Easy Read guidance were tested (e.g. the inclusion or exclusion of pictures). And, as more than one researcher pointed out, the overall corpus of research is severely lacking in depth.
It is reasonable to question and critique the usefulness of Easy Read. Easy Read Online has an obvious conflict of interest here, but we believe that Easy Read and other accessible formats are defined by their ability to impart complex information to people with learning disabilities; if they are failing to do that, then we must question their necessity.
However, we felt that some studies overstated the belief, on the part of Easy Read producers, that the format is a panacea that will result in universal, immediate understanding of any and all information. We know this is not the case. Many people with learning disabilities will struggle to read Easy Read information without support. Nevertheless, we believe that it can still aid understanding: by foregrounding the most important information, cutting through intimidating walls of text, and providing supporters with a simplified bridge for explaining information to a person with a learning disability in a more personalised way.
Aim 2 - Assessing our own work
Our second aim was to look at our own work, to find ways of improving it. We all felt that our work had been steadily getting better over recent years, but we had no way of demonstrating this. Furthermore, we were aware that we occasionally put out work that is not up to the standard to which we aspire. In order to improve this work, we had to understand it.
One common problem we identified in our own work is length. We regularly produce lengthy consultations that, we worried, are so overwhelming that no person with a learning disability is likely to complete them. Since such consultations are inherently interactive, we agreed to contact clients to find out whether these consultations ever got any responses. We were braced for disappointing results.
However, to our pleasant surprise, most clients who responded gave us positive feedback. Typically, Easy Read responses accounted for approximately 5% of total consultation responses. When very few responses were received, the client would usually provide an explanation that was out of our hands (i.e. one that related to the delivery, not the content, of the consultation document). Interestingly, there did not appear to be much correlation between the number of respondents and the length of the consultation.
These results should be presented with some caveats. We only spoke to clients who were happy to speak to us - perhaps if the client had received next to no responses, they might have been embarrassed to tell us. Nevertheless, this investigation was heartening.
We also identified the possibility of using scoring systems for text. Many writers, in many contexts, use scoring systems to analyse the complexity of a piece of text; this is common, for instance, in advertising and copywriting. The most widely used is the Flesch-Kincaid Grade Level, which expresses the complexity of a text as a number corresponding to American school 'grades', so that a score of 8 suggests text readable by an average eighth-grade student.
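For readers unfamiliar with the score, the short Python sketch below shows how it is calculated. The standard formula is 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The syllable counter here is a crude heuristic of our own, included only for illustration; dedicated readability tools use dictionaries or more careful rules, so their scores will differ slightly.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels, treating a
    # trailing silent 'e' as non-syllabic. Real tools use dictionaries.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid Grade Level formula:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Simple text scores low - roughly grade 4 to 5 with this heuristic.
print(flesch_kincaid_grade(
    "We looked at Easy Read guidance. We tested our ideas with readers."))
```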
One set of guidance gave 3.78 as the ideal Flesch-Kincaid score for Easy Read. We began comparing our work to this benchmark. Typically, our work had higher Flesch-Kincaid scores, averaging around 7. However, we became frustrated by the drawbacks of the score. It did not account, for instance, for the length of a piece of text. Indeed, it failed to account for so many of the variables we think about when creating Easy Read that we were unsure whether trying to bring our scores down would be of much use.
The score did, however, lead us to create a formula for a different score, one that would better represent the different factors underpinning good Easy Read. We developed a new system, called the Easy Readability Score, which looks at these factors and produces a score, ranging from 0 to over 100, that describes the quality of a piece of Easy Read. It takes into account the number of pages, the number of words, the number of sentences, and the Flesch-Kincaid score. We found this score to be an excellent analytical tool for examining the quality of our work.
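As we are not setting out the full formula in this report, the sketch below is purely illustrative: the weights are invented placeholders, not our actual formula. It is intended only to show the shape of the idea - combining page count, word count, sentence length and the Flesch-Kincaid grade into a single number, where lower means easier.

```python
def easy_readability_score(pages: int, words: int, sentences: int,
                           fk_grade: float) -> float:
    # Hypothetical sketch of an Easy Readability Score.
    # The weights below are illustrative placeholders, not the real
    # formula. Lower scores indicate easier Easy Read.
    words_per_sentence = words / sentences
    return (pages                     # longer documents are harder to finish
            + words / 50              # overall word count
            + words_per_sentence * 2  # long sentences are harder to follow
            + fk_grade * 5)           # complexity of the language itself

# A 20-page booklet of 900 words in 120 short sentences, at grade 4:
print(easy_readability_score(20, 900, 120, 4.0))  # 20 + 18 + 15 + 20 = 73.0
```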
We made some changes to our documents based on trying to reduce their Easy Readability Score. It helped us identify longer and more complex words to which we do not always pay attention. For instance, we no longer use 'Contents', 'Introduction' or 'For more information' as regular headings, instead using 'What is in this booklet', 'About this booklet' and 'Find out more' respectively. The score allows the process of simplifying information to become almost 'gamified', with producers encouraged to try to improve their score. This may allow the simplification of information to become an ongoing process, requiring fewer working groups to provoke it.
Next steps
An issue that we discussed early in this process was that of standards in Easy Read. What links our various enquiries is the lack of common standards governing the quality of Easy Read material. As it stands, anyone can produce a document, slap an 'Easy Read' label on it and charge for the service, regardless of its quality. Competition between providers should drive up standards, but because our direct clients are not people with learning disabilities, the market does not incentivise us to put them first. Our frustration at this state of affairs underpinned all of our research, from looking at guidance and research to see if our work was up to scratch, to inventing scoring systems to check our work and the work of others.
In our final meeting, we discussed the possibility of collaborating with other Easy Read producers to discuss these issues and others. We talked about hosting or supporting a conference, in which creators came together and, perhaps, agreed on standards. Such standards could be backed up by research. If every organisation agreed to them, then a baseline of quality could be assured. This would allow us to do more to put people with learning disabilities back at the centre of what we do.
This report has not discussed the Easy Readability Score in great detail, as it is an idea that we are committed to taking further. We will be reaching out to academics to look at the possibility of conducting more research, to expand the body of evidence surrounding Easy Read. The Score could become part of future standards, or it could be used internally. Either way, it will be a useful tool in measuring the quality of Easy Read work that we produce.