
The Problem with Cambridge English Exams

No general proficiency exam can fully indicate a candidate’s competency in the specific contexts in which they need to use the language. Nevertheless, different types of assessment reflect general proficiency more or less accurately, and this makes a difference both to the outcome of a candidate’s exams and to the lives of the students who must study for them.

Cambridge English claims that its exams focus on real-life communication skills. I argue that no part of the exam – not even the ostensibly communicative parts – actually relates to such skills. If anything, the Cambridge exams are strikingly uncommunicative in their format and content.

My comments refer to B2 First and C1 Advanced, which largely overlap in format, but many could also be applied to C2 Proficiency.

Reading and Use of English

It is reasonable to expect that if you are going to be tested on your knowledge of some specific area, there will be a syllabus indicating what might come up. For Parts 1-4 of the Reading and Use of English test, Cambridge does not provide any such indication. At B2 and C1 level the range of lexis and grammar may be too vast for this to be possible, since any item of language associated with the relevant CEFR level could come up. In practice, this means that learners have no way of preparing for these parts. Even if they could, the questions sometimes require the learner to make distinctions that not even native speakers would be able to make. Take this example from Part 1 of a B2 practice test:1

[Extract from Part 1 of a B2 First Reading and Use of English practice test]

For question 1, the answer is C. This is obvious to anyone who knows the collocation. But ‘the lightest sound’ or ‘the weakest sound’ would be unobjectionable: the meaning is clear, even if they are less natural than ‘the slightest sound’. For question 2 the answer is B, ‘revealed’. But the difference between ‘revealed’ and ‘uncovered’ in this context is barely detectable, and ‘exposed’ would only sound slightly unnatural. Wrong answers in Part 1 are usually wrong because they are not idiomatic; in other words, these questions are focussed on form, not meaning. So in addition to being impossible to prepare for, the questions here do not assess one’s ability to communicate – they assess accuracy.

Part 2 (open cloze) tests knowledge of grammar – things like conjunctions, determiners and pronouns – and lexicogrammar – such as phrasal verbs and linkers. This is useful language, obviously. But being able to recall bits of it for a gap-fill isn’t equivalent to knowing how to interpret it when reading, nor how to use it in one’s own writing. So Part 2 is a test of the ability to conjure up certain words from contextual cues – a skill almost never required in real-life communication. Part 3 (word formation) has a similar problem: learners might, in a different and more meaningful context, be capable of both interpreting and producing the elicited words, but not have the metalinguistic knowledge to make connections between the missing words and their ‘stems’ as the task requires. So here again, the test is not testing communication skills – it is testing explicit language knowledge.

Part 4 (key word transformation) is even worse in this regard than Part 2. Again, instead of testing ability to use language in a meaningful context, it merely tests whether students can think up a phrase based on a prompt. Often the phrase or the way it is prompted is rather obscure – as in this example:2

[Key word transformation example from Objective Advanced]

It’s like a crossword puzzle: fun as a game, but horrible as a high-stakes exam.

Parts 5-8 for C1 (or 5-7 for B2) involve reading longer passages, so one might hope that there would be a greater focus on meaning in this part of the exam. Yet in fact, many of the questions merely test the ability to find specific information in a text3 – which amounts to identifying parallel expressions and not much else. This sort of question is common in the IELTS reading test too, but at least IELTS can require one to draw subtler inferences about the texts.4


Part 7 (gapped text – part 6 in B2) is another ‘word game’ kind of task: getting the knack for solving such puzzles might be enjoyable, but that is not equivalent to being able to do anything in English. One could read, appreciate and follow the story of the text in an ordinary context, yet still be unable to perform the task. This is because the way one reads while trying to perform the task is completely different from real-life reading. One is not reading out of curiosity, nor to follow an argument, nor to get information, but instead scanning for clues about how the text might be structured. It tests understanding of text cohesion. Even if this is a necessary part of reading proficiency, the test requires one to make such understanding explicit by learning techniques for completing the task. So Part 7 is a test of exam technique, not communicative reading skills.

Speaking

To some extent there is no avoiding the fact that speaking tests are uncomfortable. It is not easy to have a normal conversation in which one freely shares one’s ideas while also knowing that one is being tested. But the Cambridge exams take this inherent awkwardness and go out of their way to make sure it’s felt by everyone involved. The main way they do this is by having two students interviewed by one examiner. In Part 2, each candidate comments on a set of pictures – normally of people engaged in everyday activities – and then answers a general question related to the other candidate’s pictures. There is no imaginable real-life context in which one might be expected to do anything similar to this. So this part of the test is not communicative. Against this, one could argue that Part 2 tests linguistic functions such as speculation, suggestion and deduction, which are part of real-life communication skills. But testing a linguistic function is not the same as testing the ability to perform a communicative task.  

Candidates then have to do a ‘collaborative task’ in Part 3, in which they ‘come to a decision’ about some question given to them by the examiner. The thought is that they might have a discussion in which they agree on some ideas, disagree on others, and negotiate towards an outcome. Materials created for B2 and C1 preparation courses often have sections devoted to useful language for agreeing, disagreeing and negotiating, with this part of the exam in mind. Inevitably, during the exam no one dares to make any of the critical comments for which such language would be useful. Instead, candidates croak out their affirmations of each statement made by the co-examinee and offer up generic remarks of their own.

Writing

My main objection to the Part 1 essay task is that it has overly detailed instructions – particularly in C1. It is as if the question were intentionally convoluted so that students fall into the trap of misunderstanding it – which is what almost anyone would do, regardless of their English level, unless they were trained not to. Once you do get your head around all the requirements of the question, the essay is effectively planned for you: you have no leeway to structure your writing in the way you want, or to put forward the opinions and ideas you actually hold about the topic. At the beginning they even add a statement of the form: “you have been discussing topic X in class with your teacher.” So, just in case you had any reasons of your own to be interested in the topic, they make sure to take those away from you as well. To the extent that the detailed rubrics do the planning for students, the exam is also a worse test of the ability to write – of which planning is a basic subskill.5

Part 2 does offer some choice of what to write about, a fact likely made necessary by the tasks being only loosely connected to reality. On turning to Part 2 of the writing test, students discover to their surprise that international journals want information, from them personally, on transport facilities in their hometown. Magazines want reviews of a historical drama featuring a character who influenced their views on modern life. Town councils are clamouring for their opinion on how to create more green spaces in the neighbourhood. There is an attempt to engineer a communicative context here, but it is a feeble and out-of-touch one.

Listening

In the land of Cambridge Listening, casual friends have lengthy exchanges about the secret motivations behind college publicity material. People leave minute-long answering machine messages, conveniently fitting the format of Part 1. Photographers accost actresses in clear, full sentences, and are then rebuffed in clear, full sentences. In contrast, the IELTS listening test at least involves the plausibly realistic contexts of training and education. It is at least believable that you might need to take notes on a university lecture, or to understand a conversation about train times. The Cambridge listening texts do not have a clear focus – they could be about anything so long as it fits the exam format. For the monologues, any person talking will do, and there is rarely any communicative purpose to their talk. It will be things like, ‘people talking about their best friend’, or ‘a person describing his job’.


Even an authentic listening text can be made inauthentic if it is too far removed from its original context. The task which students are asked to do with it – for example, answering comprehension questions – can also make listening material less authentic.6 The Cambridge listening tests are not only inauthentic in these respects. The recordings themselves are designed around sets of questions, the answers to which are predictably nested among distractors so as to catch out the less exam-savvy candidates. It is not a test of listening skills but of exam technique.

Conclusion

Why does it matter if the Cambridge exams are not communicative? It matters because learners who have had a large amount of formal instruction – and therefore plenty of explicit language knowledge – are at an advantage in the exam compared to those who have not. The problem is that most of the purposes for which we value proficiency in a language – forming relationships, communicating at work, educating and entertaining ourselves – do not require any of that explicit knowledge. More importantly, the issues with the format and content of these exams mean that students who invest time and money in preparing for them spend little of that time actually learning English. Instead, they have to spend their time highlighting distractors in listening transcripts and discussing whether bullet points are acceptable in the Proposal genre.

To solve these problems, Cambridge could, for instance, further reduce or remove the Use of English component, simplify the Speaking and Writing sections, and prioritise authenticity over standardisation of format.7 As they are, the exams may have value for students who want to check their level or work towards a goal while learning English. But it is false to say that they assess real-life communication skills.

References

  • Breen, M. P. (1985). Authenticity in the language classroom. Applied Linguistics, 6(1), 60–70.
  • Grossman, D. (2010). The Validity and Reliability of the Cambridge First Certificate in English. Centre for English Language Studies, University of Birmingham. Masters in Teaching English as a Foreign or Second Language. Available here (PDF). [Last accessed 10th June 2020.]
  • O’Dell, F., and Broadhead, A. (2014). Objective Advanced Student’s Book. 4th ed. Cambridge University Press and UCLES.
  • Osbourne, C., Chilton, H., and Tiliouine, H. (2015). Exam Essentials Practice Tests. Cambridge English: First (FCE) 2. National Geographic Learning.
  • Roberts, C., and Cooke, M. (2009). Authenticity in the Adult ESOL Classroom and Beyond. TESOL Quarterly, 43(1), 620–642.
  • Taylor, D. (1994). Inauthentic authenticity or authentic inauthenticity? TESL-EJ, 1(2), 1–11.

Notes

  1. Osbourne, C., Chilton, H., and Tiliouine, H. (2015). Exam Essentials Practice Tests. Cambridge English: First (FCE) 2. National Geographic Learning, p. 8.
  2. O’Dell, F., and Broadhead, A. (2014). Objective Advanced Student’s Book. 4th ed. Cambridge University Press and UCLES.
  3. Grossman, D. (2010). The Validity and Reliability of the Cambridge First Certificate in English. Centre for English Language Studies, University of Birmingham. Masters in Teaching English as a Foreign or Second Language. Available here (PDF), pp. 18–19. [Last accessed 10th June 2020.] As another example, Part 5 of the practice test provided on the website for C1 Advanced is entirely based on finding details about the text, perhaps with the exception of the last question.
  4. IELTS reading often requires learners to recognise opinions or ideas, and at least one of the reading passages has a detailed logical argument. Another difference is that IELTS reading tests have more than one type of question for each section. Perhaps this is because certain parts of a text lend themselves to particular types of question. In other words, the questions are better adapted to the texts. The full A4 pages of identically formatted questions in Reading and Use of English are stuffy and contrived by comparison.
  5. Berninger, V. W., Fuller, F., and Whitaker, D. (1996). A Process Model of Writing Development Across the Life Span. Educational Psychology Review, 8(3), 193–218, pp. 194–195 and 197–198.
  6. Breen, M. P. (1985). Authenticity in the language classroom. Applied Linguistics, 6(1), 60–70, cited in Roberts, C., and Cooke, M. (2009). Authenticity in the Adult ESOL Classroom and Beyond. TESOL Quarterly, 43(1), 620–642, p. 622.
  7. If a less predictable format makes the test harder to prepare for, this effect could be mitigated by simplification. For example, the Part 4 listening questions – with Task One and Task Two both applying to the same recording – could be removed.

 
