
by Steve Hirschhorn

Testing Language

Language tests exist for several main purposes: placement into a language course, assessing the suitability of job candidates, assessing suitability for a programme of study, assessing progress over a period of language study, and assessing progress at the end of a period of language study. All of these require a slightly different approach to the test content: some examine previously studied material, some assess functional and lexical ability, and some assess academic language or some other particular style or register.

All will almost certainly scrutinise grammatical knowledge, defined as “producing or comprehending formally accurate utterances or sentences. This includes knowledge of vocabulary, syntax, phonology and graphology.” (Bachman and Palmer, 1996, p. 68). Note the use of the term ‘formally’; it is significant.

Description and Procedure

The notion of testing speaking has been around for a long time, and yet it has often focused too heavily on accuracy despite some authors’ attempts to shift attention onto realistic language use. Weir puts it this way: “To test whether learners can speak, it is necessary to get them to take part in direct spoken language activities.” (Weir, 1993, p. 31). So we have been aware for some time that, in order to test speaking, we should be placing participants in real-life situations which allow them to demonstrate their abilities.

But their abilities to do what? To show they can use the 2nd conditional? To accurately manage the past perfect?


As early as 1980, although discussing written language tests, Wilkinson et al. were arguing that the formal testing of aspects such as sentence complexity should be augmented by taking into account quality of thought, feelings and style, thereby already challenging the traditionally accepted parameters.

In fact, many language tests assess grammatical accuracy and little else, even if they purport to do more. Perhaps this is because it is relatively easy to do so: grammar can be reduced to a right/wrong answer, even if I would argue that it shouldn’t be. All too often, little or no account is taken of a learner’s ability to carry out a communicative transaction successfully. And yet grammatical accuracy is a tiny part of what is required to achieve that desired outcome; it could be argued that lexis is much more important, since we can often make our meaning clear using lexical units alone.

In addition, the ability to use whatever other verbal and non-verbal strategies we may possess to support our communication can be invaluable. Some of these tactics fall into the category of Strategic Competence: the capacity to use almost any device to assist our communication when we don’t have, or can’t access, the right form or term.

Unfortunately, this trend towards more realistically representative language tests was not widely taken up. Perhaps this is partly explained by Harris’s early observation that “No language skill is so difficult to assess with precision as speaking ability” (Harris, 1969, p. 81). This is self-evident, but it surely depends on exactly what our assessment criteria are. Since the late 1960s we just might have developed our ideas on this topic. Or have we?

Rationale for a different style of test

If we are to value language use in terms of the communicative ability to achieve an aim, then it is eminently possible, and perhaps desirable, to assess functional, communicative language and strategic competence (SC): the capacity to make oneself understood using whatever means are available, including mime and gesture for productive skills (PS), and questioning, asking for repetition and regrading for receptive skills (RS).

We need not completely abandon grammatical competence but instead subordinate it to functional, communicative, intelligible use.

In the real world, most English language learners will communicate with non-native speakers (NNSs) rather than native speakers (NSs), whether for business or in whatever their field may be. Even visiting an English-speaking country doesn’t guarantee that they’ll be speaking with NSs, so why should we overrate the need to produce native-like utterances or understand native speakers? Surely we should be aiming for a broad production capacity and a flexible, interpretative receptive ability. In other words, our students should be able to communicate in a great variety of contexts.

It is also true that real-world language is less grammatically accurate than we may think; most informal NS language acts contain ‘errors’, repetitions, false starts, interruptions and pauses which are almost never represented in English Language Teaching (ELT) or in formal testing situations. As an example, NSs rarely use a ‘pure’ conditional form; they tend to pick and mix, so to speak. The emphasis is still very much on ‘correct’ grammar, both in ELT and in testing, probably because grammar is accessible (as noted above), can be judged as ‘right’ or ‘wrong’, and still represents for many the rule base of the language, though these notions are easily challenged…

If we are to change our current testing model, then we can examine the possibility of assessing the success of a communicative language act by observing the result. If I want to buy an umbrella in Brazil but don’t know the word for that item, I can use mime, gesture and whatever words I do have to achieve the aim; if I come out of the shop with my new umbrella, I have succeeded!

Strategic Competence

So what exactly is SC? Here I offer some of the positive features which comprise SC.

Positive features

 

  • Deliberate error: the learner uses an item which s/he knows is incorrect but will get the message across.
  • Description: the learner describes the object perhaps also using mime or gesture.
  • Circumlocution: the learner finds a way around the item (lexis or structure).
  • The learner asks for help.

 

The positive features of SC contribute to the overall achievement of a goal, albeit without necessarily relying on a formal, structural support system. But whatever level a second-language speaker may be at (indeed a first-language speaker too!), SC can play a significant role in one’s interactions, so why not include it in a teaching programme, and why not assess it?
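To make the checklist idea above concrete, here is a minimal sketch, in Python, of how an examiner’s tally of these positive SC features might be recorded and turned into a simple score. The feature names, the one-point-per-use scoring and the SCObservation class are hypothetical illustrations, not the actual mark scheme of the test described below.

```python
# A minimal, hypothetical sketch of an SC observation checklist for one candidate.
from dataclasses import dataclass, field

SC_FEATURES = (
    "deliberate_error",   # uses a known-incorrect item that still carries the message
    "description",        # describes the object, possibly with mime or gesture
    "circumlocution",     # talks around the missing word or structure
    "asks_for_help",      # appeals to the interlocutor for the missing item
)

@dataclass
class SCObservation:
    """Tally of strategic-competence behaviours observed during a paired task."""
    counts: dict = field(default_factory=lambda: {f: 0 for f in SC_FEATURES})

    def record(self, feature: str) -> None:
        if feature not in self.counts:
            raise ValueError(f"Unknown SC feature: {feature}")
        self.counts[feature] += 1

    def score(self) -> int:
        # Simplest possible aggregation: one point per observed use of a strategy.
        return sum(self.counts.values())

# Example: a candidate circumlocutes twice and asks for help once.
obs = SCObservation()
obs.record("circumlocution")
obs.record("circumlocution")
obs.record("asks_for_help")
print(obs.score())  # 3
```

A real rubric would no doubt weight strategies by how successfully they carried the message, but even a crude tally like this makes SC visible in a way a purely grammatical score cannot.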

The Communicative Language Test

The test I have created and deliver is divided into PS and RS sections. Students, working in pairs, are given tasks which require them to listen, recognise, react, interact and complete a number of communicative tasks successfully. The students are grouped into four age ranges to ensure a) relevant topics, b) appropriate concentration time and c) appropriate tasks.

Thus the youngest group, 7–10 year olds, has about 10 minutes per pair in the face-to-face testing context; 11–14 year olds have about 13 minutes, 15–18 year olds have 15 minutes, and adults have about 20 minutes of testing time, though a few minutes to relax and settle are of course programmed in for all groups. The ability to achieve the communicative aims is measured and valued as the most important feature, but accuracy in grammar and phonology is not abandoned: it is assessed, though on the basis of intelligibility rather than proximity to some random NS model.

Students cannot fail; instead, they receive a result based on a fine-tuned CEFR model plus an SC score. Students of all ages also benefit from a test which gives them an opportunity to tackle real-world tasks, preparing them (it might be argued) for the real world! Indeed, students can take this test year after year if they wish, thus monitoring their own development while using slightly different materials each year.
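For readers who like to see the structure spelled out, here is a second small sketch of how the age-band timings and the ‘no fail’ result described above might be represented. The timings come from the description; everything else (the AGE_BANDS and TestResult names and the report format) is a hypothetical illustration, not the test’s actual reporting system.

```python
# Hypothetical representation of the test's age bands and result profile.
from dataclasses import dataclass

# Approximate face-to-face time per pair, in minutes, by age band.
AGE_BANDS = {
    "7-10":  10,
    "11-14": 13,
    "15-18": 15,
    "adult": 20,
}

@dataclass
class TestResult:
    cefr_band: str   # e.g. "A2+" or "B1" on a fine-tuned CEFR scale
    sc_score: int    # strategic-competence tally from the observation checklist

    def report(self) -> str:
        # No pass/fail: every candidate receives a profile they can compare year on year.
        return f"CEFR {self.cefr_band}, SC {self.sc_score}"

print(AGE_BANDS["11-14"])            # 13 (minutes per pair)
print(TestResult("B1", 3).report())  # CEFR B1, SC 3
```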

The next test dates will be in Bulgaria during June, organised and supported by English@ElpidaS language centre in Sofia but delivered in two different cities.

Please feel free to contact me for further information on delivering this new testing suite in Bulgaria or wherever in the world you are.
[email protected]


Bibliography

Bachman, L. F. & Palmer, A. S. (1996) Language Testing in Practice. OUP.
Harris, D. R. (1969) Testing English as a Second Language. McGraw-Hill.
Rivera, C. (1983) An Ethnographic Sociolinguistic Approach to Language Proficiency Assessment. Multilingual Matters.
Stern, H. H. (2001) Fundamental Concepts of Language Teaching. OUP.
Weir, C. (1993) Understanding & Developing Language Tests. Phoenix ELT.
Wilkinson, A. et al. (1980) Assessing Language Development. OUP.

 
