(The Language Connection | INDIRECTLY SPEAKING / Examining university English entrance exams | Daily Yomiuri, Jun. 27, 2011)
It’s easy to find commentary about university English entrance exams. The Internet is full of prospective and past students, as well as teachers, going over the minutiae of just about every such test in the country. A lot of criticism of these exams can also be found, much of it justified, as many are poorly written. After all, many committee members are not familiar with good testing practices and might even have been placed on the entrance exam committee unwillingly.
Surprisingly, though, very little is written about how to design such exams so that they are valid and reliable. Administrators tend to say little on the matter, merely exhorting test makers to avoid mistakes. Test makers, meanwhile, rarely reveal their identities, a cloak of secrecy that leaves little room for discussion of how to make tests better. Yet such discussion is precisely what many test makers need, since preparations start as early as June. So perhaps it would be useful to talk today about what makes a good English entrance exam.
Let’s start with the big picture. What is the purpose of giving such an exam? Answering with, “because it’s always been done,” “because it makes students study” or “because it generates income for the institution” is unhelpful. Since a test is valid only if it succeeds in meeting its purpose, an absence of clear purpose leads to exams that lack validity, meaning the most worthy examinees won’t necessarily succeed.
Ideally, one should use entrance exams as a means of identifying the students most suitable for future study in the department, school, or university. But just what does this mean?
Let me answer this first by saying what it doesn’t mean. It doesn’t mean that the entrance exam should be a measure of real-world English competence. The ability to function in practical, real-world circumstances is a vital skill, but entrance exams are for entry into academic institutions in Japan, not preparation for homestays or work abroad. Nor is an entrance exam a summary of high school achievement in English. It should be forward-looking, more of a placement test than an achievement test, not a culmination of secondary school education. Nor is it a glorified TOEIC test or a measure of receptive, discrete-point knowledge on a topic.
A good English entrance exam should tell you something about the academic abilities of a prospective student. Thus, it should be academic, not informal. But “academic” need not imply that it be dry, focus upon arcane detail, or be couched in language and tasks that would flummox a PhD. Rather, it should aim to measure the candidates’ ability to think and communicate intelligently, manipulating their English skills and knowledge. You should hope to see strategic and problem-solving competence, not merely random knowledge of facts about English or awareness of the language’s obscurities. This can also reveal the candidates’ personalities.
A good entrance exam should measure a variety of English skills (after all, there are many learning styles). A good test should attempt to engage higher levels of cognition, such as recall and reproduction, rather than mere recognition. A good test will have as many productive tasks, where the examinees have to create and produce language, as passive, receptive items, where the text is fully created and controlled by the test makers. A good test will measure the students’ abilities to summarize, predict, respond appropriately, create and extrapolate meaning, and paraphrase, not merely translate. A good test will allow room for self-expression, strategic thought, and expansion of content. A good test asks examinees to understand and interpret, not just to skim and scan, with a focus on comprehending gist as well as specific information.
The texts of a good test should address a variety of themes, topics, and genres (narratives, reports, opinion pieces, fiction, visuals) using a variety of task and question types. It should never be predictable. It should demand flexibility and finesse from the examinees rather than test-wiseness, the ability to play along with the testing system.
Reliability is compromised when tasks focus upon narrow or esoteric content rather than challenging wider-ranging skills. When questions are extremely tough (some seem only to show off the test maker’s knowledge) and have very specific answers, you may find that 95 percent of the candidates miss them, whether they are good students or not. Such items reduce all the participants to the same level and fail to separate the better candidates from the lesser. To stratify examinee results effectively, a better aim would be 25 percent difficult items, 50 percent average, and 25 percent relatively easy.
Question items with wider-ranging, more relevant applicability are to be preferred. For example, knowing how to combine sentences into a paragraph using appropriate connecting forms is a more meaningful, holistic skill than knowing the meaning of obscure, specialized vocabulary items.
Too many multiple-choice or matching-type questions add randomness, increasing unreliability. With four choices available, even a trained monkey will be able to score 25 percent. When success may depend upon a mere one-point differential, you don’t want randomness to become a determining factor.
A good exam considers weighting, umm, heavily. Not all items and tasks should have the same value. Too much weight concentrated on one skill or task can threaten test validity.
A good exam should have some flow, some sense of direction. It should hold together as a unit and not be just a collection of discrete questions (which tends to happen when too many hands are on the test committee).
Any English exam reflects a view of the language. An entrance exam that emphasizes randomized, discrete-point knowledge exposes what your institution thinks English is. So, if you regard language as holistic and dynamic, as an active means of communicating meaning, then your exam should reflect this.
Finally, entrance exams often have a washback effect on lower levels of education. Poorly designed tests that are simply collections of limited-response questions, or that reward rote memorization, will adversely affect how English is studied and taught at those levels. Well-designed English entrance exams, on the other hand, will have a positive effect on English teaching.
Guest is an associate professor of English at Miyazaki University. He can be reached at firstname.lastname@example.org.