Teacher Newsmagazine   Volume 24, Number 4, January/February 2012  

Lower Mainland English Reading Assessment 

ESL assessment consortium 

In June 2008, representatives from 12 school districts and a university convened to establish the Lower Mainland ESL Assessment Consortium (www.eslassess.ca). The consortium’s goals are posted on the website. Its first activity was to review the assessment instruments and procedures employed in the separate districts to assess EAL students.

Assessment survey results 

Results were surprising. The Idea Proficiency Test (IPT) (Ballard, Dalton, & Tighe, 2001) was the measure most often used with primary level students. Other assessments were the Brigance (1983), the Bilingual Syntax Measure (Burt, Dulay, & Hernández, 1973), the Woodcock Reading Mastery Test (Woodcock, Various), the Woodcock-Munoz (Woodcock & Munoz-Sandoval, 1993), the Pre-IPT, the Comprehensive English Language Test (Harris & Palmer, 1986), informal reading inventories, the Waddington Diagnostic Reading Inventory (Waddington, 2000), the Alberta Diagnostic Reading Inventory, the SLEP, the Gap (McLeod & McLeod, 1990), PM Benchmarks (a system for placing students in levelled books), the RAD (Reading Achievement District—a local assessment measure), the Peabody Picture Vocabulary Test (Dunn & Dunn, 1997), and a variety of locally developed listening, speaking, reading, and writing assessments.

The wide diversity of assessment instruments and interpretation protocols represented a serious problem because of high student mobility rates. Since there was no assessment uniformity, students who transferred had to be assessed a second, and sometimes a third, time. Level distinctions also differed within and between districts, so level three, for example, did not always represent the same language abilities. There are serious consequences for learners when assessment results are neither valid nor reliable (Shohamy, 2000).


There was consensus among consortium members that a secondary-level measure (Grades 8 to 12) should be developed. Members agreed that the measure should be easy to administer and simple to score, and that scores should be categorized into four or five levels. They decided to develop a reading test based on the “maze” procedure, using text approved for the provincial curriculum; passages were therefore selected from science, math, social studies, and language arts materials. Four different readability measures (Fry, Flesch-Kincaid, Flesch Reading Ease, and Forcast) were used to estimate grade levels for the passages. Eight passages ranging from second- to twelfth-grade level were selected to form the assessment, with a time limit of 35 minutes for administration.
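To illustrate the two techniques mentioned above, the sketch below estimates a passage’s grade level with the published Flesch-Kincaid formula and builds a simple maze passage by replacing every nth word with a three-option choice set. The syllable-counting heuristic, the every-seventh-word interval, and the way distractors are drawn are assumptions made for illustration only; they are not the consortium’s actual procedures.

```python
import random
import re


def fk_grade(text):
    """Estimate the Flesch-Kincaid grade level of a passage.

    Formula: 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    Syllables are approximated by counting vowel groups, so results
    are rough estimates.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59


def make_maze(text, every=7, seed=0):
    """Build a maze passage: every nth word becomes a three-option choice.

    Distractors are drawn from the other words in the passage itself,
    which is one simple (hypothetical) way to generate them.
    """
    rng = random.Random(seed)
    words = text.split()
    pool = [w.strip(".,!?") for w in words]
    items = []
    for i, w in enumerate(words, start=1):
        target = w.strip(".,!?")
        if i % every == 0:
            distractors = rng.sample(
                [p for p in pool if p.lower() != target.lower()], 2
            )
            options = [target] + distractors
            rng.shuffle(options)
            items.append("(" + "/".join(options) + ")")
        else:
            items.append(w)
    return " ".join(items)
```

A student reads the maze passage and circles one option in each choice set; the score is the number of correct selections.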

The LOMERA Study 

The LOMERA was administered to 4,810 students in Grades 8 to 12 in 11 school districts, including 1,363 English speakers. Hundreds of teachers and their students participated. A test manual was developed that includes level designations, percentile scores, and other descriptive data. The LOMERA is in use in a number of districts.
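The manual’s level designations and percentile scores can be thought of as mappings from raw scores onto a norm group. A minimal sketch of that idea follows, with entirely hypothetical cut scores and norm data; the actual values are published in the LOMERA manual.

```python
import bisect

# Hypothetical raw-score upper bounds for levels 1-4; scores above the
# last cut fall into level 5. The real LOMERA cuts are in the test manual.
LEVEL_CUTS = [10, 20, 30, 40]


def score_to_level(raw):
    """Map a raw score to a level from 1 to 5 using cut scores."""
    return bisect.bisect_left(LEVEL_CUTS, raw) + 1


def percentile_rank(raw, norm_scores):
    """Percent of the norm group scoring below this raw score."""
    return 100.0 * sum(s < raw for s in norm_scores) / len(norm_scores)
```

With such a table, a teacher needs only a raw maze score to look up a level and see where a student falls relative to the norm group.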

Alternate and online forms 

An important related goal was to develop alternate forms of the LOMERA so that retesting remains valid and reliable. One alternate form has been developed and posted on the consortium website. Three other forms are in development.

An online version was also developed. Murphy Odo (in process) administered both the paper and online versions to students and concluded that scores from the two are highly comparable.


The number of new immigrant students who enrol in provincial schools continues to increase, and valid and reliable assessment is a cornerstone of thoughtful instructional planning. The secondary LOMERA, however, was never designed to be a diagnostic assessment. It has been criticized for not providing diagnostic information for teachers, and for revealing little about individual students’ processing of connected discourse or their oral reading fluency. The LOMERA was designed to be an easily administered assessment that provides broad-stroke information about English-reading levels, and it does so quite successfully. It is normed on a substantial ESL group and is not meant to be the only assessment administered.

The ESL Assessment Consortium was established to explore assessment issues. With cut-backs in budgets and ESL personnel affecting school districts, it is essential that issues related to ESL assessment and instruction be explored. A solid foundation has been developed for further consortium activities.

Members of the consortium  

  • Mark Angerilli, ESL Assessment helping teacher, School District 36 (Surrey)
  • Karen Beatty, district teacher—ESL, School District 35 (Langley)
  • Reginald D’Silva, University of British Columbia
  • Lee Gunderson, University of British Columbia
  • Sylvia Helmer, visiting adjunct professor, UBC/VSB
  • Catherine Humphries, program consultant, English language learners, School District 41 (Burnaby)
  • Betty Kosel, ESL consultant (Retired), SD 39 (Vancouver)
  • Raffy LaRizza-Evans, district ESL support teacher, SD 37 (Delta)
  • Daphne McMillan, district ESL resource teacher, School District 40 (New Westminster)
  • Dennis Murphy Odo, University of British Columbia
  • George Monkman, ESL co-ordinator, School District 44 (North Vancouver)
  • Donna Neilson, district ESL resource teacher, SD 45 (West Vancouver)
  • Dale Shea, ESL co-ordinator, School District 43 (Coquitlam)
  • Diane Tijman, ESL & multiculturalism co-ordinator, School District 38 (Richmond)
  • Julie Wright, ESL helping teacher, School District 34 (Abbotsford)

References


  • Ballard, W., Dalton, E., & Tighe, P. (2001). IPT 1 oral grades K-6, Technical Manual. Brea, CA: Ballard & Tighe.
  • Brigance, A. H. (1983). Brigance Comprehensive Inventory of Basic Skills II (CIBS II). North Billerica, MA: Curriculum Associates.
  • Burt, M. K., Dulay, H. C., & Hernández, E. (1973). Bilingual syntax measure. New York: Harcourt Brace Jovanovich.
  • Dunn, L. M., & Dunn, D. M. (1997). Peabody picture vocabulary test. San Antonio, TX: Pearson.
  • Harris, D. P., & Palmer, L. (1986). CELT examiner’s instructions and technical manual. New York: McGraw-Hill.
  • McLeod, J., & McLeod, R. (1990). The GAP. Novato, CA: Academic Therapy Publications.
  • Shohamy, E. (2000). The power of tests: A critical perspective on the uses of language tests. Essex: Pearson.
  • Waddington, N. J. (2000). Diagnostic reading and spelling tests 1 & 2: A book of tests and diagnostic procedures for classroom teachers. Strathalbyn, South Australia: Waddington Educational Resources.
  • Woodcock, R. (Various). The Woodcock Reading Mastery Test. New York: Pearson.
  • Woodcock, R., & Munoz-Sandoval, A. (1993). Woodcock-Munoz language survey. Itasca, IL: Riverside.
