For anyone seeking my presentation documents through a link to this blog, they are available in the previous entry, dated 4/5/10.
Though the BSI Committee no longer requires us to submit a report after attending funded activities, I found last year that writing about CATESOL 2009 helped me retain more of what I’d learned at that conference; hence this report on “The Big One” in Boston two weeks ago, from March 25-27.
I attended several presentations on corpus-based studies, and many on various aspects of language testing and language proficiency assessment. A few miscellaneous presentations will be addressed at the end under “Other.”
Corpus-based Studies
These are becoming increasingly mainstream in the language teaching field. I was glad to have been able to learn a bit about the approach last semester by returning to SDSU for a course in corpus linguistics. To provide a brief background, a corpus-based study can consist of analyzing in different ways an existing corpus, such as MICASE (Michigan Corpus of Academic Spoken English). One can also construct an original corpus: selecting a large body of related texts, inputting them electronically if necessary, converting them to plain text and saving them as a single file, then running them through a program such as WordSmith or NLTK that tokenizes and tags the text. This means that each word is individually coded and identified, with varying degrees of sophistication, for POS (part of speech). The process takes only a few seconds with one of these programs.
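As a toy illustration of that tokenizing step, here is a few lines of plain Python: a crude regex stands in for what WordSmith or NLTK do far more carefully, and the sentence is just a sample, but the principle (raw text in, countable tokens out) is the same.

```python
import re
from collections import Counter

text = "Now that we've tokenized the text, we can manipulate it in different contexts."

# Tokenize: a crude regex stand-in for real corpus tools, which handle
# punctuation, contractions, and hyphenation with trained models
tokens = re.findall(r"[a-zA-Z']+", text)

# Once tokenized, even simple frequency counts become possible
freq = Counter(t.lower() for t in tokens)
print(tokens[:4])   # ['Now', 'that', "we've", 'tokenized']
print(freq["we"])   # 1  ("we've" stays a single token here)
```

POS tagging is the step this sketch omits entirely; that is where the "varying degrees of sophistication" come in.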
Once tagged, the text can be analyzed in various ways with a programming language. Corpus studies ordinarily focus on word frequency, or on word collocation (trends in the interrelationship of one word to another, such as verb + preposition in phrasal verbs). Perl has been the most common language for corpus studies in the past, though my course at SDSU employed the more user-friendly Python with the Natural Language Toolkit (NLTK), developed at the University of Pennsylvania and available online. Several presenters at TESOL ‘10 were interested to hear about NLTK, and I hope it will become more widely known and used in the future.
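To give a concrete sense of what a collocation count involves, here is a minimal sketch in pure Python on an invented snippet. Real collocation studies (and NLTK's collocation finders) go further, asking which pairs co-occur more often than chance would predict, but the raw counting looks like this:

```python
from collections import Counter

tokens = ("we looked up the word and then looked up another word "
          "before we gave up for the day").split()

# Count adjacent word pairs (bigrams); "looked up" is the kind of
# verb + preposition pairing a phrasal-verb study would look for
bigrams = Counter(zip(tokens, tokens[1:]))

print(bigrams[("looked", "up")])  # 2
```

On a corpus of millions of words rather than a one-line sample, the frequent pairs that emerge from exactly this kind of count are the collocations.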
One of the most prominent corpus linguists is Douglas Biber of Northern Arizona University, who attended and presented at the convention. I missed his presentation, but attended presentations by several of his former students. A group of four from Miami Dade College and Georgia State University presented on the use of a corpus-informed curriculum for content-based instruction in four general education courses: biology, psychology, freshman composition, and humanities. Four classes in each area, covering 16 hours of class time, were transcribed using Panopto for video capture. It was found that words from the Academic Word List (AWL) are used differently between courses, and that the most common words are discourse organizers such as know, going, now, and you have to. Academic language in general features hedging, frequent defining of terms, and “sign-posting”, i.e. “Now that we’ve tokenized the text, we can manipulate it in different contexts.”
Eric Friginal of Georgia State University presented on the integration of corpus-based approaches in ESL writing instruction, showing how simple concordancing software such as MonoConcPro can be used by advanced ESL writers to improve their use of appropriate academic discourse, focusing in his study on linking adverbials, reporting verbs, and verb tenses. He cited a number of studies and recent publications that explore trends in vocabulary acquisition and the mastery of grammar, as well as the use of corpus tools for the teaching of specific skills in genre-based writing.
My own opinion is that there is still enough novelty to corpus-based research that it retains a bit of snob appeal, and is employed at times more as a sophisticated toy than as a tool to supplement common sense and native speaker intuition.
Testing & Language Proficiency Assessment
Teresa O’Donnell of CEA (Commission on English Language Program Accreditation) conducted a pre-conference institute on student learning outcomes and their assessment. Though unable to attend it, I emailed her before the conference to see if I was leaving out anything in my own presentation that I ought to be aware of. She replied that my presentation was consistent with the objectives of her workshop, and apparently recommended it to some of the workshop participants.
Alicia Munoz of Cuyamaca College presented with a group of California community college teachers on the CB-21 coding project. Though I can talk to her just about any time in San Diego, we discussed aspects of the project I hadn’t been clear on, and it was interesting to hear the historical perspective on it as a teacher-based initiative.
Jim Pettersson and Tom Henry of Utah Valley University gave a very detailed workshop on practical guidelines for reliable grading. Little in it was new, but it was an impressive compilation of information about rubric development, types of grading, approaches to disputes over grades, types of feedback, and the legal implications of The Family Educational Rights & Privacy Act. The latter led me to add a disclaimer to the writing samples from my own presentation and to further modify them to protect student confidentiality.
In a surprisingly sparsely attended colloquium moderated by Deborah Kennedy of CAL (Center for Applied Linguistics), a test developers’ panel representing ACT (The COMPASS Test), CASAS (Comprehensive Adult Student Assessment Systems), CTB (The TABE Test, and most famously the inventors of the #2 pencil), and CAL’s own BEST Test (Basic English Skills Test) gave presentations that were almost but not quite sales pitches for their respective assessment instruments. Another frequently used acronym was NRS (National Reporting System requirements, U.S. Department of Education). Apparently, none of these instruments is at this time usable as a placement instrument under NRS requirements, which is why the CELSA is still the only acceptable instrument in California. The COMPASS Test has an optional capability for artificial intelligence machine-scoring of writing samples with instant feedback. The ACT representative talked with me about it afterwards and called it “scary”. The company that developed the AI software is very secretive about it, though from my single computational linguistics course it seems to be a very sophisticated use of naïve Bayes.
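The vendor's scoring engine is proprietary, so this is pure speculation on my part, but the naïve Bayes idea itself is simple enough to sketch: score a text under each class by adding log word likelihoods to a log prior, and pick the winner. The labels and training data below are invented, and a real essay scorer would use far richer features than raw word counts:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (token_list, label). Learn log priors and per-class word counts."""
    label_counts = Counter(label for _, label in docs)
    word_counts = {label: Counter() for label in label_counts}
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {label: math.log(n / len(docs)) for label, n in label_counts.items()}
    return priors, word_counts, vocab

def classify(tokens, priors, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    best_label, best_score = None, float("-inf")
    for label, prior in priors.items():
        total = sum(word_counts[label].values())
        score = prior
        for t in tokens:
            # Add-one (Laplace) smoothing so an unseen word can't zero out a class
            score += math.log((word_counts[label][t] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data: word lists standing in for scored essays
docs = [(["however", "furthermore", "analysis"], "strong"),
        (["nice", "good", "fun"], "weak")]
priors, word_counts, vocab = train_nb(docs)
print(classify(["however", "analysis"], priors, word_counts, vocab))  # strong
```

What makes the commercial systems "scary" is presumably everything layered on top of this skeleton, not the skeleton itself.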
In a colloquium on the use of the Common European Framework of Reference (CEFR) and other proficiency scales, JoAnn Crandall and five others discussed the implementation of standards for EFL contexts. The CEFR was frequently encountered throughout the conference, and the panel enjoyed my Rip Van Winkle-like perspective on the development of CEFR from the work of Van Ek in the early ‘80s, which was prominent in my own master’s thesis. The CEFR and the ACTFL scale (which begot California Pathways, which in turn begot CB-21 coding for ESL) are distant cousins, with a certain amount of “cross-pollination” between them.
For the remaining presentations, the titles give a pretty good indication of the topic and content: Differentiating Language Proficiency and Developmental Writing Skills for ELLs (children), Teacher Created Benchmark Assessments of Writing Aligned With ELP Standards (secondary school learners), Student Writing Without Teacher Death: Low Workload Supplements for Composition (the perennial problem of time-consuming grading), and Assessing a New Writing Task Type: Short Answer Responses (SARs).
Other
The Thursday plenary, TESOL: Past, Present & Future was a sleeper. I wonder if officers and former officers of the organization realize how boring they are when they joke around for an hour about matters and past events few others know or care about.
The Friday plenary by Maryann Wolfe on the other hand, The Evolving Reading Brain: Implications for Cognitive and Linguistic Development, was fascinating. The visuals included a number of brain scans showing the parts of the brain used during reading in various languages, including those with syllabaries and pictographs.
I attended a whimsical presentation by Greta Vollmer on The Visual Rhetoric of Everyday Genres, which included the analysis of cereal boxes and CD covers. I commented that she was apparently a fan of Roland Barthes, and several of us got into a discussion of semiotics afterwards.
The last presentation I attended on the last day of the conference was Social Responsibility: What and Why? This was mainly because Kip Cates, a founder of the Social Responsibility IS (Interest Section), was an old friend from Japan days whom I hadn’t seen in fourteen years. Several former presidents of TESOL, retired or nearing it, have become active in this IS. The session seemed a bit touchy-feely; I felt like telling one of the officers at one point to “man-up!” It is, however, the newest IS and an important development in the organization.
Conclusion
The language teaching field seems to be going through a rather self-confident phase. There are conference presentations on brain function, an almost smug knowingness among corpus linguists, and a tremendous renewed interest in general proficiency scales (particularly CEFR). The latter was my main area of interest until the mid-‘90s, when the “eclectic method”, task-based learning, and the "Gosh, we really don't know as much as we thought we did!" attitude then in vogue encouraged such scorn for the concept of general language proficiency that I basically went into hibernation for a decade and a half. It would be wrong to say that I missed nothing during that time, but with a certain amount of effort it wasn’t hard to get back up to speed. It was nice to give a well-received presentation of my own and to contribute something to the discussion.
1 comment:
Great report, Kevin! I plan to write mine in a more piecemeal way.
Thanks for sharing your insights and experiences.
Lee