I’ll be blogging live this weekend from Wabash College, where I’m attending a NITLE-sponsored conference (Pedagogy and Digital Technologies: Language Labs in the 21st Century). Keep an eye on this page for frequent (read: unpolished) updates throughout the day tomorrow and Sunday morning, and I’ll post a more complete and fleshed-out recap early next week.
Tonight’s report: Carl Blyth, Pondering Learner Preferences: The Role of Formative Evaluation in the Development of Digital Materials.
A quick tip to conference organizers: never, ever position a keynote speaker’s laptop so that he has to turn his back on the audience to see his presentation. It makes it hard for him to read from his slides. (How insensitive.)
A quick tip to keynote speakers: it makes me cringe when I see you open Internet Exploder (er, Explorer) on your laptop. Please, for the love of humanity, Get. Something. Else. Anything else. And just because you -can- read from your abstract and then read from your slides does not mean you -should-.
I walked into this presentation skeptical of the ability of a state-uni prof to offer much to us at liberal arts colleges, and walked out even more so. We don’t have $500,000 (the amount committed by UT-Austin to the three projects discussed in the keynote). We don’t have design teams or programmers. The head of Prentice Hall doesn’t ask us what he needs to do to get us to adopt his brand-spanking-new textbook. I’m glad that -you- do; I wish that all educators had the money and the people and the time and the influence they needed to get their jobs done. But when you work in an environment of plenty, and have for over a decade, how can you possibly imagine (or remember) what it’s like to work in the trenches, with whatever you can cobble together in your “copious free time” and little-to-no money?
For example: one of the conclusions was that we should build our own materials. It’s true – language textbooks and the materials that accompany them generally suck. They’re expensive to produce, and as a result have to aim for the lowest common denominator, which in turn means they work equally poorly for everyone. Revamping them takes time and money, of which most language technologists have little. What’s that, you say? Intercampus collaboration? It makes great dessert talk, but only when you avoid the most pertinent issues: who’s going to foot the bill? Who’s going to oversee, host, manage, and maintain said collaboration? Besides, if I had time to collaborate, I wouldn’t need to do so.
Another topic that we’ve touched on repeatedly here on LLU, the student-centered curriculum, also came up this evening. From the presentation’s abstract:
While formative evaluation results in a more learner-centered curriculum with more user-friendly technology, it also presents thorny challenges. For example, do students really know how they learn best? How can developers discern when student wants indicate legitimate needs? And what about the wants and needs of the developers?
I’m glad that you have developers. I wish we all did. But the wants and needs of developers are completely and absolutely irrelevant in this situation. As for students: do they really know how they learn best? Maybe so, maybe not. As my colleague Ines (a German faculty member also in attendance) and I discussed, students often come to college lacking basic language learning strategies. As educators and as technologists, our job is not to determine which approach will work for each student, but to present students with many different options and let them decide for themselves.
I do need to give credit where credit is due; at one point Carl stated that we can never be sure what students want unless we ask them. That is absolutely true, and something that a lot of faculty and technologists don’t get. But for him, students’ wants and needs are still discrete groups:
Through the process of formative evaluation (i.e., learner reactions and developer responses), the developers tried to strike a balance between what students said they wanted (i.e., more decontextualized language practice) and what developers believed that students needed (i.e., more contextualized language use).
Students know when a strategy does or doesn’t work for them, even if they don’t have the background in theory or the pedagogical vocabulary to express it.
Speaking of pedagogy – what’s the effect of all of this on student learning? When posed the question, Carl announced that the materials really helped on the “attitudinal scale” and that enrollments were positively affected (which made a good selling point to the administration, apparently). But he also admitted that the effect on the learning of the students who used the programs was negligible. So what’s the point, then, of continuing the program? And if a program with an abundance of resources can’t successfully take textbook materials and make them into something that actually helps improve learning, why should I try the same?