NITLE: Pondering Learner Preferences

I’ll be blogging live this weekend from Wabash College, where I’m attending a NITLE-sponsored conference (Pedagogy and Digital Technologies: Language Labs in the 21st Century). Keep an eye on this page for frequent (read: unpolished) updates throughout the day tomorrow and Sunday morning, and I’ll post a more complete and fleshed-out recap early next week.

Tonight’s report: Carl Blyth, Pondering Learner Preferences: The Role of Formative Evaluation in the Development of Digital Materials.

A quick tip to conference organizers: never, ever position a keynote speaker’s laptop so that he has to turn his back on the audience to see his presentation. It makes it hard for him to read from his slides. (How insensitive.)

A quick tip to keynote speakers: it makes me cringe when I see you open Internet Explorer on your laptop. Please, for the love of humanity, Get. Something. Else. Anything else. And just because you can read from your abstract and then read from your slides does not mean you should.

I walked into this presentation skeptical of the ability of a state-uni prof to offer much to us at liberal arts colleges, and walked out even more so. We don’t have $500,000 (the amount committed by UT-Austin to the three projects discussed in the keynote). We don’t have design teams or programmers. The head of Prentice Hall doesn’t ask us what he needs to do to get us to adopt his brand-spanking-new textbook. I’m glad that -you- do; I wish that all educators had the money and the people and the time and the influence they needed to get their jobs done. But when you work in an environment of plenty, and have for over a decade, how can you possibly imagine/remember what it’s like to work in the trenches with whatever you can cobble together in your “copious free time” and little-to-no money?

For example: one of the conclusions was that we should build our own materials. It’s true – language textbooks and the materials that accompany them generally suck. They’re expensive to produce, and as a result have to aim for the lowest common denominator, which in turn means they work equally poorly for everyone. Revamping them takes time and money, of which most language technologists have little. –What’s that you say? Intercampus collaboration? It makes great dessert talk but only when you avoid the most pertinent issues: who’s going to foot the bill? Who’s going to oversee/host/manage/maintain said collaboration? Besides, if I had time to collaborate, I wouldn’t need to do so.

Another topic that we’ve touched on repeatedly here on LLU, the student-centered curriculum, also came up this evening. From the presentation’s abstract:

While formative evaluation results in a more learner-centered curriculum with more user-friendly technology, it also presents thorny challenges. For example, do students really know how they learn best? How can developers discern when student wants indicate legitimate needs? And what about the wants and needs of the developers?

I’m glad that you have developers. I wish we all did. But the wants and needs of developers are completely and absolutely irrelevant in this situation. As for students: do they really know how they learn best? Maybe so, maybe not. As my colleague Ines (a German faculty member also in attendance) and I discussed, students often come to college lacking basic language learning strategies. As educators and as technologists, our job is not to determine which approach will work for each student, but to present students with many different options and let them decide for themselves.

I do need to give credit where credit is due; at one point Carl stated that we can never be sure what students want unless we ask them. That is absolutely true, and something that a lot of faculty and technologists don’t get. But for him, students’ wants and needs are still discrete groups:

Through the process of formative evaluation (i.e., learner reactions and developer responses), the developers tried to strike a balance between what students said they wanted (i.e., more decontextualized language practice) and what developers believed that students needed (i.e., more contextualized language use).

Students know when a strategy does or doesn’t work for them, even if they don’t have the theoretical background or the pedagogical vocabulary to express it.

Speaking of pedagogy – what’s the effect of all of this on student learning? When the question was posed, Carl announced that the materials really helped on the “attitudinal scale” and that enrollments were positively affected (which made a good selling point to the administration, apparently). But he also admitted that the effect on the learning of the students who used the programs was negligible. So what’s the point, then, of continuing the program? And if a program with an abundance of resources can’t successfully take textbook materials and make them into something that actually helps improve learning, why should I try the same?

4 Comments

  1. Carl Blyth · September 30, 2006

    Ahhhh…where to begin?

    Presentation setup–totally agree that the physical setup was very strange indeed. The laptop was behind me and I was positioned in the center of the room. I couldn’t see the screen. I couldn’t see the computer. I couldn’t see half the audience. I have never given a talk where the physical layout was so disorienting. ARGH! Giving a good presentation depends on many different details: physical setup, computer, acoustics. I think the technology worked reasonably well but the Dell laptop was not my favorite (I’m a Mac user). Moreover, I use Safari, not Explorer. As a consequence, there was a slight disorientation factor given that the interfaces (PowerPoint, Explorer) were different from what I am used to. I actually do use both Macs and PCs and felt that I could have gotten the hang of the interface had it not been for the strange layout–facing away from my computer. Lesson learned–use your own equipment!

    Healthy skepticism–Your comments about being skeptical regarding my talk (“What can a professor at a big state university have to say to those of us working in small liberal arts colleges?!”) really resonated with me. I had the same reaction when I was initially contacted by NITLE. A consortium of small liberal arts colleges? Hmmm…not exactly my personal situation (University of Texas has 50,000+ students). So, I thought about this very issue for some time and tried to come up with a talk that I thought would be relevant to ALL foreign language professionals, especially those who are involved in curriculum development.

    Granted, my situation is very unlike that of the members of the audience. But that was not the point. My point, which apparently you strongly agreed with, was that formative evaluation needs to be part of your developmental cycle. It is indispensable. And yet, while most language professionals would agree, few make the effort to integrate serious, on-going formative evaluation into their practice/development. My intent was to show the impact that formative evaluation can/should have on our materials and classroom practices. Don’t be so literal! I decided to give my presentation as a narrative rather than a research report BECAUSE that underlined the particularities of my situation. Those aspects of my personal narrative that diverge from your own ‘story’ should not keep you from deriving benefit from general principles. But it seems that the leap was too great or that the details of my personal situation were too off-putting (“He has a developmental team and lots of $$$. I don’t!”).

    Let me restate the main points without the narrative noise:
    1) Formative evaluation data are so useful yet so rarely gathered in a systematic fashion.
    2) It takes a lot of effort (and time) to tease apart needs from wants. I don’t believe that students always know how they learn best. In fact, they don’t always know what they want! That is precisely why it took us 10 years and multiple methods of assessment.

    Your comments bring to mind something that Felix (Pomona College) was saying in the morning session about the future of language labs. He argued (persuasively, I thought) for technology with ‘quick and dirty’ production values rather than products that require professional programming and design teams. Personally, I think there is room for both. They both have their place.

    onward,

    carl

  2. Carl Blyth · October 2, 2006

    I wanted to respond to the question raised in the last paragraph of the earlier post: why go to all this bother when apparently the performance gains are negligible? Or something to that effect. Well, yes, good question. What I said is that our study showed that students using our online materials and those using the same printed, offline materials showed few statistically significant differences as measured by a standardized test, e.g., a typical four-skills, discrete-item test. There was one area where the students using our online program outperformed their offline counterparts–listening comprehension. I think the reason is pretty straightforward. The audio files in the online program are much more accessible than the analog tapes, and students simply did more listening.

    But the bigger point here is one about assessment (see subsequent posts). Note that we used traditional performance-based measures (standardized proficiency tests) to assess the impact of our online materials. And using the traditional grammar-driven tests, we showed that the new materials produced similar results. But I am convinced that the students using the rich, multimedia materials learned many things that we did not attempt to assess. Hypermedia and social software produce different results that can only be understood when we develop better methods of assessment (see the two Barbaras’ posts to that effect). There is LOTS of incidental language/culture learning going on that simply does not show up when discrete-item proficiency tests are the metric.

    But I am worried by your dismissive tone about attitude and motivation. The main finding in our formative evaluation data was that students really liked our hypermedia materials. That is HUGE. Motivation is central to all learning, and even more important in language learning, which requires years of concerted effort. Applied linguistics research shows again and again that MOST pedagogical treatments/methods are effective to a certain degree. Most methods work. Language learning is developmental and guided by powerful cognitive processes. IMHO, teaching methods and materials can’t really change the route of acquisition, which is developmental. The most we can do is speed things up a bit and keep ’em excited so they will continue their studies. In fact, I would argue that the real job of FL educators is to help our students successfully achieve intermediate proficiency and then to motivate them to study or travel abroad. To push them into immersion contact with the L2 culture. But I digress.

    My basic points are these: 1) Our performance findings are pretty much in keeping with the literature on methods (they all work to a degree, but there is no panacea when it comes to developing real communicative competence, which grows ever so slowly); 2) Motivating students to pursue further study is key in language learning, since the endeavor lasts several years and most students burn out in the early stages.

  3. Ryan · October 3, 2006

    Formative evaluation data are so useful yet so rarely gathered in a systematic fashion.

    Agreed, but that raises the question: if most language professionals understand its importance, why doesn’t formative evaluation occur more systematically?

    It takes a lot of effort (and time) to tease apart needs from wants. I don’t believe that students always know how they learn best. In fact, they don’t always know what they want! That is precisely why it took us 10 years and multiple methods of assessment.

    I think that, provided they’ve had adequate exposure to different learning strategies, students know how they learn better than we ever will. They may not always have the vocabulary to express that…but that doesn’t mean the knowledge doesn’t exist on some instinctive level, and I think we need to respect that. Nevertheless, it’s true that separating wants and needs is tricky; the problem is, many language professionals simply don’t have the resources (time, money, staff) to spare for such an endeavor. That’s why I find your personal narrative so compelling – your particular environment allowed these programs to be developed.

    I bring this up not to whine, but to pose the following questions: How do small liberal arts colleges apply these general principles when we’re already squeaking by on our current funding? Can/should we depend on the kindness of large universities? Are the tools large universities create appropriate on a smaller scale in a different environment? etc etc etc

    As for motivation:

    In fact, I would argue that the real job of FL educators is to help our students successfully achieve intermediate proficiency and then to motivate them to study or travel abroad.

    Yes and no. At some point, educators (in general, not just FL educators) need to step back and let students find their own motivation. IMHO, if students are going to learn, they have to meet educators halfway…at the same time, educators need to give them the chance to do so instead of constantly dragging them along. For example: in one of this weekend’s sessions, students had the opportunity to react to Todd Bryant’s presentation on gaming in FL education, and to your keynote. We heard student after student say that the way to get them to learn is to make learning compelling, as if the job of educators is to give and of students to take. That isn’t, or at least shouldn’t be, how it works. Learning is messy; learning is hard; sometimes learning is no fun. I’m not arguing that educators should do -nothing- to make learning interesting, but that if students don’t have some kind of internal motivation for learning a foreign language…then maybe they shouldn’t be learning it in the first place.

  4. Language Lab Unleashed! - it’s not your middle school language lab… » Motivation, Social Mapping, and more · March 26, 2007

    […] More on tonight’s speaker: I enjoyed Stacy’s talk, both content and form. But – as I’ve said before – I am unclear as to why an organization which caters to smaller colleges continues to bring in keynote speakers from large university settings. I’m not trying to be a separatist … I think small college technologists have a lot to learn from our colleagues at large multi-campus research universities, and vice versa. And maybe it’s not a pattern – maybe it’s just coincidence. However, I’m also on the planning committee for an upcoming event, and we’re in the early stages, but the only suggestion so far for a keynote speaker has been someone from a large research university. […]
