It is the beginning of the second semester where I teach, and with that come more language placement tests to be administered. We have a few new students transferring in, and several students returning from either a semester abroad or an intensive Winter Term language experience this time around… and we use our online placement tests to try to find an appropriate landing spot for each of them. Our system works for us, but there are some drawbacks, and I find myself wondering whether we could be doing this better.
I will be the first to say there is no one perfect language placement test system out there, so LLULive #16 won’t be a pursuit of the Holy Grail (although I may ask you what your favorite color is).
But what I am interested in talking about is how others use placement tests in the placement process. I haz questions: What can placement tests measure? What can’t they measure? Do we choose tests because they are easy to deploy across large numbers of students? Who (or what) scores the tests? What do our tests tell our students about what matters in language learning at our schools?
Please join the conversation this Thursday, February 6th at 4 p.m. EST (2100 GMT).
We will be using Appear.In as our platform. To join simply visit the LLU Live video conversation link in Chrome, Firefox, or Opera. There’s no software installation or account needed, although you may need to give your browser permission to access your camera and microphone.
LLU Live is informal, collaborative, and not a webinar. We won’t be recording this, but the info shared during these 20-25 minute get-togethers will be posted as a follow-up to this post after Thursday.
Hope to see you Thursday!
Rabbit by Christy Presler from The Noun Project
UPDATE: Here are some of the resources we discussed last Thursday. If there are others not mentioned here, please add them in the comment section!
The WebCAPE Exam by PerpetualWorks: this exam is used by hundreds of schools. On the pro side, the WebCAPE is available for a wide variety of languages. The test is also adaptive, that is, the questions that are delivered are based upon the student’s previous answers. The cost is relatively low per person ($3/test). The cons to this exam are that the questions are suppressed, that is, it is not possible to see what a student got right or wrong when evaluating the results. The test is entirely multiple-choice, or as one commenter said, a “behaviorist” format, so for the expert test takers in our midst this could be an advantage. Calibrating the test to one’s curriculum is also a bit of a guessing game (since you never see the questions asked), but once adjusted it seems to be consistent. Another comment from our conversation: the WebCAPE is not good for measuring abilities in the low intermediate range.
SLUPE is a new addition to the placement test line-up. Created by Saint Louis University, SLUPE is free and open source and is currently being evaluated (according to their website) by several hundred colleges and universities.
We also talked about the tools made available by CLEAR at Michigan State. Our thoughts were that these tools are a great step forward but they sometimes feel a bit clunky… would that there were more funding available for NFLRCs like CLEAR to keep the production and support of these tools moving forward!
Also mentioned in our conversation: Decision Desk. Ryan brought this one up and mentioned that it could be a potential model for rethinking how placement exams are done. Instead of having students take online multiple-choice tests, something like DD (which is used for our Conservatory of Music admissions auditions, among other things) could be used to capture live speaking samples (and/or live writing samples) from test takers. We all agreed that this is a piece that is routinely missing from placement exams.
Know of something we missed, or have a comment about any of the above? Please add your thoughts below!