Given the richness of the Math Twitterblogosphere, it’s pretty hard to share something new that makes a substantial contribution to our online community. I think I have something worth sharing here. It answers this question: how can I adaptively target content to students’ needs without restricting assessment items to the lame formats — like multiple choice — that computers are able to read? In other words, how can I get (most of) the advantages of adaptive learning systems without all the drawbacks?
A big advantage of meatsacks [read: human teachers] over computers is that a human can look directly at the work. Computers can only indirectly evaluate where a student went wrong; they can only look at the shadow on the ground to tell where the fly ball is going. Meatsacks can see directly where the student is going awry.
And yet computers do have an advantage: it’s very easy for them to keep track of what each student needs to work on and to deliver practice or assessment that’s targeted to those needs. Can we have the best of both worlds?
I’ve created a system that can make a unique printable mini-quiz for each student, depending on what skill they need to be assessed on. It draws on an item bank, categorized by skill, that can be as large as you want so questions won’t be repeated on successive retakes. Quizzes also print in order by students’ position in the seating chart, so you can simply walk down each row and breezily hand a personalized quiz to each student. (Not every quiz should be personalized, though. At least half the time, I pick the topic and everyone gets the same quiz. Personalized quizzes are for efficient retakes.)
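The generator itself lives in a spreadsheet, but the core logic is simple enough to sketch. Here it is in Python, with a made-up item bank and roster (the skill names, questions, and seat numbers below are illustrative, not from my actual files): pick fresh items for each student's chosen skill, skip anything they saw on a previous retake, then emit quizzes in seating-chart order.

```python
import random

# Hypothetical item bank: skill name -> list of question prompts.
# The real system stores this in a spreadsheet tab, one row per item.
ITEM_BANK = {
    "solving-linear-equations": [
        "Solve 3x + 5 = 20.",
        "Solve 2(x - 4) = 10.",
        "Solve 7 - x = 12.",
        "Solve x/3 + 1 = 5.",
    ],
    "graphing-lines": [
        "Graph y = 2x - 1.",
        "Graph y = -x + 4.",
        "Find the slope of y = 5x + 2.",
        "Graph x + y = 6.",
    ],
}

def build_quiz(skill, already_seen, n_items=2):
    """Pick n_items questions for one skill, avoiding items the
    student has already seen on earlier retakes."""
    fresh = [q for q in ITEM_BANK[skill] if q not in already_seen]
    return random.sample(fresh, min(n_items, len(fresh)))

# Hypothetical roster: (seat number, name, requested skill, items seen before).
requests = [
    (7, "Avery", "graphing-lines", set()),
    (2, "Blake", "solving-linear-equations", {"Solve 3x + 5 = 20."}),
]

# Sort by seat so the printed stack matches the room, row by row.
for seat, name, skill, seen in sorted(requests, key=lambda r: r[0]):
    quiz = build_quiz(skill, seen)
    print(f"Seat {seat} ({name}): {skill}")
    for q in quiz:
        print(f"  - {q}")
```

The spreadsheet version does the same thing with lookups and hidden tabs instead of functions, which is why the FULL version linked below is worth opening if you want to see under the hood.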
The system is free, of course, and fully editable by anyone who knows how to work a spreadsheet. Here’s how it works. Each video is only a few seconds.
Step 1: Students select the skill they want to be quizzed on.
Step 2: You display students’ current choices on-screen. The screen updates live, so students who change their minds can see their most recent selection.
Step 3: You just copy and paste the Google Form responses into the quiz generator.
Step 4: With a simple CTRL + P, you print the entire class set. It automatically prints in order by seating chart.
Step 5: Updating your seating chart is easy. Changes to the seating chart automatically update the printing order of the quizzes.
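Steps 2 through 4 amount to a small join: take the Form responses, keep each student's most recent choice, and order the quizzes by seat. A minimal Python sketch of that data flow (the names, timestamps, and seat numbers are invented for illustration):

```python
# Hypothetical Form responses pasted from the Google Sheet:
# (timestamp, student name, skill chosen). Sorting by timestamp and
# overwriting means a student who changes their mind gets their latest pick.
form_rows = [
    ("10:01", "Avery", "graphing-lines"),
    ("10:02", "Blake", "solving-linear-equations"),
    ("10:05", "Avery", "solving-linear-equations"),  # Avery changed her mind
]

# Hypothetical seating chart: name -> seat number.
seating = {"Blake": 2, "Avery": 7}

# Keep only each student's most recent choice.
latest = {}
for _, name, skill in sorted(form_rows):
    latest[name] = skill

# Order the print run by seat so you can walk down each row with the stack.
print_order = sorted(latest, key=lambda name: seating[name])
for name in print_order:
    print(seating[name], name, latest[name])
```

In the actual spreadsheets this join happens through cell formulas, so editing the seating chart tab is all it takes to change the print order.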
the files you need
- Quiz generator spreadsheet program
- FULL version of quiz generator (with all the hidden tabs of the spreadsheet displayed, so you can see under the hood)
- Google Form responses spreadsheet program
- Seating chart program
If this post gets decent page views, I’ll come back in and write some tech support pieces to explain how to use all the features: how to add assessment items with images (it’s not trivial to add images into a cell of a spreadsheet); how to link up the spreadsheets correctly; and how to toggle all the various options in the program.
is this not overkill?
I’m pretty sure it’s not. Let me nail my 95 theses to this door here and see what you think. Here goes:
- Students should not grade their own formative assessments. An expert needs to grade them.
- That expert should be a human, not a computer, for reasons given above.
- With my current grading and prep load, I’m already maxed out on how much grading I can do. I can’t check huge numbers of ungraded formative assessments in addition to grading the tests/quizzes I already give.
- Therefore, formative assessments must replace some of my existing grading load, not add to it. They have to count in the gradebook.
- But if they’re graded, they won’t really be formative unless students can do retakes and earn credit for improving.
[My conclusion] Formative assessments must be graded tests or quizzes that students can retake.
- Should they be tests, or should they be quizzes? Many teachers use a formative assessment system with tests. Here’s Dan Meyer’s version. Let’s think about that. (If you think Dan’s is not the best example, let me know in the comments. I don’t want a straw-ish man here.)
- Advantage: Tests can be comprehensive. Each test can assess the full range of skills covered so far.
- Big disadvantage: Tests aren’t very frequent. Ideally, students would be able to relearn something and then earn credit for demonstrating proficiency within a couple of days, instead of waiting for the next test.
- Of course, you could have a policy that students may always come in informally outside of class to demonstrate mastery, but many teachers find that students don’t really bother to come after school to do that. In fact, I think if all the students who should come really did, it would overwhelm my ability to informally generate assessments after school.
- Here’s a bureaucratic reason that tests might be the wrong vehicle for formative assessments: in many districts, teachers don’t have control over the tests they give. There tends to be more flexibility and independence around teacher-generated quizzes.
- Okay, let’s consider using quizzes as formative assessments. Advantages: they’re more frequent, and you can still use your district’s tests. But lots of disadvantages, too.
- Advantage: Shorter, more frequent assessments are better for learning. Or so says Marzano.
- Big disadvantage: How will retakes work? If you have 10 skills this quarter, and you quiz a different skill each day, a student might need to wait up to 10 class days for the chance to retake the skill they’re ready to re-do. That’s unacceptably long.
- Logistical disadvantage: Even though frequent assessment is good for learning, how can I squeeze quizzes into the last 10 minutes of class consistently without losing too much instructional time? These quizzes need to be very quick to hand out (and to pass back, once they’re graded).
- Solution: you need a way to let students pick the quiz topic they want to retake, so that on some days, different students can take different quizzes. If this happens frequently, students can relearn and reassess in a tight loop lasting no more than a few days.
- Logistical problems:
- Imagine laying out 10 quizzes on the counter, or on your teacher desk, and inviting each row of students to come up and pick a quiz. If these quizzes are short (4 questions or so), the first students may be done by the time the last students have picked their quiz.
- Entering grades in the gradebook is a challenge. Try typing 120 grades into the gradebook, spread across up to 10 different columns, while overwriting old grades (in a grading program with no “undo” button), without making a single mistake. Not easy!
- In addition, managing answer keys is a huge problem here. Try grading 5 class sets of quizzes in 10 minutes per set, when you need to make 10 different answer keys and then flip between those 10 keys to check students’ quizzes.
- So maybe my assessment tool is not overkill after all.
- Even if I want to assess a single skill, I can toggle an option to print out 2, 3, 4, or more different versions of the quiz, to reduce opportunities for cheating.
- There’s a tool to help organize grade entry.
- The tool to manage answer keys isn’t built yet, but in principle it’s not hard. It should be possible to bring up each student’s answer key on-screen simply by typing in the student’s 2-digit class ID # (e.g., “23”, representing the student in seat #23) printed at the top of the quiz. That would cut down on grading time. Of course, it would mean answer keys and rubrics would have to be written for each item in the item bank ahead of time, which is why this feature hasn’t been built yet.
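Since that feature doesn't exist yet, here's only a sketch of how the lookup could work, in Python with invented item IDs and answers: each seat ID maps to the item versions printed on that student's quiz, and each item maps to its key.

```python
# Hypothetical mapping from seat ID to the item versions that seat received,
# plus an answer key per item. In a spreadsheet these would be two tabs.
quiz_items = {23: ["LIN-04", "LIN-07"], 12: ["GRF-02", "GRF-05"]}
answer_keys = {
    "LIN-04": "x = 5",
    "LIN-07": "x = -3",
    "GRF-02": "slope 2, intercept -1",
    "GRF-05": "slope -1, intercept 4",
}

def key_for_seat(seat_id):
    """Type the 2-digit seat ID printed on the quiz,
    get back that student's answer key."""
    return [(item, answer_keys[item]) for item in quiz_items[seat_id]]

print(key_for_seat(23))
```

The hard part, as noted above, isn't the lookup; it's writing the keys and rubrics for every item in the bank ahead of time.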
- Here’s how I handle passing back daily quizzes quickly: students turn their papers in to a tray specific to their seating area (left, middle, or right). When I grade the papers, I keep them grouped like that. Then when I want to pass them back, they’re already grouped by seating area, and I’m not traversing the room 10 times to pass them all back. I can pass back a class set in 1 minute.
does this fix the real problem?
The root problem is that it’s hard to get kids to take the initiative and fill in their own skill gaps, even when you identify them. Here’s Michael Pershan, over at his blog:

> Second, I don’t think the feedback itself given in SBG [Standards-Based Grading] is helpful to kids. What’s the path from “You’re a beginner at solving linear equations” to actually learning to solve a linear equation? Some say that kids will go home and study linear equations more if you tell them they’re bad at them, which doesn’t fit with what I know about high school students. But maybe your kids are different than mine.
Not only do I agree with Michael here…I also designed this entire project as a response to his critique and Dan Meyer’s larger criticism of adaptive learning systems.
Here’s why, in my classroom, the system I’m presenting seems to avert the pitfall Michael’s pointing to. After letting students choose their retake skill on the Google Form, I let students go to different stations with study guides for their chosen skills. There’s something about signing up for a skill’s retake and then immediately diving into that skill’s study guide (starting with circling the ones you got wrong last time) that seems to lead students to feel there’s a point in trying to relearn the skill, and that what’s being asked of them is a manageable bite.
And I don’t mind making everyone do a retake, even students who had 100s on everything. Short, frequent quizzes are good for learning, thanks to the testing effect.
Cri de coeur
I’ve never taken a coding class. Millions of people out there could have done this better than I did. But even if I felt like waiting a few more years for a good formative assessment solution, I don’t even see one on the horizon. So I made my own. In the last 3 years, I’ve created this quiz generator, written all the quiz items (most of which I’m not publishing here for test security), made the Khan Academy grading tool in the previous blog post, and tried to rewrite as many lessons as possible to make them better. When do I get to focus on just teaching, being a dad, and being me? The world truly has no idea how much teachers do.
Meanwhile, it’s not really my dream that lots of other teachers start to use this. My dream would be for assessment companies, like MasteryConnect, to include these features in their own programs, so doofuses like me didn’t have to build their own quiz generator (and so teachers had a convenient platform for sharing quiz questions instead of writing them all from scratch). But almost every edtech company out there is pushing for everything to be done online. A paper-based assessment system with human graders just isn’t that interesting to them.
*Note about the title of this post: if you know me, you know I’ve worked hard to find a way to make Khan Academy a useful tool for my Algebra 1 students. So I only want to burn the computer when it comes to real assessments. As a practice tool, computerized exercises are fine with me.