## Teaching linear functions for meaning

For years I’ve been rearranging the pieces of my linear functions unit like a jigsaw puzzle, trying to optimize comprehension for weaker students.  Weaker students see math as a giant bag of disconnected steps to memorize, right?  Changing that can require a cultural shift in the classroom that I’m not usually able to pull off.  It’s not that student engagement is so hard — there are lots of tasks that kids get excited about.  But while those tasks might motivate kids to learn something like slope, they don’t always help kids internalize what slope really means.

And even if you can give them an aha! moment today, it may be lost by tomorrow.  In fact, it probably will be.  Once you introduce the slope formula, slope becomes that formula.  It barely even matters if today’s lesson created a nice footpath in students’ brains between “slope” and the change in one quantity per unit of change in another.  Once that formula comes out, your measly footpath is no competition for the 8-lane highway that’s opened up between “slope” and (y2–y1)/(x2–x1).

If the intuitive meaning is going to compete at all, you’re going to have to write a lot of drill and assessment questions that force students to traverse that footpath over and over until they notice what nice scenery it has.

Here’s a screenshot of the key question type I’ve developed to make that happen:

• You can visually interpret why m = y/x doesn’t work when the y-intercept isn’t zero.  In the top picture, y/x would be 105/2=52.5 grams.  Does each M&M weigh 52.5 g?  No, because 52.5 g would represent 1 M&M plus half the fro-yo.  On the other hand, if you just had a bunch of candies on the scale without any fro-yo, y/x would make perfect sense.
• This question imposes a time cost on missing the conceptual point, but still allows students to get the question right even if they don’t get the concept.  If you really can’t tell whether something’s direct variation, you can always compute y/x for both examples and see if you get the same answer both times.  (Sadly, that’s mainly what my state wants kids to learn about direct variation: that given a table of values, you should compute y/x and see if you get the same answer for all the ordered pairs.)  There will always be a few students who need to do this.  But it’s much faster to notice visually that this example is not direct variation because there’s a non-zero y-intercept: the fro-yo.  So I’m teaching what the state wants me to teach, but allowing students to use comprehension as a shortcut if they can see it.
• You can even visually interpret why non-direct variation scenarios give you different answers to  y/x.  In the top picture, 105/2 represents 1 M&M and half the fro-yo.  In the bottom picture, 108/6 represents 1 M&M and just one-sixth of the fro-yo.  Kids can see that it should be a smaller answer.
• Repeated practice with this question type causes students to associate m=(y2-y1)/(x2-x1) and y=mx+b with each other, and m=y/x and y=mx with each other.  I want that association.
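For the students who do fall back on dividing, the state’s procedure amounts to a one-line check.  Here’s a tiny sketch of it (the numbers are the fro-yo example; the function name and everything else are just illustrative):

```python
from fractions import Fraction

def is_direct_variation(points):
    """The state's check: compute y/x for every ordered pair in the
    table and see whether all the ratios agree."""
    ratios = {Fraction(y, x) for x, y in points if x != 0}
    return len(ratios) == 1

# Fro-yo example: (2 M&Ms, 105 g) and (6 M&Ms, 108 g).  The ratios
# 52.5 and 18 disagree, so it's not direct variation -- the non-zero
# y-intercept (the fro-yo) is the giveaway.
print(is_direct_variation([(2, 105), (6, 108)]))  # False
# Candies alone on the scale: every ratio is the per-candy weight.
print(is_direct_variation([(2, 105), (6, 315)]))  # True
```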

Soooooo, drumroll please, I now present my new linear functions unit outline:

• Linear dot patterns.
• The focus is on noticing which part of the pattern is repeating, and which part is staying the same, and what to do with that information.  We don’t use the words “slope” and “y-intercept” yet.
• Note: some dot patterns should be direct variation.  Direct variation patterns can even be tricky, with 2 parts of the pattern that both repeat.
• Linear story problems. These are your usual algebra story problems: a tree was 4 ft tall when it was planted, and it grows at a rate of 1.5 ft per year.  Students have to interpret key words that indicate initial value or rate of change.  Here we use y=mx+b, but not the words “slope” and “y-intercept”.  Instead, students use their own words to describe what the m and b mean.
• Linear graphing stories.  I lead off with some of Dan Meyer’s graphing stories (Kenneth Lawler’s bench-press and Adam Poetzel’s Height of Waist off Ground), focusing on how the starting value shows up on the graph as the y-intercept.  Then we do my own chubby bunny lesson (video below) which is more geared toward slope-intercept form.  Now that rate is showing up as steepness on a graph, and the starting amount is showing up on the y-axis, it’s okay to start calling m the “slope” and b the “y-intercept”.
• Slope from 2 points, conceptually, which explores the concepts shown in the fro-yo question above.  Here is my slope-from-2-points lesson.  I do it as a Pear Deck lesson now, but at some point will probably convert it over to Desmos now that Desmos has the classroom conversation toolkit.   After this I start giving questions like the fro-yo assessment question.
• Identifying proportional scenarios.  Given a scenario, can students identify it as a proportional or non-proportional situation?
• Here’s a screenshot of this question type:

The idea here is to combine “Graphing stories” with “Slope from 2 points, conceptually.”   The multiple choice graphing question scaffolds kids’ thinking, but it’s not just a crutch: it also improves learning by signaling to kids that the most important thing is to use common sense to tell whether the y-intercept would be zero or not.   So in the question above, would a pizza with zero toppings cost $0.00?  Compare this to the current Khan Academy exercise on identifying proportional situations:

• Slope-intercept form formalism, including all the goodies we need kids to know: graphing lines given in slope-intercept form, applying the formula for slope to random pairs of points, etc.  My kids need lots of focus on distinguishing between y=2x and y=2+x, and between y=2 and x=2.  Yours, too, right?  The formalism can also include some more advanced work on slope and direct variation.

I’m very hopeful that at each phase of this unit outline,  I’ll be able to ask quiz questions that check real comprehension of the meaning.  And for a certain type of kid, if it ain’t on the quiz (and the quiz after that, and…), then you never really taught it.

**Yes, that quiz question is part of my new adaptive paper-based quiz generator.**

## Burn the computer: personalized assessment on paper

How can I adaptively target content to students’ needs without restricting assessment items to the lame formats — like multiple choice — that computers are able to read?

Given the richness of the Math Twitterblogosphere, it’s pretty hard to share something new that makes a substantial contribution to our online community.  I think I have something worth sharing here.  It answers the question above.   In other words, it’s a way to get (most of) the advantages of adaptive learning systems without all the drawbacks.

Dan Meyer has chronicled those drawbacks in many blog posts, for example, this one.  I like the way commenter Dan Anderson summed up the limitations of letting a computer assess student work:

A big advantage with meatsacks [read: human teachers] over computers is the ability of a human to look at the work. Computers can only indirectly evaluate where the student went wrong; they can only look at the shadow on the ground to tell where the flyball is going. Meatsacks can evaluate directly where the student is going awry.

And yet computers do have an advantage: it’s very easy for them to keep track of what each student needs to work on and to deliver practice or assessment that’s targeted to those needs.  Can we have the best of both worlds?

I’ve created a system that can make a unique printable mini-quiz for each student, depending on what skill they need to be assessed on.  It draws on an item bank, categorized by skill, that can be as large as you want so questions won’t be repeated on successive retakes.  Quizzes also print in order by students’ position in the seating chart, so you can simply walk down each row and breezily hand each student a personalized quiz.  (Not every quiz should be personalized, though.  At least half the time, I pick the topic and everyone gets the same quiz.  Personalized quizzes are for efficient retakes.)
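In spirit, the generator’s core loop is simple.  Here’s a minimal Python sketch of the idea (the real tool is a spreadsheet; all names below are hypothetical):

```python
def build_quiz_stack(requests, item_bank, seating_order, n_items=4):
    """Assemble one personalized mini-quiz per student, in printing order.

    requests:      {student: chosen_skill}, pasted from the Google Form
    item_bank:     {skill: [question, ...]}, as large as you want
    seating_order: [student, ...] in walk-down-the-row order
    """
    stack = []
    for student in seating_order:
        skill = requests[student]
        stack.append((student, skill, item_bank[skill][:n_items]))
    return stack

# Printing this stack top to bottom matches the seating chart, so you
# can walk down each row handing out quizzes.
```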

The system is free, of course, and fully editable by anyone who knows how to work a spreadsheet.  Here’s how it works.  Each video is only a few seconds.

## tl;dr

Step 1: Students select the skill they want to be quizzed on.

Step 2: You display students’ current choices on-screen.  The screen updates live, so students who change their minds can see their most recent selection.

Step 3: You just copy and paste the Google Form responses into the quiz generator.

Step 4: With a simple CTRL + P, you print the entire class set.  It automatically prints in order by seating chart.

Step 5: Updating your seating chart is easy.  Changes to the seating chart automatically update the printing order of the quizzes.

Step 6: On the next quiz, increase the “quiz generator key” by 2.  This will change the questions given for each skill.

Step 7: Grading tool.  This speeds up your grading process by more than a factor of 10. Duuuuude.  A factor of 10.  (Turn the volume on to listen to this screencast).
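One plausible way to read Step 6: the key is an offset into each skill’s item bank, so bumping it rotates different questions into the quiz.  A hypothetical sketch (the spreadsheet’s actual indexing may differ):

```python
def pick_items(bank, key, n_items=4):
    """Select n_items questions for a skill, offset by the generator key.
    A bigger bank means retakes can go longer without repeats."""
    return [bank[(key + i) % len(bank)] for i in range(n_items)]

bank = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7", "Q8"]
print(pick_items(bank, key=0))  # ['Q1', 'Q2', 'Q3', 'Q4']
print(pick_items(bank, key=2))  # ['Q3', 'Q4', 'Q5', 'Q6']
```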

## the files you need

If this post gets decent page views, I’ll come back in and write some tech support pieces to explain how to use all the features: how to add assessment items with images (it’s not trivial to add images into a cell of a spreadsheet); how to link up the spreadsheets correctly; and how to toggle all the various options in the program.

## is this not overkill?

I’m pretty sure it’s not.  Let me just nail my 95 theses to this door here and see what you think.  Here goes:

1. Students should not grade their own formative assessments. An expert needs to grade them.
2. That expert should be a human, not a computer, for reasons given above.
3. But my grading time is already maxed out; any new assessments can’t mean more papers to grade.
4. Therefore, formative assessments must replace some of my existing grading load, not add to it.  They have to count in the gradebook.
5. But if they’re graded, they won’t really be formative unless students can do retakes and earn credit for improving.
• [My conclusion]  Formative assessments must be graded tests or quizzes that students can retake.

6. Should they be tests, or should they be quizzes? Many teachers use a formative assessment system with tests.  Here’s Dan Meyer’s version.  Let’s think about that. (If you think Dan’s is not the best example, let me know in the comments.  I don’t want a straw-ish man here.)
• Advantage: Tests can be comprehensive. Each test can assess the full range of skills covered so far.
• Big disadvantage: Tests aren’t very frequent. Ideally, students would be able to relearn something and then earn credit for demonstrating proficiency within a couple of days, instead of waiting for the next test.
• Of course, you could have a policy that students may always come in informally outside of class to demonstrate mastery, but many teachers find that students don’t really bother to come after school to do that.  In fact, I think if all the students who should come really did, it would overwhelm my ability to informally generate assessments after school.
• Here’s a bureaucratic reason that tests might be the wrong vehicle for formative assessments: in many districts, teachers don’t have control over the tests they give.  There tends to be more flexibility and independence around teacher-generated quizzes.
7. Okay, let’s consider using quizzes as formative assessments.  Advantages: they’re more frequent, and you can still use your district’s tests.   But lots of disadvantages, too.
• Advantage: Shorter, more frequent assessments are better for learning.  Or so says Marzano.
• Big disadvantage: How will retakes work?   If you have 10 skills this quarter, and you quiz a different skill each day, a student might need to wait up to 10 class days for the chance to retake the skill they’re ready to re-do. That’s unacceptably long.
• Logistical disadvantage:  Even though frequent assessment is good for learning, how can I squeeze quizzes into the last 10 minutes of class consistently without losing too much instructional time?  These quizzes need to be very quick to hand out (and to pass back, once they’re graded).
• Solution: you need a way to let students pick the quiz topic they want to retake, so that on some days, different students can take different quizzes. If this happens frequently, students can relearn and reassess in a tight loop lasting no more than a few days.
• Logistical problems:
• Imagine laying out 10 stacks of quizzes on the counter, or on your teacher desk, and inviting each row of students to come up and pick a quiz. If these quizzes are short (4 questions or so), the first students may be done by the time the last students have picked their quiz.
• Entering grades in the gradebook is a challenge. Try typing 120 grades into the gradebook, in up to 10 different columns, while overwriting old grades (with a grading program that has no “undo” button), without making a single mistake.  Not easy!
• In addition, managing answer keys is a huge problem here. Try grading 5 class sets of quizzes in 10 minutes per set, when you need to make 10 different answer keys and then flip between those 10 keys to check students’ quizzes.
8. So maybe my assessment tool is not overkill after all.
• Even if I want to assess a single skill, I can toggle an option to print out 2, 3, 4, or more different versions of the quiz, to reduce opportunities for cheating.
• There’s a tool to help organize grade entry.
• The tool to manage answer keys when grading a class set has now been created.  See screencast 7 above.
• Here’s how I handle passing back daily quizzes quickly: students turn their papers into a tray specific to their seating area (left, middle, or right).  When I grade the papers, I keep them grouped like that.  Then when I want to pass them back, they’re already grouped by seating area, and I’m not traversing the room 10 times to pass them all back.  I can pass back a class set in 1 minute.

## does this fix the real problem?

The root problem is that it’s hard to get kids to take the initiative and fill in their own skill gaps, even when you identify them.  Here’s Michael Pershan, over at his blog:

Second, I don’t think the feedback itself given in SBG [Standards-Based Grading] is helpful to kids. What’s the path from “You’re a beginner at solving linear equations” to actually learning to solve a linear equation? Some say that kids will go home and study linear equations more if you tell them they’re bad at them, which doesn’t fit with what I know about high school students. But maybe your kids are different than mine.

Not only do I agree with Michael here…I also designed this entire project as a response to his critique and Dan Meyer’s larger criticism of adaptive learning systems.

Here’s why, in my classroom, the system I’m presenting seems to avert the pitfall Michael’s pointing to.  After letting students choose their retake skill on the Google Form, I let students go to different stations with study guides for their chosen skills.  There’s something about signing up for a skill’s retake, and then immediately diving into that skill’s study guide (starting with circling the ones you got wrong last time) that seems to lead students to feel there’s a point in trying to relearn the skill.  That what’s being asked of them is a manageable bite.

And I don’t mind making everyone do a retake, even those who had 100’s on everything.  Short, frequent quizzes are good, thanks to the testing effect.

## Cri de coeur

I’ve never taken a coding class.  Millions of people out there could have done this better than I did.  But even if I felt like waiting a few more years for a good formative assessment solution, I don’t even see one on the horizon.  So I made my own.  In the last 3 years, I’ve created this quiz generator, written all the quiz items (most of which I’m not publishing here for test security), made the Khan Academy grading tool in the previous blog post, and tried to rewrite as many lessons as possible to make them better.  That’s a lot of time spent on tools and resources.  As a teacher I’d prefer my extra time be spent on the kids rather than the tools.

Relatedly, it’s not really my dream that lots of other teachers start to use this program.  My dream would be for assessment companies like MasteryConnect to include these features in their own programs so doofuses like me didn’t have to build their own quiz generator (and so teachers had a convenient platform for sharing quiz questions instead of writing them all from scratch).  But almost every edtech company out there is pushing for everything to be done online.  A paper-based assessment system with human graders just isn’t that interesting to them.

*Note about the title of this post: if you know me, you know I’ve worked hard to find a way to make Khan Academy a useful tool for my Algebra 1 students.  So I only want to burn the computer when it comes to real assessments.  As a practice tool, computerized exercises are fine with me.

I hope this is my last post on Khan Academy for a while.  It’s not that central to my teaching (I’d rather be writing about Desmos or something).  But I do think the tool I’ve designed to differentiate grading with Khan Academy may be useful to some folks out there.

I’ll link you to the screencasts for how it works, and then to a Google link for the actual spreadsheet file in Google Drive, but here’s the gist:

• I don’t let Khan Academy automatically recommend exercises for my students to practice.  I want to be in charge of selecting what kids work on.
• Khan’s main value is its memory quizzes, called “Mastery Challenges”, that check if a student has forgotten something we’ve learned (if they’ve forgotten a skill, it gets added back onto the student’s agenda).
• But different students need to be held to different standards of retention and accuracy, per IEP’s and observations.
• My new spreadsheet allows me to exempt some students entirely from these memory quizzes, and allows other students to earn full credit with reduced expectations of retention & accuracy.  Meanwhile, most students are still held to the full standard.
• In addition, students can be exempted from the hardest exercises on an assignment.
• On the opposite end, students who are really advanced can go ahead and earn extra credit by working on Khan Academy’s automatically recommended skills…but only after they have completed the assigned skills.
• Here are the screencasts for my new grading tool. One thing to know: this year’s improvements have made it a very easy system to maintain.
• This spreadsheet explains how the teacher-facing grading system works.  The student-facing end is different.  It’s a technique for hacking around Khan Academy’s automatic recommendations and instead forcing kids to do the exercises you want them to do.  You can find a description here at this blog post.  It’s different than using Khan’s “Teacher recommendation” tool.  That tool does not re-add a skill you’ve recommended when the student fails the retention quiz on the skill.  So if your goal in using Khan Academy is to focus on retention, their “teacher recommendation” tool is useless.
• Khan Academy’s content started weak, and some of you may not feel it’s ready for your use yet.  Depends on the course.  Apparently, AB and BC Calc were rewritten this summer, though I haven’t checked them out.  Algebra 1 is currently being rewritten, but those changes have not gone live yet.  In my spare time, I work as a volunteer to help them identify Algebra 1 improvements that need to be made.  There are many.  In August, I created a 30-page document suggesting changes to about 25% of the course.  We’ll see how many of my suggestions they take.

Future work: How can we add a feature that automatically pairs students up so each member of the pair has an assigned skill they’re able to teach the other?  All the required data is there in the spreadsheet, but I can’t figure out an algorithm that makes it work.
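For what it’s worth, a greedy first pass at that pairing is easy to sketch, though it won’t always find the most pairs; the full problem is maximum matching on the graph of mutually-teachable pairs.  Everything below is hypothetical:

```python
def pair_tutors(students):
    """Greedy sketch of the pairing idea.

    students: {name: (mastered_skills, needed_skills)}, both sets.
    A pair works when each member has mastered a skill the other needs.
    """
    names = list(students)
    pairs, used = [], set()
    for i, a in enumerate(names):
        if a in used:
            continue
        for b in names[i + 1:]:
            if b in used:
                continue
            a_mast, a_need = students[a]
            b_mast, b_need = students[b]
            if (a_mast & b_need) and (b_mast & a_need):
                pairs.append((a, b))
                used.update({a, b})
                break
    return pairs
```

Greedy can miss a better pairing (taking an early pair may strand two students who only match the partners already used), which is exactly why this feels harder than it looks in a spreadsheet.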

## Creating Intellectual Need for Multiplying Binomials

I’ve always needed a way to motivate the study of quadratics.  In the past, I’ve used materials from some of Dan Meyer’s 3-Acts: Super Mario to get students to realize that linear predictions are sometimes wrong, and Will It Hit the Hoop? to specifically focus students on quadratic graphs.  But even to my teacher ears, the jump to actual quadratics skills sounded cheap: “Now that we all agree quadratic functions are important, let me teach you to multiply things like (x+1)(x+2), because it’s really important for understanding parabolas, and I’ll explain why later.”  Groan.

I’d like to share a new lesson that I really liked because it:

• Naturally focuses students on area models of quadratic expressions;
• Shows that quadratics are the way to model something that’s speeding up or slowing down;
• Has a really low barrier to entry.

A low barrier to entry means students can dip their toes into this concept pretty easily at the start, without encountering hard math until they’ve played around a bit.  Before we go on, let’s check that this blog post is worth your time.  Here is the whole lesson I’m about to describe, fast-forwarded to be just 2 min long:

Still interested?  Cool.

The most direct way to (a) introduce area models of quadratic expressions, and (b) make it seem like quadratic expressions are useful is to pose a question that’s directly related to area.  Something like: Farmer Joe has 100 feet of fence and wants to make the largest sheep pen he can.  What length and width should he use for the pen?   [The answer is to model area as A = (L)(W) = (L)(100 – 2L) = 100L – 2L², graph the quadratic function, and find its vertex.]
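Under that bracketed model (which implies the fence covers only three sides, so W = 100 − 2L; that’s an assumption the formula forces), the vertex falls out of a two-line computation:

```python
# A(L) = 100L - 2L^2 is a downward-opening parabola, so its maximum
# sits at the vertex, L = -b / (2a).
a, b = -2, 100
L = -b / (2 * a)        # optimal length
W = 100 - 2 * L         # the one long side
A = a * L**2 + b * L    # maximum area
print(L, W, A)          # 25.0 50.0 1250.0
```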

In my experience, the Farmer Joe question doesn’t arouse much natural curiosity from students, and I think I know why: even students who naturally enjoy math puzzles have no inkling at the outset of their inquiry that their solution method will also help them understand the many faces of quadratics: projectiles, cars speeding up or slowing down, the famous handshake problem, etc.  It’s not until you’re well into the problem, and you see that the graph of area vs length looks like the flight path of a projectile, that you have a chance of recognizing how significant quadratics might be.  And by that point, you’ve already done enough hard math that you might be a bit tired or grumpy.  Learning quadratics should be like hiking to a beautiful vista: look at all the things I can see from up here!  The ahhhh experience of arriving at that vista needs to come sooner in the introduction or students end up feeling the way I felt on my last hiking trip in Montana: that’s a great view, but OMG I hate mosquitos–let’s get the #@%! out of here.

If you’re learning quadratics after learning linear functions, then the best way to notice you’re at a pretty awesome vista is to see that you’re looking at a pattern that’s accelerating.  An accelerating pattern is noticeably different from all the patterns we’ve done so far.  My class starts linear functions by looking at dot patterns like Fawn’s–specifically, we focus on ones that visually distinguish the y-intercept from the slope.  For example, looking at the pattern below, how many dots would be in Stage 10?  Stage x?

Students get really used to asking, “How fast is the pattern growing?” or “How many dots does it add each stage?”  We also do modified versions of Stacking Cups and Barbie Bungee to keep emphasizing that finding the rate is crucial for making a prediction.

In addition, the narrative in my room is that algebra is a way to predict the future by finding and expressing patterns.  For example, when we study direct variation early in the year, students actually make short videos of a prediction experiment in their own lives.

Okay, against that backdrop, I present students with the following lesson to try a prediction that finally breaks the constraint of using constant-rate patterns and motivates area models for polynomial multiplication.  Here’s the full, narrated video overview of the lesson.

Update 6/21/16: Here’s a Desmos activity to go with the visual “dot pattern” section at the end.

Room for improvement:  As I was transitioning students to (x+2)(x+3) and drill problems, I felt that even though I’d gotten students to the vista, I needed to do a better job of showing them everything they can see.  What if they think these area patterns only work when the first differences in the pattern go like +1, +3, +5, etc.?  I should show that if the first differences go +2, +6, +10, etc., then you can use 2x²…visually, just draw two of the x² patterns.  If you wanted +1, +2, +3, you could use (1/2)x² by drawing the x² dot pattern and then cutting it in half.  I should also make the connection to accelerating cars, psychology’s inverted U-shaped graph of stress vs. performance, Farmer Joe, and everything else that’s quadratic.   However, I think that’s best saved for the next lesson.  We teach roughly 90-minute blocks, and I like each block to have some conceptual development and some practice.  When you see kids every other day as it is, you need to squeeze some practice into each lesson.  So in the future, we’ll transition to (x+2)(x+3) and do drill just as shown above, but the following lesson I’ll take time to point out all the landmarks you can see from this vista.
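The first-differences claim is easy to check numerically.  This sketch rebuilds a dot-pattern’s stage counts from its differences (illustrative numbers only; the “half an x² pattern” for +1, +2, +3 is exactly the triangular numbers, x(x+1)/2):

```python
def stages_from_diffs(first, diffs):
    """Rebuild a dot-pattern's stage counts from its first differences."""
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out

# Differences +3, +5, +7 after a first stage of 1 give the x^2 pattern.
print(stages_from_diffs(1, [3, 5, 7]))    # [1, 4, 9, 16]
# Doubled odd differences (+6, +10, +14) give 2x^2.
print(stages_from_diffs(2, [6, 10, 14]))  # [2, 8, 18, 32]
# +2, +3, +4 give the triangular numbers -- the x^2 pattern cut in half.
print(stages_from_diffs(1, [2, 3, 4]))    # [1, 3, 6, 10]
```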

CCSS thought: I’m not sure how this lesson would play in a Common Core state.  Do you do arithmetic series in Algebra 1, and if so, do you do them before quadratics?  That would probably make this whole shtick seem kind of lame.  We don’t do CCSS here.  Our state test doesn’t really assess comprehension much, so I’m not sure how much this lesson will even improve my students’ standardized test scores.  My students have always been able to multiply binomials without experiencing an intellectual need for doing so.  But this lesson just felt so satisfying.  I hope it’s been worth your time to read about it.

Sharing the file: I’m happy to share a copy of the powerpoint file to anyone who’d like it.  Just ask in the comments.

I want to share 2 tricks I have come up with for making Khan Academy a really great homework system.  The first trick is very simple, and I’ll describe it here.  The second involves a really complicated spreadsheet, but now that I’ve made it I think you should be able to start using it almost immediately.

The adaptive aspect of Khan Academy makes it almost unusable for me in the classroom.  Because the adaptive software picks students’ next exercise, what the system picks may have nothing to do with what I’m teaching this week in class.  Now, KA does have a way for teachers to add an exercise to students’ dashboards: you “recommend” an exercise to a student, and it shows up on top of their agenda like this:

But here’s the thing: the way this feature is implemented actually defeats the main advantage KA offers over traditional pencil-and-paper homework.  What is that advantage?  While it’s terrible for teaching new concepts to students, Khan Academy is pretty great at detecting when they’ve forgotten something.  The system includes a built-in generator of adaptive quizzes (called “mastery challenges” in Khan parlance) that check whether a student still remembers something she may have learned a few months ago.

So here’s the problem with the teacher recommendation feature of Khan Academy: yes, it lets you add an exercise to the top of a student’s agenda — but once the student achieves that initial success, she no longer sees that exercise on her dashboard, even if she later shows that she has forgotten the skill and needs to re-do it.

Here’s a really simple trick for getting around this: first, have your students add their own usernames to their list of “coaches”.  Once they do this, you can post a link to a coach report that is filtered for just the exercises you want them to do.  For example, here is a link: http://bit.ly/1SzQw8F.  You will not be able to access the link unless you have a Khan Academy account and have at least 1 “student”; if you don’t have any students on KA, just add your own username as your coach, and you’ll be able to view the link.  I’ve found that Bitly is a good way to post the link because the length of the links overwhelms my school’s website hosting platform.  Students will click on that link and pull up a report that shows their progress on only those exercises.

All non-assigned exercises are filtered out, and the report updates (with a browser refresh) as soon as a Mastery Challenge changes the skill level in any exercise.

In my class, I post 3 links per week: 20-point exercises, 4-point ones, and 2-point ones.  There are usually about 4 exercises in the 20-point category per week.  These are new exercises, and they are the core that I need everyone to learn for the week.  The 4-point exercise link is cooler, from a teacher perspective, because it contains every 20-point exercise I’ve ever assigned the class.   If a Mastery Challenge shows that you have forgotten a skill, then that skill’s bar may turn gray on the coach report for 4-point exercises.  In that case, you’d need to go back and re-do the skill from scratch before trying to level up on it again.  That’s really where Khan Academy pays off: it has this great built-in detector of student retention and forgetting.  And, increasingly, it has high-quality practice on skills your students should already have learned through your lessons.

The 2-point exercises are challenging ones I’ve selected for ambitious students to try if they’re done with everything.  They’re related to what we’re learning in class but go beyond our expectations.  Students who complete the 2-point exercises can earn extra credit by working on exercises automatically recommended by Khan Academy on the student dashboard.
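To make the three-tier scheme concrete, here’s a hypothetical sketch of the scoring pass over the downloadable report (the tier names, level strings, and the mastery rule are all my assumptions for illustration; the real tool is a spreadsheet):

```python
POINTS = {"new": 20, "review": 4, "challenge": 2}  # the 20/4/2 tiers above

def weekly_score(report_rows, categories):
    """Tally each student's points from the progress download.

    report_rows: (student, exercise, level) rows from the report
    categories:  exercise -> 'new' | 'review' | 'challenge'
    Credit goes only to skills currently at mastery, so a skill that
    turned gray after a failed Mastery Challenge earns nothing until
    it's re-done from scratch.
    """
    scores = {}
    for student, exercise, level in report_rows:
        if level == "mastered":
            scores[student] = scores.get(student, 0) + POINTS[categories[exercise]]
    return scores
```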

So that’s the simple trick.  In a later post, I’ll describe how to use the spreadsheet I’ve designed to assign points for different exercises based on the downloadable report in the top right corner of the “Student Progress” report on Khan Academy.  [That post is here.]  Perhaps I shouldn’t say this, but I do hope at some point some KA people actually read these ideas. There’s no reason why it should take so much hacking to expose (what I think is) their site’s main benefit to students.

11/9/15 Update: For those interested, one of the KA employees in charge of the Mastery Challenges system describes the way they work here.

10/25/16 Update: I significantly changed the description of the point allocations (20 points, 4 points, 2 points) to match what I now do.  It’s been an improvement.  I also deleted the description of the 2-week cycle (Week A and Week B) of each assignment, because I now require students to go from grey to dark blue in a single week.

This is a follow-up to Dan Meyer’s twitter conversation a few days ago about cognitive load theory:
My thoughts: I suspect that the difference between germane and non-germane cognitive load can be detected on an fMRI machine.  You’d first need to see what parts of the brain light up when a student is thinking about something germane.  Then just check whether the activity in question makes those (germane) areas light up more, or whether it makes those light up only a little and instead mostly consumes the region of your brain that helps you interpret a cumbersome computer interface.

This kind of stuff is not that far-fetched.  For example, here is an artificial intelligence program using nothing but fMRI input to predict what algebraic steps a student is taking.  So not only is it determining whether the student is thinking about something germane, it’s actually identifying exactly what the student is thinking…and (here’s the kicker), often BEFORE the student has actually recorded those steps on the computer screen.  Basically: mindreading.

And for some context, here is the researcher’s description of what the split screens represent in the video, and here is a link to the research project.

## Teaching for Understanding vs. Teaching for Reasoning Skills

In my last post, Dan Meyer and I discussed whether having students make and test their own conjectures can lead to poor long-term content learning.  I think it easily can, not because of poor teaching but because of humans’ limited working memory.  Dan’s reply captured the inquiry-learning perspective perfectly, and I made a first pass at replying in the comments but promised to reply with more later.  Here goes:

### Conceptual Understanding is Fragile

Dan writes:

If calculating LCMs were my highest goal here, I would turn to other strategies, including lecture and definition. But calculating LCMs is secondary to conjecturing and testing your conjectures. That’s the higher goal here.

Can you tell me what help you see direct instruction offering me there?

In short, Dan seems to be saying that lecture is great for teaching computational skills, but that he’s willing to sacrifice some efficiency in computational learning in order to develop students’ reasoning skills (e.g., testing conjectures).  That’s great, but it skirts the question of improving conceptual understanding, which is a totally separate dimension.  Even students with good reasoning skills (the habits of mind that lead to productive inquiry) and strong computational fluency can have poor conceptual understanding.  This happens when they regularly go through the instructional sequence Dan lays out: inquiry until students make the desired discovery, followed by notes and drill practice on the skill.  The assumption underlying this approach is that once students have made a discovery for themselves, they understand it deeply enough to move on to application and drill practice: that having discovered a concept naturally leads to having a strong, durable conceptual understanding of it.

In reality, conceptual understanding is fragile.  Students need practice retrieving the reasons for their conclusions in different contexts to establish them in long-term memory and connect them to the related conceptual schemas that are already there.  The mere fact of having made a discovery doesn’t guarantee I’ll remember the reasons for it tomorrow, nor that I’ll think to transfer that understanding to related situations.  I still need lots of practice explaining “why?” and “how would it be different if…?”, as well as “would the same pattern apply in this situation?” and “how could you represent that another way?”

Too often in inquiry lessons (including my own), this practicing of the reasons is relegated to the whole-class debrief, in which small groups describe their thinking while sharing out from the investigation.  I know this is an attempt to provide conceptual practice, but it’s nowhere close to what’s needed.  For any given “how would it be different if…” question, at least 50% of students probably don’t understand, but because they successfully made the desired discovery, they (and the teacher, myself often included) accept it.  How much formative assessment do teachers do in this debrief phase?  If incorrect reasons pop up in the discussion, do they just let another student speak up to correct the record, or do they stop and reteach that reason to mastery?

In short, the discovery is not the lesson.  It’s just the set-up for the real lesson, which is when we rehearse the reasons for what we’ve concluded and how it’s connected to everything else we know.

### Dan’s Challenge: How Can Direct Instruction Teach Conjecture-Making?

To go back to Dan’s comment,

But calculating LCMs is secondary to conjecturing and testing your conjectures. That’s the higher goal here…Can you tell me what help you see direct instruction offering me there?

My short answer is that since cognitive load is the issue, you’d want to instruct students directly in any techniques they could use to reduce their own cognitive load: making organized lists, searching the problem space in a systematic way, and so on.  These are the sorts of things we encourage students to do anyway.
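To make “making organized lists” concrete for the LCM task: a student who lists the multiples of one number in order and stops at the first multiple shared with the other number is doing an organized search rather than guess-and-check.  A minimal sketch of that strategy (the function name is mine, purely illustrative):

```python
def lcm_by_listing(a, b):
    """Find the LCM of two positive integers by listing multiples of a
    in order and stopping at the first one that is also a multiple of b:
    an organized search of the problem space, not guess-and-check."""
    m = a
    while m % b != 0:
        m += a
    return m

print(lcm_by_listing(2, 10))  # 10
print(lcm_by_listing(4, 6))   # 12
```

The point of the organized list isn’t efficiency; it’s that the student can see every candidate they rejected and why, which is exactly the raw material for a conjecture.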

Secondly, I’d say that direct instruction of conceptual understanding (the sort I referred to as the “real lesson” above) probably helps students make and test conjectures.  Better understanding leads them to ask better questions during the inquiry phase.  I haven’t done a lot of reading on this, but one paper I remember from grad school shows that people with poor content knowledge tend to ask shallower or less relevant questions.  Here’s the abstract:

Questions should emerge when a person studies a device (e.g., a lock) and encounters a breakdown scenario (“the key turns but the bolt doesn’t move”). Participants read illustrated texts and breakdown scenarios, with instructions to ask questions or think aloud. Participants subsequently completed a device-comprehension test, and tests of cognitive ability and personality. Deep comprehenders did not ask more questions, but did generate a higher proportion of good questions about plausible faults that explained the breakdowns. An excellent litmus test of deep comprehension is the quality of questions asked when confronted with breakdown scenarios.

I’m sure there’s better research out there on this topic, but the main idea would be that students at a low Van Hiele level are inhibited from making good conjectures because they are literally unable to perceive the deep features of the scenario.  Wouldn’t those students develop their conjecture-making and testing abilities more if they had better conceptual understanding?

Please note that I’m not an advocate for direct instruction, just for attending to what students are really thinking, even when they appear to have made the right discovery.  If I’m an advocate for anything, it’s for paying attention to cognitive factors in learning, not because they’re more important than motivational ones, but because I think the MTBoS community sometimes gives them short shrift.  So cognitive load is my thing, much more than direct instruction.

### Have They Encoded the Wrong Rule?

See Dan’s comment in my previous post for the context, but no, I don’t think they have.  When you learn the rule that for numbers like 2 and 10 (in which one is a multiple of the other), the LCM is just the larger number, you’re actually learning two separate facts: the rule, and when the rule works.  Learning the first and not yet knowing the second doesn’t mean you have the rule wrong; it means you’re ready to make your next discovery.
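Those two facts can even be separated explicitly and checked against each other.  Here’s a minimal sketch in Python (my choice of language and function names, not anything from the original discussion) showing that the rule gives the right answer exactly on its domain:

```python
from math import gcd

def lcm(a, b):
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

def rule_applies(a, b):
    """Fact 2, when the rule works: one number is a multiple of the other."""
    return a % b == 0 or b % a == 0

def rule_gives_lcm(a, b):
    """Fact 1, the rule itself: the LCM is just the larger number."""
    return lcm(a, b) == max(a, b)

# The rule succeeds exactly when it applies:
for a, b in [(2, 10), (3, 12), (4, 6), (8, 10)]:
    assert rule_applies(a, b) == rule_gives_lcm(a, b)
```

A student who has learned `rule_gives_lcm` but not yet `rule_applies` hasn’t encoded a wrong rule; they just haven’t yet discovered its boundary.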