In my last post, Dan Meyer and I discussed whether having students make and test their own conjectures can lead to poor long-term content learning. I think it easily can, not because of poor teaching but because of humans’ limited working memory. Dan’s reply captured the inquiry-learning perspective perfectly, and I made a first pass at replying in the comments but promised to reply with more later. Here goes:
Conceptual Understanding is Fragile
If calculating LCMs were my highest goal here, I would turn to other strategies, including lecture and definition. But calculating LCMs is secondary to conjecturing and testing your conjectures. That’s the higher goal here.
Can you tell me what help you see direct instruction offering me there?
In short, Dan seems to be saying that lecture is great for teaching computational skills, but that he’s willing to sacrifice some efficiency in computational learning in order to develop students’ reasoning skills (e.g., testing conjectures). That’s great, but it skirts the question of improving conceptual understanding, which is a separate dimension altogether. Even students with good reasoning skills (the habits of mind that lead to productive inquiry) and strong computational fluency can have poor conceptual understanding. This happens when they regularly go through the instructional sequence Dan lays out: inquiry until students make the desired discovery, followed by notes and drill practice on the skill. The assumption underlying this approach is that once students have made a discovery for themselves, they understand it deeply enough to move on to application and drill practice: that having discovered a concept naturally leads to a strong, durable conceptual understanding of it.
In reality, conceptual understanding is fragile. Students need practice retrieving the reasons for their conclusions in different contexts, both to establish those reasons in long-term memory and to connect them to the related conceptual schemas already there. The mere fact of having made a discovery doesn’t guarantee I’ll remember the reasons for it tomorrow, nor that I’ll think to transfer that understanding to related situations. I still need lots of practice explaining “why?” and “how would it be different if…?”, as well as “would the same pattern apply in this situation?” and “how could you represent that another way?”
Too often in inquiry lessons (including my own), this practice with the reasons is relegated to the whole-class debrief, in which small groups describe their thinking while sharing out from the investigation. I know this is an attempt to provide conceptual practice, but it’s nowhere close to what’s needed. For any given “how would it be different if…?” question, at least half the students probably don’t understand, but because the class successfully made the desired discovery, they (and the teacher, often including me) let it pass. How much formative assessment do teachers do in this debrief phase? If incorrect reasons pop up in the discussion, do they just let another student speak up to correct the record, or do they stop and reteach that reason to mastery?
In short, the discovery is not the lesson. It’s just the set-up for the real lesson, which is when we rehearse the reasons for what we’ve concluded and how it’s connected to everything else we know.
Dan’s Challenge: How Can Direct Instruction Teach Conjecture-Making?
To go back to Dan’s comment,
But calculating LCMs is secondary to conjecturing and testing your conjectures. That’s the higher goal here…Can you tell me what help you see direct instruction offering me there?
My short answer is that since cognitive load is the issue, you’d want to instruct students directly in any techniques they could use to reduce their own cognitive load: making organized lists, searching the problem space systematically, and so on; the sort of things we encourage students to do anyway.
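To make the “organized search” idea concrete: here is a minimal sketch, in Python, of the strategy a student might use to find an LCM by listing multiples in order rather than guessing at random. (The function name and structure are mine, not Dan’s; this is just one illustration of a low-load, systematic search.)

```python
def lcm_by_organized_list(a, b):
    """Find the LCM the way a student with an organized list might:
    step through multiples of the larger number in order and stop at
    the first one that the smaller number also divides."""
    larger, smaller = max(a, b), min(a, b)
    multiple = larger
    while multiple % smaller != 0:
        multiple += larger  # next entry in the ordered list of multiples
    return multiple

print(lcm_by_organized_list(4, 6))   # 12
print(lcm_by_organized_list(2, 10))  # 10
```

The point isn’t the code, of course; it’s that walking an ordered list is exactly the kind of technique that offloads working memory, and it can be taught directly before the inquiry begins.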
Second, I’d say that direct instruction of conceptual understanding (the sort I referred to as the “real lesson” above) probably helps students make and test conjectures. Better understanding leads them to ask better questions during the inquiry phase. I haven’t done a lot of reading on this, but one paper I remember from grad school shows that people with poor content knowledge tend to ask shallower or less relevant questions. Here’s the abstract:
Questions should emerge when a person studies a device (e.g., a lock) and encounters a breakdown scenario (“the key turns but the bolt doesn’t move”). Participants read illustrated texts and breakdown scenarios, with instructions to ask questions or think aloud. Participants subsequently completed a device-comprehension test, and tests of cognitive ability and personality. Deep comprehenders did not ask more questions, but did generate a higher proportion of good questions about plausible faults that explained the breakdowns. An excellent litmus test of deep comprehension is the quality of questions asked when confronted with breakdown scenarios.
I’m sure there’s better research out there on this topic, but the main idea is that students at a low Van Hiele level would be inhibited from making good conjectures because they’re literally unable to perceive the deep features of the scenario. Wouldn’t those students develop their conjecture-making and testing abilities more if they had better conceptual understanding?
Please note that I’m not an advocate for direct instruction, just for attending to what students are really thinking, even when they appear to have made the right discovery. If I’m an advocate for anything, it’s for paying attention to cognitive factors in learning, not because they’re more important than motivational ones, but because I think the MTBoS community sometimes gives them short shrift. So cognitive load is my thing, much more than direct instruction.
Have They Encoded the Wrong Rule?
See Dan’s comment on my previous post for the context, but no, I don’t think they have. When you learn the rule that for numbers like 2 and 10 (where one is a multiple of the other) the LCM is just the larger number, you’re actually learning two separate facts: the rule itself, and when the rule applies. Learning the first without yet knowing the second doesn’t mean you have the rule wrong; it means you’re ready to make your next discovery.
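The two facts are easy to see side by side. A quick sketch (using Python’s standard `math.gcd` to compute true LCMs; the comments mark which fact is which):

```python
from math import gcd

def lcm(a, b):
    """True least common multiple via the identity lcm(a, b) = a*b / gcd(a, b)."""
    return a * b // gcd(a, b)

# Fact 1, the rule: when one number is a multiple of the other,
# the LCM is just the larger number.
assert lcm(2, 10) == 10
assert lcm(3, 12) == 12

# Fact 2, the rule's domain: for 4 and 6, neither is a multiple of
# the other, and the larger number (6) is NOT the LCM.
assert lcm(4, 6) == 12
```

A student who has learned Fact 1 but not Fact 2 hasn’t encoded a wrong rule; they’ve encoded a correct rule whose boundary they haven’t discovered yet.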