Questioning Gagné and Bloom’s Relevance
Several weeks ago, I had the pleasure of meeting Lauren Hirsh. Lauren needed to do an informational interview for her masters program, and I needed some new profile pictures. (The pictures turned out terrific; I’m sure I got the better end of the bargain.)
During the interview, Lauren asked some very thoughtful questions about the relationship between theory and practice.
I made this comment as part of the interview:
It’s easy to get caught up in theories without really looking at whether the research support is there. Gagne’s Nine Events of Instruction might be helpful as a designer, but they aren’t really supported; you can skip everything but practice with feedback without much change in results. Learning styles (like visual, auditory, kinesthetic) have much less effect on learning results than other factors, but we often focus on them heavily. Bloom didn’t have any research for his taxonomy, but I still find it useful for my own planning; I just don’t pretend there’s a research-based argument for classifying a verb as application instead of analysis.
As a follow-up question, she asked where I learned the above about Gagné and Bloom.
Gagné’s Nine Events
Besides criticisms like Gagne’s Nine Dull Commandments, the post that really made me rethink Gagné was Tom Werner’s Whatever You Do, Don’t Drop Practice (now only available as an archive via the Wayback Machine). That post summarized research on what happens when you remove elements of instruction.
From Tom’s summary:
The researchers were interested in which of some of Gagné’s nine events of instruction were most powerful in promoting learning: objectives, information, examples, practice with feedback, or review.
The researchers pretested 256 college students enrolled in a computer literacy course and divided them into low, medium, and high blocks on the basis of the pretest scores.
They then divided each block of students into six groups and randomly assigned each group to a different version of an instructional program:
- Full program (objectives + information + examples + practice with feedback + review).
- No objectives (~~objectives~~ + information + examples + practice with feedback + review).
- No examples (objectives + information + ~~examples~~ + practice with feedback + review).
- No practice (objectives + information + examples + ~~practice with feedback~~ + review).
- No review (objectives + information + examples + practice with feedback + ~~review~~).
- Information only (~~objectives~~ + information + ~~examples~~ + ~~practice with feedback~~ + ~~review~~).
In a nutshell, each of the four groups that had practice with feedback scored significantly higher on a posttest than the two groups that did not have practice with feedback.
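As a rough illustration of the study's setup (this is my own sketch, not the researchers' actual procedure), blocking students by pretest score and then randomly spreading each block across the six program versions might look like this:

```python
import random

# Hypothetical sketch: block 256 students into low/medium/high thirds by
# pretest score, then randomly assign each block across the six versions.
# The version names follow the summary above; everything else is assumed.

VERSIONS = [
    "full program",
    "no objectives",
    "no examples",
    "no practice",
    "no review",
    "information only",
]

def assign_conditions(pretest_scores):
    """Split students into three blocks by pretest score, then randomly
    distribute each block across the six instructional versions."""
    ranked = sorted(pretest_scores, key=pretest_scores.get)  # low to high
    third = len(ranked) // 3
    blocks = [ranked[:third], ranked[third:2 * third], ranked[2 * third:]]

    assignments = {}
    for block in blocks:
        random.shuffle(block)  # random assignment within each block
        for i, student in enumerate(block):
            assignments[student] = VERSIONS[i % len(VERSIONS)]
    return assignments

scores = {f"student{i}": random.randint(0, 100) for i in range(256)}
assignments = assign_conditions(scores)
```

Blocking like this ensures each of the six groups contains a comparable mix of low, medium, and high pretest scorers, so posttest differences can be attributed to the instructional version rather than to incoming ability.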
The study itself doesn’t call out Gagné quite as much as this summary implies, but it should certainly make us pause before insisting on following that formula exactly. It’s also strong evidence that active practice with feedback really does make a difference. If a client asks for an information dump, this research can help you argue for a design that includes practice.
Bloom’s Taxonomy
At a previous job, we had regular quasi-formal professional development training for the instructional designers, provided by other members of the team. One person planned a simple game to reinforce Bloom’s taxonomy. The group was divided into two teams, and one person at a time from each team came up to the front and faced each other across a table. The “game show host” read a “Bloom verb” off an index card and the contestants slapped the table to see who could classify it first.
What would you guess happened? Think about a verb like “Determine”: where would you classify it?
The game almost immediately devolved into arguments over where the verbs belong. The poor activity leader had consulted a single list and didn’t even consider that different lists categorize verbs differently. Sometimes a single list classifies verbs in different places. This Bloom verb list, for example, classifies “identify” as both Knowledge and Comprehension; another list puts “compare” and “contrast” both in Analysis and Evaluation, depending on whether you use them together or separately.
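To see how quickly the arguments start, here’s a toy sketch. The two verb lists below are invented for illustration (not the actual published lists mentioned above), but they disagree in exactly the way real lists do:

```python
# Two illustrative "Bloom verb" lookup tables. Note that a single list can
# place one verb in two levels, and two lists can disagree outright.

list_a = {
    "identify": {"Knowledge", "Comprehension"},  # one list, two levels
    "determine": {"Application"},
    "compare": {"Analysis"},
}

list_b = {
    "identify": {"Knowledge"},
    "determine": {"Evaluation"},
    "compare": {"Analysis", "Evaluation"},
}

def disputed_verbs(a, b):
    """Return the verbs the two lists classify differently."""
    return {verb for verb in a.keys() & b.keys() if a[verb] != b[verb]}

print(disputed_verbs(list_a, list_b))  # every shared verb is disputed
```

With no research-based ground truth, there’s no principled way to resolve any of those disputes; the game-show arguments are baked into the data.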
How do you definitively settle an argument like that? Do you have research support for putting a verb in one category or another? Neither did Bloom. As far as I know, Bloom’s taxonomy was meant to be a theoretical framework and was not based on any sort of research. (If I’m wrong on this, please direct me to the research; I’d love to be corrected!)
This piece on Problems with Bloom’s Taxonomy asserts the lack of research:
The categories or “levels” of Bloom’s taxonomy (knowledge, comprehension, application, analysis, synthesis, and evaluation) are not supported by any research on learning. The only distinction that is supported by research is the distinction between declarative/conceptual knowledge (which enables recall, comprehension, or understanding) and procedural knowledge (which enables application or task performance).
That article outlines how the taxonomy is invalid, unreliable, and impractical, as well as offering alternatives more focused on performance.
I will admit, as I did in the interview, that I do still use “Bloom verbs” for writing objectives. I keep those verb lists handy because they help trigger ideas and focus on more active, higher level objectives. I’ve done objectives this way so long that it’s force of habit as much as anything else. I suspect I could get comparable or better results using the “Content by Performance” option in the article above. I’ve been in work environments that were really invested in Bloom, and I admit I’m not sure how much I’d fight that battle. If it’s a choice between fighting about Bloom or fighting to have realistic practice, I’ll choose to spend my efforts fighting for practice and context.
Gagné wasn’t emphasized as much in my education training, so I don’t have as much to unlearn there. The research above simply reinforces the need for practice with feedback. As the authors of the study point out, some of these elements may have value beyond improving posttest scores. Objectives can be useful in the development process. Personally, I want those objectives when I’m working with SMEs to help focus them on what’s most important, because that will improve the end result.
What about you? Do you use Gagné and Bloom, or have you rejected them in favor of something more relevant for your own work? Or am I way off base here, missing significant research that supports these ideas?
32 thoughts on “Questioning Gagné and Bloom’s Relevance”
Thanks Christy. Actually, I am interested in using the SOLO taxonomy for assessment items in order to provide formative feedback to students, because in my opinion Gagné and Bloom are best suited to summative assessment.
Thanks all for the discussion on Gagné and Bloom, but my question as a researcher is this: are Bloom and Gagné’s classifications and learning designs still unexplored, and are they related to the SOLO taxonomy? Please comment.
I wrote this over 10 years ago, and I haven’t seen any better research support for Gagne or Bloom in that time. If you read through the other comments here, you’ll see that sometimes people claim Bloom is based on research… but then never provide the evidence to support their claims. You can use them if they help you structure your learning design, but they aren’t based in research.
The SOLO taxonomy isn’t a popular one. It seems to be focused only on writing assessment questions, not on designing the learning experiences themselves.
I don’t remember the publications I used, as I taught didactics at a university. But I’m sure you can find internet sources which describe the processes Bloom used to develop his insights. He was chairman of a committee which had to study the diverse methods in schools, universities, and exams. It was provoked by the feeling that the USA was losing ground after Sputnik was launched.
He was the head of a committee with the APA, that’s true. That doesn’t make it research though, does it? The descriptions of the process the committee used are generally vague, but the few sources that are specific say that they collected a bunch of existing learning objectives and then classified them. See David Moore’s Reconsidering Bloom’s Taxonomy of Educational Objectives, Cognitive Domain, for example.
“The original goal of the group was to frame a theoretical system which could facilitate communication between educators.”
Note two points in the above quote. First, it’s a THEORETICAL system, not a heavily research-based one. Second, the point is to help educators talk to each other, not to improve learning outcomes.
“From here, the researchers proceeded by collecting a large number of educational objectives. Each of these written objectives was examined in turn to determine what intended behavior and what content was indicated. The intended behaviors were then grouped into divisions which the researchers felt were implicit in their nature. The result was a hierarchy of six levels…”
Moore does call the committee “researchers,” but I think we should be cautious about claiming that their resulting taxonomy is “supported by research.” Yes, they “researched” a bunch of existing objectives–probably hundreds of them. They looked at those objectives in isolation from lesson plans or educational results. They weren’t looking at tests, examinations, or other assessments–just the objectives, probably self-reported by educators. They classified the objectives based on their own personal perception, which makes for a nice theory and tool for discussion among educators.
Let’s do our learners a service and not pretend that this is about improving learner competence or performance though. There’s a lot of mythology around Bloom’s taxonomy. We should act like professionals and not perpetuate unsupported myths.
Bloom’s taxonomy gave me many insights into how to build a conceptual framework for my study books. I was also inspired by Bloom for exercises and tests.
To my knowledge, Bloom developed his taxonomy from hundreds (?) of observations at all levels from high school to university. He studied tests, examinations, and so on.
Bloom’s taxonomy can provide inspiration. I use it for that too, as I indicated. I think it’s problematic when it’s used with a really rigid approach.
Can you provide a citation that talks about Bloom’s process of observations that led him to the taxonomy? Everything I’ve seen has led me to believe it was just personal experience and not formal observation and documentation.
Reblogged this on Blogcollectief Onderzoek Onderwijs and commented:
This post by Christy Tucker, reblogged here, points out some of the problems I have also had with Bloom’s taxonomy and similar recipes (e.g., Gagné’s). We cannot blame good old Bloom for the abuse of his model, but it’s good to see someone emphasizing the lack of research support behind his theory. It’s fine, I think, to use it as a checklist for the design of a lesson plan, but any attempt to classify skills and learning activities as things of a ‘lower’ or ‘higher’ order are bound to fail, as Christy Tucker shows in her post.
Thank you for the reply. I did find one article that preached about using Gagne in a different setting (Khadjooi, Rostami, and Ishaq, 2011). The authors used it to teach junior physicians how to insert a peritoneal drain. Yet they did not give any results as to how the class did. The authors also noted that Gagne’s nine steps have been used in the military, which is where I teach.
I agree that they should be used as a guide, but I have a feeling that I will butt heads with those who think that they are hard and fast rules.
I have already chosen my dissertation topic, but testing his theory and providing actual results may be something for later publication. I find it hard to believe that his theory has been around this long and no one has done any actual testing.
Khadjooi, K., Rostami, K., & Ishaq, S. (2011). How to use Gagne’s model of instructional design in teaching psychomotor skills. Gastroenterology & Hepatology from Bed to Bench, 4(3), 116-119.
I just did a quick search on ERIC (well, the EBSCO version that’s currently open during the shutdown) and didn’t find anything really promising. There was one article in a Turkish journal that might be relevant to your work. When Congress gets its act together and opens the government again, it will be available here: http://www.eric.ed.gov/contentdelivery/servlet/ERICServlet?accno=EJ886449
Assessing the Effects of Using Gagne’s Events of Instructions in a Multimedia Student-Centred Environment: A Malaysian Experience. Neo, Tse-Kian ; Neo, Mai ; Teoh, Belinda Soo-Phing
You’re probably going to have to do the same thing that Clark did and try to find research on each individual element, regardless of whether the authors specifically mention Gagne or not.
Good luck with your research, and I hope that some day you do publish something that systematically tests the effectiveness of Gagne’s model. It’s crazy to think no one has done it yet.
Thanks again. Once I knock out my current dissertation, I think this could be an area for further research. It is sad, but I brought up this issue with another instructor who is certifying me as an evaluator of other instructors. She would have no debate and showed no concern about the lack of research behind Gagné’s nine steps.
It is also a shame that there are PhD holders echelons above me who insist on these nine steps, and have not performed any research (or at least published anything for our consumption). If our job is to teach young soldiers who are to go and defend our nation, I want to be sure they understand the concepts they are taught. If the how is not fully addressed and the why behind the how is not established, then we are just walking around in the dark, hoping that the light bulb will turn on in the student’s head. Sorry to vent here.
It will be just under two years before I complete my current dissertation, so I guess it falls on me to follow up on this, since no one else will do it.
I hope you are still monitoring these posts. At any rate, the reference for the study about the one essential Gagné step is noted below (Martin, Klein, and Sullivan, 2006). It took me a while to find it, and I have not yet read it, but judging by the title, would it suggest that it only applies to e-learning models?
I want to trash this hippie garbage as well, since many of my fellow instructors have already bought into it, and if it indeed is not supported by research, it needs to be abandoned. If it is supported by research, then I suppose I am left with no alternative but to buy into it.
The biggest problem for me is that as a doctoral student, we are chronically told to deal with research that is no more than five years old. Not only is Gagne well beyond that, but Chapter 10 of his book is nothing more than a literature review; it is not research. I am searching high and low for research to support it, but have found none to this date.
Martin, F., Klein, J., & Sullivan, H. (2006, October 26). The impact of instructional elements in computer-based instruction. British Journal of Educational Technology, 38(4), 623-636; doi: 10.1111/j.1467-8535.2006.00670.x
Yes, Andrew, I’m still here and monitoring the discussion. It’s always rewarding to see that people still find my older posts useful.
You are correct that the original research that led me to question Gagne was regarding online learning, so it can’t automatically be applied to on-ground training.
Donald Clark did a great summary of research on the elements of Gagne here (which he mentioned in a comment above): http://bdld.blogspot.com/2011/08/look-behind-robert-gagn-nine-steps-of.html
To quote Clark: “While some think the Nine Steps are iron clad rules, it has been noted at least since 1977 (Good, Brophy, p.200) that the nine steps are ‘general considerations to be taken into account when designing instruction. Although some steps might need to be rearranged (or might be unnecessary) for certain types of lessons, the general set of considerations provide a good checklist of key design steps.’”
There is support for some of the events, as Clark points out. Gaining attention is useful, and it’s something we often ignore in the “let’s start with a boring list of super-formal objectives” pattern of many courses. Telling learners what to expect from the course is good, but we don’t need to tell them that with our carefully crafted objectives. Giving learners a “What’s in it for me?” meets the standards of that event without boring them to tears before we even begin.
I already weighed in on Gagné and Bloom and took on the third leg of the “follow-the-Leader” school of Instructional Design … it’s informative about all three: http://bit.ly/oYboPY.
David, I did some hunting around on your blog, but I’m not sure which post you’re referring to. I only see one mention of Gagné and no mentions of Bloom even after doing a search. Could you please provide a direct link to the relevant post, rather than to a page of posts? That would help me find it. Thanks.
Christy, just saw this. I use Gagné to some extent, believing that it helps to engage them emotionally as well as cognitively in an intro, providing a model (concept) behind the why and how (cog psych research), show them contextualized examples (cf work by Sweller, also Shoenfeld), as well as practice. I think Bloom’s is too complex (as Sugrue suggests and you pointed to), working on a project right now for a simpler model ala Van Merriënboer with only two levels: the knowledge you need and the complex problems you apply it to solve (which actually has two different forms).
I need to spend some more time with Van Merriënboer’s work, as that does seem promising. I’m glad to hear you’re getting some value from Gagné. Certainly things like emotional engagement are worthwhile. I wonder if part of why you get value from Gagné is because you don’t use it as a cookie cutter approach though; I’m sure you use it as a guideline, not a dictated template. In some of this discussion above, the point was raised that none of these models should be followed unthinkingly. You keep your brain engaged while using it, so it works.
As a side note, I’m personally somewhat skeptical of Sweller. He seems to have demonstrated his willingness to “spin” his results to meet his agenda a bit too much for my tastes. For example, I’d prefer a doctor with better clinical results over one who does better on a written exam, but Kirschner, Sweller, and Clark discount clinical results when it doesn’t fit their narrative. Don Clark’s review of Kirschner, Sweller, & Clark is a balanced view, and the Hmelo-Silver, Duncan, and Chinn paper is a thorough response.
@Steven and Michael, I think you both have some good points. These models should be guides, not absolutes, and they shouldn’t make us forget our common sense. Learning is messy; no model or guide will be the perfect solution all of the time.
@Donald, This is great. I appreciate the time you put into systematically collecting and summarizing all these sources. Some of the Marzano research is referred to in the Martin study above. I think I need to dig into those original sources more.
I think the point of these theories on how to build instruction are just guides. For instance, using the Hunter Method the first two or three years of teaching is a good idea. But if you are still using it verbatim after your first years out, you need a different profession. These are models and guides, not to be used for all students/classrooms/teachers all of the time. It is quite appropriate, once you have some experience, to use what works for you and your students.
I thought I would bring in some of the research behind the other steps: A Look Behind Robert Gagné’s Nine Steps of Instruction.
Thanks for posting this— I think some designers hide behind Bloom, Gagne and other “science” to justify bad design and boring content. They have a rubric that says they made a good course and that is what matters to them.
I’ve briefly covered some of my thoughts on Gagne and Bloom here:
Hi Christy – I, too, have “been in work environments that were really invested in Bloom” and I hate to think of what could have been done with the time that was spent hashing out verbs in meetings. It seems that anything that creates a quick, logical reference tool (like a taxonomy or nine steps) can be taken to heart and made a standard (policy, even) instead of a guideline. These approaches are convenient and help us get to where we’re going but not without a lot of other considerations related to context of the learning environment. Thanks for taking on this instructional design reality check issue.
I hadn’t made that connection before, but I suspect you’re right. It’s so easy to take something like this (or Maslow’s hierarchy, Dale’s Cone, ADDIE, etc.) and take them as absolutes, regardless of the specific situation. We get caught up in following a pattern without thinking too much about why we have that pattern as a guideline.
I was just rereading this great post (thanks again, Christy, for fielding my question!), and I have to say that I could not agree more with the point Melissa raised.
I imagine this–the “getting caught up following a pattern without thinking too much about *why* we have that pattern as a guideline”–is a trap that practitioners in any field have to be careful not to fall into.
“Rule of thumb” heuristics are indeed convenient and serve their purpose in our day-to-day work, but they are a double-edged sword. They are imperfect, and it takes conscious effort for an organization (or an individual) not to be a slave to them.
While there may be little research on Bloom’s verbs, that hasn’t stopped designers and educators using it for their own purposes. The digital Bloom’s (http://edorigami.wikispaces.com/Bloom%27s+Digital+Taxonomy) comes to mind or even Kathy Schrock’s Bloomin’ iPad (http://kathyschrock.net/ipadblooms/).
I agree with Cathy Moore about the “do:” learning is active and practice helps reinforce it. Using Bloom’s verbs as a guide sparks ideas that I might not have had otherwise.
As for Gagne, his model is facilitator-centered, relying on the facilitator to be the disseminator of knowledge. I doubt that it is a relevant model for today’s learning needs. What are your thoughts?
That’s a good point; if Gagne is relevant at all, it’s relevant for traditional facilitator-led training. So much learning happens outside that environment, that it isn’t as relevant for everything that the learning field now encompasses.
I agree that practice with feedback is the key; that’s where we focus 90% of our development efforts. We start with the exercises/tasks people do in class and make sure the students will have success completing those tasks. Everything else in class supports the tasks; if not, we drop it.
That seems like a very practical approach. Thanks for sharing!