63 Comments
Dani (Jun 1, edited):

Thanks for working out loud here. I have a question that you might want to consider - here goes the context: Although it's common to find the cognitive domain represented as a pyramid, it's important to notice that this pyramid does not appear in either the original or the revised taxonomy, as it can suggest a linear hierarchy in which one level is a prerequisite for the next. In fact, the 1956 text does mention a hierarchy, but this was adjusted in the 2001 revision. In the revised version, it is acknowledged that the three intermediate levels — Understand, Apply, and Analyze — may sometimes form a cumulative hierarchy, but not always. The order presented (from simplest to most complex) refers to overall complexity, not to mandatory steps. In other words, the lower levels tend to require less interaction and complexity, while the higher levels demand more articulation and cognitive sophistication.

Here is my genuine question: how will inverting the taxonomy help us redefine what "create" means now, when one can just press Enter and have something created for you?

Michelle Kassorla:

That’s the issue: the press-and-play. That is the lowest level, the level that requires almost no cognitive input from the student. That’s why we need to introduce friction and push students to analyze, evaluate, and understand; that is the work. In our classes we do this with a lot of metacognition and AI tutoring.

Howard Aldrich:

So, this pushing & monitoring takes place in the classroom. How many students are you supervising? Can you entrust some of the pushing to their peers? I can see how this could work, but I worry that it very much depends on how committed students are to helping their colleagues.

Michelle Kassorla:

Actually, these levels require agency by the student. The process by which this occurs is beyond the scope of this think-piece, but in my class every assignment is scaffolded bit by bit using a combination of the students’ own writings, AI tutorials, research sessions, peer response, drafts, and a final product. All assignments are worth exactly the same amount—10 percent—so that they are rewarded for process, not product. Every assignment that is turned in must be accompanied by a transparency statement about how AI is used and at least ten reflective footnotes about their process.

Whitney Whealdon:

Thank you for sharing this level of detail around the how! The application feels more important than the theory here. And much writing research speaks to the value of process over product. So with regard to scale, if we’re teaching writing rather than assigning writing it seems like engaging in a process like this isn’t optional when it comes to working with AI.

Howard Aldrich:

Your reply leaves one question unanswered, and that question is amplified by the amount of work involved in this approach, as described in your comment: how difficult is it to scale this strategy?

Chris Despopoulos:

I love the notion that students can now create without understanding. Or... Is it really creativity at all? This notion opens a wonderful can of worms. William Burroughs pioneered cutups as a way to create, then generate a different understanding. I think he felt that an author's understanding got in the way, and was necessarily inferior to the understanding that was intrinsic to the text... And cutups were a way to sort of liberate that intrinsic value. It can work with art for art's sake, anyway.

But understanding is the goal of education, isn't it? And so you examine students to see if they can express their understanding. Writing is a process of discovery, and you can't fully know what the end result will be until you write it. (Edgar Allan Poe would have scoffed at that...) The writer is discovering understanding, even if the writing begins from a point of understanding. People argue that LLMs just speed that discovery along. Where is the threshold at which prior understanding is/isn't sufficient to qualify for YOU being the writer? Or... has there ever been a YOU as writer?

Michelle Kassorla:

Absolutely. Actually, I just had a very interesting conversation with my daughter-in-law, who has her Master’s in early childhood education. When she heard what I was working on, she said, “That’s play-based learning. Check out Friedrich Froebel, who theorized that children play to create, and then, as the play becomes more and more sophisticated, they gain conceptual knowledge as they play.” The problem with this system is that it is very loose and doesn't play to the ideas of curriculum and outcomes, as Bloom's does. I'm thinking, now, that I might find a hybrid to discuss the creative process with AI, which, absolutely, must start with play.

Dr. Marie Vasquez-Brooks:

Play-based learning is truly powerful, but it works best when combined with an understanding of development (child and adult) and a willingness to lean into unique learning spaces and unique learning communities. Learning is a social construct, most importantly for little humans. As researchers and educators, we really need to explore and understand the indicators that play-based learning, and experience with curiosity and creativity, have built a strong enough foundation for learners. Especially for young learners who are growing up in front of screens with 'digital first' experiences.

Michelle Kassorla:

There is something in play-based learning that really speaks to this, but I can’t quite put my finger on it yet. More thinking is required, I’m sure!

Rée the Interdisciplinarian:

What stands out to me is the part where you talk about student agency. I think play-based learning is much more agency-giving.

Todd:

This notion of creating first is also a common process in software development. The Build -> Measure -> Learn loop has been used to create "new" things successfully at lots of companies. The idea that students can learn that way as well makes sense.

Rée the Interdisciplinarian:

I'm a visual thinker and I think bottom -> up. I also have a poor working memory with executive dysfunction, so I never learned the way I was "supposed to" learn. So, thank goodness for art and English classes where I could create first. I needed to play and experiment (also read as: I needed to do things the hard way before I learned anything) which has always led me to believe that understanding only comes after creation. So what you say about writing being the process of discovery (I have a pencil on my desk with a quote by Flannery O'Connor "I write to discover what I know") is really key. And so, I do think we have to invert our systems so that we expect our students to create first (rather than "learn" first) which will hopefully actually incentivize student agency, and not extrinsic rewards.

Nick Potkalitsky:

Love the quote. Can relate to your experiences. Poor working memory for sure. Always feel like I am playing catch up!

Nick Potkalitsky:

It is about time we inverted this thing!!!!

Michelle Kassorla:

I had never thought about it until I had that conversation with Eugenia about how students create first now, and how Bloom's was broken. It goes to show that our understanding often leaps from obvious things that we should have always realized but never considered. I often look at great ideas and think, "Wow! That was sitting around here being obvious all that time, but no one said it."

Nick Potkalitsky:

Very elegant. Form meeting function, for sure. I have always been a little Bloom-adjacent. The eerie linearity never quite gelled with real learning pathways, which are more zigzagged. But AI imposes a rigidity on what is essentially flux, at least in its initial encounters. I sense this has a lot to do with interface rather than anything intrinsically AI. The "creating" category, for instance, is unstable: created text can be part of an ongoing inquiry focused on assistance rather than pure content creation.

Rée the Interdisciplinarian:

I like that: "AI imposes a rigidity on what is essentially flux." But I think it's because we are using AI in a system that's based on the un-inverted Bloom's. Because, if the system led with creation, we would use AI to support ongoing inquiry, and not just to claim the extrinsic rewards.

Whitney Whealdon:

I love “Bloom’s adjacent”! Same. Thinking skills are attached to cognitive knowledge structures (mental models, schemas) and so the idea that we can parse them out or force fit them to a process for understanding ideas from different knowledge domains has always felt off to me.

Nigel Daly:

I don't know if others have said this, but here is my 2 cents' worth (ok, maybe 2 dollars' worth). My take is critical, but I really appreciate your inverted Bloomian take for pushing me to think about how Bloom's taxonomy fits, or doesn't fit, into today's AI-infused education system.

I have been slightly troubled about Bloom’s taxonomy as it relates to AI use, thanks (?!) to Made Hery Santosa at a recent AI conference at MCU in Taipei. So, there’s a lot I appreciate about the Inverted Bloom’s Taxonomy and how it tries to rethink it in terms of student agency and how students are actually using AI.

It starts with the observation that for many students, CREATION (with AI) now comes first, while UNDERSTANDING and REMEMBERING come later.

But do they? And if so, what is understood or remembered?

This is where the diagram confuses me. It feels like it’s trying to justify (salvage?) the wrong way to do an assignment with AI. It starts by getting AI to CREATE first, which really shouldn’t be justifiable (also, there are many degrees of agency possible at this stage). Then the student tries to make the best of this mistake by using AI again to EVALUATE and ANALYZE its own product, because the student is unlikely to be able to do so on their own. Next, based on the AI’s feedback, the student APPLIES changes to the original AI version to improve it. This is supposed to show increased student agency and is assumed to lead to greater UNDERSTANDING of the material. But to actually show greater agency here, students would need to demonstrate their understanding in some visible and productive way, like a metacognitive learning reflection (though wouldn't that then become an act of Bloomian EVALUATION in itself?). And only if UNDERSTANDING is clearly demonstrated could we say that it might be pushed into a REMEMBERING stage, and only if there are opportunities for forced recall or retrieval, like tests or in-class reflection writing. If it is assumed they will remember this for the next assignment, that seems to assume too much, unless the next assignment is exactly the same. And if it is the AI workflow (metacognitive) principles that are being REMEMBERED, then what about the content knowledge from the assignment?

I appreciate the idea of the inverted Bloomian taxonomy. It gives us a chance to look at learning and agency from a different view. But it seems to assume a lot and goes from being descriptive (using AI at the CREATIVE stage) to normative (how to salvage a bad start).

It might be more productive to create a new model for learner agency that starts from a more normative starting point, an ideal to reach for.

Michelle Kassorla:

This inverted Bloom’s, like the original, is not a map for how to teach. It is a description of what is happening when our students create first, then understand and remember later (which most of them are doing, by the way, even if you think they are not). It may provide some understanding of how we can engage our students to use more agency in the classroom, but that agency is coming at the start usually—it is coming from their reflection and revision process.

Chris Meyer:

Thank you for thinking out loud! This is an important conversation to start. AI is here, what we do with it as educators has potential to be game changing in how we educate our students.

Lu Nelson:

Very interesting analysis! I hadn't encountered this hierarchy before. It seems to me to demonstrate why artifacts (essays, etc.) in the AI era are now the least reliable indicators for evaluation, and why only direct presentation, Q&A, or defense could now give a real sense of a student's comprehension.

Dr. Marie Vasquez-Brooks:

Thank you for sharing this line of reasoning! I have loved sharing with new faculty the structure of Bloom (and Anderson) for reflecting on learning-environment structure when teaching adults concrete, skill-based disciplines. But the other half of my educational "life" is with small humans who really need to be embedded in nature and concrete experiences to develop their understanding of the world and their agency in that world. I would love to see this age of technological innovation also spark deeper research into, and a new architecture and elevation of, early childhood education. I think this clearly shows how important it is that our learners come into more structured learning environments with stronger social-emotional skills and a love for creativity and deep curiosity.

Jennifer W Shewmaker:

Thank you for this! These are the conversations we must be having in education.

Michelle Kassorla:

Yes. These are the conversations I am hungry to have. Everyone needs to calm down and realize that AI doesn't mean that teaching is done for. The teaching we are doing now has been gutted by the fire of AI, but we don't need to abandon it. Pedagogy must be repossessed and remodeled.

Jennifer W Shewmaker:

Yes!! In our Center for Teaching and Learning we’ve been talking about how we need to come back to the basic idea of what do I want my students to know/be able to do and how will I know if they’re there? Then rethink how we get there and how I can know. AI just makes us have to consider some new strategies but there are also some very old ones (Socratic method) that we can bring back into our regular teaching.

Michelle Kassorla:

Not only the Socratic method (which we have used very effectively to tutor students in writing), but all of those wonderful writing theories by Peter Elbow, Gerald Graff, Linda Flower and John Hayes, Mina Shaughnessy, Donald Murray, and James Britton. We also need to seriously invest in multimodal writing and revisit the work of the New London Group and works by Gunther Kress, Cynthia Selfe, and Pamela Takayoshi. Pedagogy needs to be vogue again, and we need to repossess and remodel the house that these theorists built.

Jennifer W Shewmaker:

“Pedagogy needs to be vogue again, and we need to repossess and remodel the house that these theorists built.” YES!! All of this.

Maurice Blessing:

Interesting! But I would argue that this sequence is exactly what most students follow in reality when learning: they create something on the fly without having or using prior basic knowledge, and then work back, based on critical teacher feedback, to some form of understanding of what they have created and of what factual or procedural knowledge might be useful for creating something better in the future. This is not necessarily a bad thing, since I believe most ‘natural’ knowledge construction starts with creating something out of not-knowing and intuitive experimentation. The application of AI won’t change the core principles of this process, I think; it may just give the student more agency in the critical-feedback part. The crucial distinction will remain the one between the interested and the disinterested student. So if the application of AI can light the fire of some disinterested students, that will be a distinct win for education.

Michelle Kassorla:

What you are saying is a game-changer, Maurice! Yes. We need to recognize this as a natural part of the process--and that we have often created it first. I don't know about you, but I have never, in humanities research, done the research first and then written a piece. I have always created the writing first, then gone back and expanded and revised, and my understanding grew--adding evidence and examples and becoming more fluent in that idea. Writing, at least for me (and I suspect others), has always been a process of discovery--not a preplanned and formal distillation of research.

Maurice Blessing:

That’s almost exactly how I used to write historical articles. While writing, questions kept popping up, making me return to the sources and make new connections. It made both the process and the end result that much more interesting, in my opinion.

Susan:

Michelle, thank you for “thinking out loud” about how generative-AI tools scramble the comfortable architecture of Bloom’s. It was thought-provoking, but something didn't quite sit right with me. After thinking about it for a while, I was able to refine some of the angst I've been having about what co-regulation with AI does (shameless plug: a colleague of mine, Ryan Roderick, and I have a piece hopefully coming out soon about this co-regulation with GenAI, which we're calling "interplay"). Your inversion helped me locate the source of my own discomfort with recent calls to “flip” or “collapse” the taxonomy: it treats text as the creative product rather than treating writing, the recursive, rhetorical activity that turns half-formed sense into meaning, as the locus of creativity.

I come from Rhet/Comp and so I know I see this a little differently than my friends from Lit. From a rhetoric-and-composition perspective, I would consider the document to be a trace of invention, not the invention itself. When I ask ChatGPT to draft an essay on Jim in Huckleberry Finn and hand in the unedited output, I have not created; I have subcontracted. Sort of like when my students used to hire someone to write the paper. By contrast, when I have already read, compared, judged, and named themes I want to explore, ChatGPT can help me language it out. In this way, an LLM can operate like a gifted peer-reviewer or line-editor, helping me phrase an insight or notice a gap I had missed. The ideas remain mine; the prose is co-composed. In that sense, the moment of genuine creation still follows (and depends on) analysis and evaluation, just as Linda Flower and John Hayes remind us that “the act of composing is epistemic” because writers make knowledge while transforming it into language.

This leads me to wonder whether the problem is less the order of Bloom’s categories than the need to add a second dimension that tracks agency. We might picture a spiral: early passes through remember → understand → apply can be scaffolded by AI-generated summaries or exemplars, but with each revolution the student assumes more responsibility. At the outer rings—high-agency analysis, evaluation, and creation—AI contributes primarily as a reflective surface against which the writer tests and refines prior knowledge rather than as an originator of content. Such a model preserves Bloom’s insight that sophisticated thinking rests on earlier cognitive moves while acknowledging, as distributed-cognition research does, that tools and collaborators can offload some lower-level labor without eliminating the higher-level work.

Practically, this framing might help me draft assignment prompts. (I already have my students document their GenAI use through a process log.) But this framework gives me a little more food for thought. If the goal is for students to create with genuine agency, I might require a short process memo: What did you know before prompting the model? What decisions did you make after reading the output? Which revisions are demonstrably your own? Such a memo would surface the very “productive friction” you and Eugenia describe, making clear that the slow work of meaning-making still matters, even when an AI lets us press Enter and watch polished sentences appear.

I look forward to seeing how your inverted taxonomy evolves!

Michelle Kassorla:

Susan, this is extremely interesting, and you have been so helpful in making me think through my own creative process in inverting Bloom’s. Your questions, your deep framing of this issue, and your suggestions are beautifully wrought. I admit, I will have to take some time to think about what you have written here, as I believe it is significant. I may respond in a week or so after I have thought this through (I am not as efficient as AI in doing this! Haha). Meanwhile, I think I understand what you are saying about the swirling idea of this; I have conceptualized it as eddies in a stream of knowledge, circling round before again joining the movement forward. I love your concept too, though. I tried to create it with ChatGPT. Tell me if I am far off . . . https://docs.google.com/presentation/d/1ttFjwRS19vM6zvzJioktyeqcpYidffDFmBh1_ZenZ_U/edit?usp=sharing

Sally:

You have a fascinating line of thinking. If I were to redo Bloom's Taxonomy, I'd actually show "Remember" at the center (like this: https://sora.chatgpt.com/g/gen_01jx5jneskejd8qccw0vwpm3mj). With or without AI, what you remember is the starting point and ending point of all knowledge work. Without AI, you come to any knowledge task in the taxonomy with what you remember on your own, you perform the task be it to understand, apply, analyze, evaluate, or create, and you come away from the task with a better memory. With AI, you're just able to come to each knowledge task with more information. My hypothesis, based on working with AI since ChatGPT was released, is that the exit point of a knowledge task assisted by AI is more enriching to the knowledge worker than the exit point of a knowledge task unassisted by AI. AI just completely supercharges knowledge work.

Michelle Kassorla:

Again, this is so fascinating. I really like your image, and the idea of it. I think you are absolutely correct, we all come to the task with Remember and we end the task with Remember. "Remember" seems to be human agency, the capacity to learn and think. Hmmm.

kiron:

Dr Kassorla, have you published this anywhere so that I can cite it?

Michelle Kassorla:

Only here, so far, Kiron. I'll let you know if we get something done! I wish they would apply AI to publishing--just to speed it up a bit.

Bonnie Kraxberger:

My first thought is that “create” does not equate with “produce.” Plugging in an assignment framework and producing a matching product (the essay in your example) isn’t creative; it’s following a protocol.

While AI can’t remember facts for you, it can tell you what facts are relevant. It seems like you’re reworking information-based connections, not solely inverting Bloom’s Taxonomy.

Michelle Kassorla:

There are really two versions of creation: Creation ex nihilo (Creation out of nothing) and Creation ex materia (Creation out of existing matter). I don't think, with the exception of G-d, there is creation ex nihilo, especially in the realm of humanity. So, let's consider that for a moment. When I create, I actually take something that already exists (words, numbers, clay, paint, textiles, etc.) and I combine those things in a novel way to make something that has never existed before either as an idea or as a tactile, physical reality.

For example, a novel turn of phrase may never have existed before in the realm of ideas or words, or a scarf I knitted may have never existed before in the physical world. Or maybe I take something that already existed (Bloom's Taxonomy) and I flip it on its head and think of it in a new way -- which is also a creative act ex materia.

Just as I can do those things, so can a machine, but it must be directed by a human in order to produce the creative output. For example, I might type or speak the novel turn of phrase to share it, but the turn of phrase is based on words, images, metaphors that already exist in my experience, or I might create the scarf on a knitting machine from a pattern with a different kind of thread or a different color of wool, or I might prompt AI to make an image from an idea that I have (usually based on other things I have seen or known).

So, I am creating, despite the fact that the machine is involved. Sometimes I am even creating at scale. There is no requirement of creativity that it be original or novel in all aspects, as it could not be. Creation ex nihilo is really impossible for humanity. Everything I make and everything I conceive is based on things that already exist or knowledge, talent, or agency that I already possess.

Now we get to the sticky part: is my creation without AI any different from my creation with AI? You may say that AI doesn't require the level of talent or effort that creation without AI does, but we aren't speaking about skill here; we are talking about the creative act. If I were capable of doing magic, would you say that a painting I created with magic was less significant than a painting I had made by hand? You might, because you would say that the painting I made by hand required skill and artistry that I didn't need in order to create a painting by magic. But what would happen, then, if all people could do magic? What would happen if great artists could make all their paintings by magic, artists who possessed the skill and talent that you so greatly treasured because they were capable of doing that by hand? Would their magic art be less important or less treasured?

So what is the difference? The difference is the idea of the skill set required to create the work. That skill set is what you value as "creation," but if that skill set is no longer required to create, then what do we do? How do we understand what is being created and how do we build a skill set that pushes the creators of magical art (AI) up to the level of those who have the knowledge and skill to make things by hand, but choose to make the art with magic?

Now you have an idea of why this inverted Bloom's Taxonomy is important. When you strip skill and knowledge from creation and the creative act, you need to reintroduce knowledge and skill at a later point, so that students do not stop at creation but go on to build skill and knowledge. It addresses the question: if one creates first, what next?

Keisha Lewis:

I find myself struggling with the notion of CREATE being used to define prompting AI to create something for a student.

To me, creating something involves pulling together patterns, ideas, concepts etc. to make something 'new'.

Handing that process over to the AI does not strike me as creation in the original sense.

I'm not sure, though, what term could be used there instead.

Janet Salmons PhD:

Here is my take! Benjamin Bloom worked to make education active, to build students' critical thinking, cultivating their own intelligence, not hallucinations regurgitated from others' stolen words. Let's use his work respectfully.

Start with "remember." Remember that the tech titans behind Gen AI stand with authoritarians and are silent when books are banned, curricula censored, and students deported.

Understand that they are fine taking copyright protected work without permission or compensation.

See how these companies apply scarce environmental resources to power data centers.

Analyze and evaluate real, credible sources to gain evidence and scientific findings. Question why these authoritarian-loving tech titans want you to surrender your own ability to think, read, and write.

See that AI "creations" build on art and writing stolen from others, without editorial or critical analysis of its credibility. Create something original, based on your unique knowledge, cultural background, experience, insights, and perspectives.
