Glows and Grows

It’s nearing the end of the semester and I’m knee-deep in grading papers and projects. I’m also preparing for a faculty learning community (FLC) that I’m leading on the book Spark of Learning by Sarah Rose Cavanagh (2016). I know I’ve mentioned the book a bunch of times over the last year or so on this blog, but I’m rereading it in preparation for our FLC meeting later this week. It’s funny how different things about a text resonate upon rereading. Since I’m so focused on grading right now, a section on feedback really stood out to me.

Cavanagh discusses two types of feedback that are important for enhancing student competence: progress feedback and discrepancy feedback. Progress feedback involves “giving feedback to students about what they’ve done right, particularly if it is a skill that they were previously lacking” (p. 132). Discrepancy feedback involves “providing information to students about what they’ve done wrong and areas of performance that are lacking” (p. 131). To keep students engaged and motivated, Cavanagh suggests using both progress and discrepancy feedback when assessing student work. Surprisingly, however, educators tend to focus more on discrepancy feedback. Cavanagh cites work by Voerman, Korthagen, Meijer, and Simons (2014), who studied seventy-eight secondary teachers and found that only 6.4% provided progress feedback when assessing student work. Cavanagh argues that balancing progress and discrepancy feedback supports students’ feelings of competence and the overall emotional tone of the classroom.

After reading this section, I thought about a system that I use when assessing students’ work. I wish I could take credit for developing it, but it’s one of those processes that one acquires from working with so many smart and creative colleagues. It’s called Glows and Grows. For many assignments, I’ll focus my attention on what the student has done well (the Glows) and the areas in which the student still needs to work (the Grows). Since it’s so simple to understand and implement, it can be used with a variety of assignments. I’ve used it with student presentations, performances, and papers. The strategy is also really easy to use with peer assessments when paired with explicit assignment expectations. By focusing on just the Glows and Grows, students can provide informal feedback to their peers without worrying about scoring rubrics or letter grades.

Returning to Cavanagh’s discussion of progress and discrepancy feedback, it’s clear that a strategy like Glows and Grows offers a more balanced approach to feedback. While it’s a simple strategy, Glows and Grows gives students a clear picture of what they’ve done right while still identifying areas where they need to improve. I have to admit that I shared this strategy with a colleague yesterday and was playfully admonished for the way that “education people” talk. Sure, the rhyming and alliteration in the Glows and Grows name makes it seem elementary, but that’s part of its charm (from my perspective). The simple title makes the strategy more accessible to students and helps them let their guard down and be more open and responsive to feedback.


Cavanagh, S. R. (2016). The spark of learning: Energizing the college classroom with the science of emotion. West Virginia University Press.

Voerman, L., Korthagen, F. A., Meijer, P. C., & Simons, R. J. (2014). Feedback revisited: Adding perspectives based on positive psychology. Implications for theory and classroom practice. Teaching and Teacher Education, 43, 91-98.



Feedback and the Dunning-Kruger effect

I’m a podcast junkie. Since I spend over an hour commuting to and from campus each day, I choose to use that time to listen to smart people teach me about cool stuff. In a recent This American Life episode titled In Defense of Ignorance, I learned about the Dunning-Kruger effect and its powerful impact on learning. While I’m not necessarily going to “defend ignorance” here, I am going to discuss how our students’ inexperience can impact their metacognitive abilities and how important it is to provide strong feedback for improvement.

The Dunning-Kruger effect was first introduced in a 1999 study published in the Journal of Personality and Social Psychology. The researchers (Justin Kruger and David Dunning) performed four studies to examine students’ abilities to self-evaluate their performance on different assessments. After taking a test on logical reasoning, grammar, or humor, participants were asked to estimate their overall test score and to rate their performance against those of their peers. Across the studies, participants who scored in the bottom quartile consistently rated their test scores and their performance relative to their peers as far higher than they actually were. As the authors write, “participants in general overestimated their ability with those in the bottom quartile demonstrating the greatest miscalibration” (p. 1125).

To some, the presence of the Dunning-Kruger effect may be surprising or eye-opening. Those of us who have been teaching for a while, however, can probably recognize this phenomenon in practice. We’ve all encountered students who thought they’d done really well on an exam before confronting the stark reality of a low grade. Charles Darwin captures it best in The Descent of Man when he writes, “ignorance more frequently begets confidence than does knowledge.” Students don’t always know what they don’t know.

That’s why using formative assessments and providing feedback is so important. In their study, Kruger and Dunning discuss how the negative feedback of low grades offers little support for participants’ growth. They write, “Although our analysis suggests that incompetent individuals are unable to spot their poor performances themselves, one would have thought negative feedback would have been inevitable at some point in their academic career” (p. 1131). But that’s not how teaching and learning should work. As educators, we need to help our students develop the metacognitive abilities to self-assess their knowledge base and performance. We have to help students better recognize their areas of strength and weakness and provide feedback to close the gaps in their performance. As novices in our content areas, students will not always be able to readily distinguish what they know from what they don’t know. By offering ongoing formative assessment, however, we can provide developmental markers that guide students and help them overcome the gaps in their learning. While the Dunning and Kruger article identifies individuals as “ignorant” or “incompetent,” I’d prefer to view them as “learners” and provide the necessary feedback and supports to help them be successful in my classroom.



The Future of Assessment

I’ve been thinking a lot about assessment lately.  On campus, departments have been preparing their annual assessment reports that demonstrate how student learning outcomes are being assessed programmatically.  I’m also helping to plan an assessment workshop for colleagues to broaden the strategies they use to assess their students’ learning.  Across my different roles and activities, it seems a little like I’ve landed in “Assessment Land.”

Assessment Land isn’t a horrible place. In fact, assessment is a really critical aspect of what we do as educators. We need to successfully assess student learning so we can provide feedback that leads to improvement. Our assessments are also important because they can help communicate to outside accrediting bodies that our students have developed the competencies required for their desired fields. During these assessment discussions, however, I’ve been wondering what the future of assessment is going to look like. While there will undoubtedly be a shift away from traditional paper-and-pencil measures, what will replace them? It’s easy to say that future assessments will involve technology in some way. But I worry about what that will look like. For instance, my ten-year-old son came home last week and complained about a new assessment system his elementary school was using. After doing a little research, I found that the system involves answering multiple-choice “diagnostic” questions that help inform how his teacher plans individualized instruction. While individualized instruction is a respectable goal, when I spoke with his teacher recently, she said that sometimes “it feels like we’re assessing more than we’re teaching.” If that’s the future of assessment, there are difficult days ahead.

Thankfully, there are other voices offering alternative visions of the future. Take the 2016 National Education Technology Plan (NETP) developed by the US Office of Educational Technology. The plan outlines several characteristics of “next generation assessments.” Next gen assessments would leverage technology to enable more “flexibility, responsiveness and contextualization.” Instead of occurring after learning, they would be embedded throughout the learning process and offer feedback in real time. They would also be universally designed so that all students could participate on an equal footing. Rather than relying on simple multiple-choice questions, next gen assessments could leverage video and audio tools to tap into more complex means of demonstrating learning. Lastly, next gen assessments would be adaptive, responding and evolving based on students’ knowledge base and learning needs.

While the NETP offers a great vision for the future of assessments, I’d like to share another voice. Recently I reread Karl Kapp’s 2012 book The Gamification of Learning and Instruction. While I’ve read the book several times, I find that different parts resonate with me each time. This reading, his section on the game element of feedback stood out. Since assessment and feedback are so closely linked pedagogically, I kept envisioning how his view of feedback could inform future assessment design. In the book, Kapp discusses game designer Robin Hunicke’s construct of “juicy feedback.” I honestly love the term “juicy feedback,” but I love the characteristics that juicy feedback involves even more. Twisting this a bit, I offer “juicy assessment” as a possible future. Like juicy feedback, juicy assessment would be a sensory experience that coherently captures the outcomes and objectives it’s intended to assess. It would be a continuous process that emerges from students’ work and provides balanced, actionable feedback. Most importantly, juicy assessment would be inviting and fresh, offering means and metrics that motivate and engage.

While we’re presented with glimpses of the future of assessment, the visions couldn’t be more different. One sees technology as a means of efficiently measuring large numbers of students in an almost industrial way.  The other leverages technology to expand when and how we assess individual students, tailoring strategies to students’ needs and broadening what counts as evidence of student learning.  I honestly don’t know which future will come to fruition but I’m hopeful that Assessment Land will continue to be a place that I enjoy visiting.


Kapp, K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. San Francisco, CA: John Wiley & Sons.

Office of Educational Technology. (2016). National Education Technology Plan – Future Ready Learning: Reimagining the Role of Technology in Education. Washington, DC: U.S. Department of Education.

Incompatible Beliefs?

This weekend, I read a lengthy news report that examined the performance on state assessments for several local schools. The report included all sorts of measurable aspects – teacher salaries, per pupil expenditures, SAT scores and so much more. In each area, the report ranked the schools according to each factor so that readers could easily see which schools were at the top (or bottom) of each category.

Looking across the report, some things stood out. For instance, the schools that had the greatest percentage of ESL (English as a Second Language) and Special Education students had some of the lowest overall scores on state assessments. Not surprisingly, these schools were also some of the lowest-funded schools and had the highest rates of students living in poverty. While I recognize that there are many factors that influence assessment data, the report points to some important incompatible beliefs that most educational leaders hold.

If you were to survey school leaders and state officials, most would espouse the importance of differentiation in education. Considering the diversity of students who enter classrooms, they argue, educators can’t teach every student the same way and expect the same results. Students come to schools with a variety of needs and abilities, and it’s important for educators to differentiate their instructional techniques to support their students’ learning. Consider this scenario. It’s quite possible that a teacher could be working with a blind student, an ESL student and a student with a learning disability in the same classroom. Obviously, teaching the content in the same way to all three of these students wouldn’t yield similar results. Although it can be challenging to vary instructional approaches with the diversity of students present in schools, few would argue with the basic need to differentiate instruction.

Alongside the widespread support for differentiation, there is another fundamental belief that most educational leaders hold – the need for academic standards. Decades ago, local schools adopted their own curricula and course of study. This approach, however, created inconsistency in expectations across schools. Leaders argued that it was difficult to identify which schools were successfully educating their students and which ones were not. Now, states provide detailed standards for almost every content area. These standards outline the academic targets towards which students and teachers should be working. Standards, leaders argue, provide guidance for teachers and help to create consistent and rigorous expectations across schools.

But that’s where the challenge lies. While educational leaders espouse the importance of differentiation and standardization, they don’t see the underlying incompatibility between these beliefs. The incompatibility, however, doesn’t originate in standardization and differentiation philosophically. Instead, the issue stems from standardizing assessment. Clearly, it’s possible to hold similar expectations for students even when educators teach them in different ways. The challenge is that we can’t turn around and then say that every student has to demonstrate what he or she has learned in the same way. Let’s return to the comparative data that was shared in the education report I referenced earlier in this post. In our state, every ESL student must take standardized assessments in English. No matter how much teachers differentiate their instruction to help these students meet the academic standards, if they can’t demonstrate what they’ve learned in English, they will fail. If we widen the lens to include the diversity of students taking standardized tests, the data shared in these educational reports becomes a little clearer. As educators, we need to negotiate these beliefs of standardization and differentiation to make better decisions on behalf of the students with whom we work.

Penn & Teller teach assessment

Last week, I attended a conference in Las Vegas. I’m not the biggest fan of “Sin City” and I usually avoid it as a conference location. This conference, however, was one of the premier events for faculty and administrators involved in teacher education and I really wanted to attend. So, I overcame my apprehensions and biases and traveled to the land of neon. I was fortunate in that several colleagues also braved the trip. Their presence definitely made the experience more enjoyable.

On our second night in Vegas, one of my colleagues wanted to see a show. While I’m not really into Britney Spears, Cher, Cirque du Soleil or many of the other shows along the strip, one option really stood out: Penn & Teller. In my eyes, the magicians are the quintessential performers. Not only do they astound the audience with feats of amazement, but they also let them in on the act. In their show, they tell the audience how certain stunts are conducted and how “charlatans” and “fakes” fool people with sleight of hand. Their indifference to the larger magician community is really refreshing.

The one trick that amazed me the most was a stunt the pair performed with boxes of joke books.  They sent ushers into the audience with joke books and asked two attendees to each grab a book out of the box.  The books were then passed around the audience randomly until Penn called for whoever was currently holding the book to open the book and select a joke to silently read to themselves.  Penn then “psychically” determined the joke that each of the audience members had selected.  It was pretty impressive.  While the audience was in awe from the performance, Penn explained that the secret to each of the tricks could be found online by searching for “cold reading” and “hot reading.” Which is exactly what I did.  Here’s what I found.

Hucksters who present themselves as mystics, psychics, mediums and mentalists employ both “reading” techniques. Put simply, cold reading involves the “reader” analyzing the visible characteristics of a person and asking leading questions that inform their practice, whether it involves communicating with the dead, foretelling the future or guessing a selected joke in a joke book. As the reader asks questions, he watches for slight changes in expression and body language that guide the next question he asks. By asking the right questions and closely monitoring the person, the reader can home in on important data that can be used.

Hot reading, on the other hand, involves the reader collecting some background information about a person, usually subversively, and using that information to guide their actions.  Searching online, I found stories where audience members were asked to complete short surveys prior to a psychic show that were later used to guide the psychic’s “predictions.”  One medium reportedly recorded attendees’ conversations prior to the show and placed surrogates in the audience to eavesdrop on nearby people.  When the performer claimed to have heard from a person’s dead aunt, for instance, they already knew much of the information they needed to guide the ruse.

While both techniques are simple to explain, seeing them play out in a crowded theater is still really amazing. On my flight back from the conference, I kept thinking about cold and hot reading and how the techniques could be applied to the work we do as educators. In a lot of ways, questioning techniques are at the heart of what expert instructors do to assess their students and help them build understanding. Paying attention to students’ facial expressions and body language can provide powerful informal assessment data that can guide instructional decision-making. Collecting formative data, like hot readers do, can also help instructors plan student-centered lessons that target learning and support development. While it’s clear that these “readers” have completely different motivations than educators, I think they have something to teach us. While many of us focus on our own performances as instructors, the teaching and learning process is not a solitary activity. It involves expert instructors engaging and “reading” their students. That’s how truly magical educational experiences are created.

Are your assessments SMART?

I’m helping with a colleague’s class today and providing some technical advice to her students as they complete a digital storytelling project.  She sent me a copy of the assignment and I was really impressed by the project.  The directions were clear.  The grading guidelines were coherent and well developed.  My colleague is a great teacher, so I’m not really that surprised that the assignment was so strong.  Beyond being clear and understandable, however, the assignment lived up to one of the gold standards of assessments.  It was SMART.

Before readers start thinking that I’m evaluating the intelligence of my colleague or her assignment, let me explain: SMART is an acronym that’s used to describe pedagogical processes like assessments and objectives. SMART is a useful target for instructors as they start thinking about what they want their students to do and learn in their classes.

Specific:  I think some people worry about specific assessments because they want assignments to be a little open-ended or want students to use their creativity.  Being specific doesn’t mean corralling students’ creative spirit.  It also doesn’t mean giving specific page numbers or word counts.  Specific assessments are ones that provide enough information so students know exactly what’s expected of them.  They give students a specific target to hit and a detailed goal to attain.

Measurable:  I had a class in college where I submitted a paper and received a B.  When I met with the instructor and asked for ways to improve it, he simply said:  “Write differently.”  To this day, I don’t know exactly what he was examining when he assessed my paper.  For an assessment to be SMART, it must include measurable entities that are clearly outlined for students.  This doesn’t necessarily mean using rubrics but it does mean outlining grading criteria with enough depth so students understand how they’ll be evaluated.

Achievable:  I once had a colleague who would include an impossible problem on every exam she gave.  She explained that it was important that the students “knew I was smarter than them.”  An assessment isn’t our time to flex our intellectual muscle or to feed our own egos.  Assessments should be designed so students with mastery of the content can achieve the goal.

Relevant and Rigorous:  I’ve seen both of these included in different manifestations of the SMART acronym and they’re both important.  Assessments should connect to students’ lives and should help them stretch themselves intellectually.  When I design assessments, it is my hope that the students see the assignment’s applicability to their lives and their future careers and that they grow from the experience.

Time-Oriented:  I’ve seen this element described as both “timely” and “time-oriented.”  While the words have different connotations, I think the spirit is the same.  Assessments occur in time and we as instructors need to be conscious of that.  Our assessments need to evolve and be responsive to time.  What worked a few years ago may not work today.  What is a great assignment for the spring semester may not resonate well in the winter.  More than these aspects, however, we also need to consider students’ time as we design assessments and make sure that the assignments we give students have clearly outlined due dates that are achievable based on their ability and their development.

Examining “what the pedagogy requires”

Educause recently published a comprehensive examination of technology ownership by collegiate students.  In the study of 3,000 students in 1,179 American institutions, Educause reported that almost 87% of undergraduate students owned laptops.  They also reported that 55% of students owned web-enabled “smartphones” that allowed connection to the Internet.  As a professor who teaches instructional technology courses and studies student use of technology, I see these statistics as a blessing and as a curse.  While these devices can provide tremendous opportunities for educators and their students, they can also create distractions in classrooms and have the ability to undermine assessment integrity.  Faced with these concerns, I understand when my colleagues ban the devices from their classrooms and teach in technology-free zones.  But I also hope (and work) for better.  One of the reasons I started this blog over two years ago was to help educators develop skills to utilize technology in meaningful ways in their classes.  In this blog, I try to highlight instructional strategies that are pedagogically sound and incorporate technology to engage students and foster new means of collaboration and learning.  While I feature a lot of different websites and tools on this blog, my focus is always on education and learning.  I came across this quote by Diana Laurillard recently that captures my philosophy of technology integration and the motivation behind my work:

“We have to be careful not to focus simply on what the technology offers, but rather on what the pedagogy requires.”

While I agree completely with Laurillard, before educators can effectively integrate technology, they need to have a grasp of what technology can offer and how it can support any pedagogical requirements.  This involves educators having more than a passing knowledge of the digital terrain.  They need to know how technology can support collaboration and communication and how it can help to form learning communities.  I don’t take a “technology first” approach when planning lessons for my students.  I don’t squeeze blogging artificially into some lesson just to have my students blog.  This is like having a hammer and treating every lesson like a nail.  I start with my instructional objectives and choose instructional strategies based on those objectives.  At times, this might mean utilizing no technology at all for a lesson.  Other times, it might involve developing a fully online lesson where students communicate via a chat room.  The important component, however, is that we as educators need to have a variety of pedagogical strategies to draw upon when we plan and teach lessons.  We wouldn’t want to artificially squeeze blogging into a lesson any more than we would want to lecture to students every day.  We absolutely need to focus on what the pedagogy requires.  But we also need to have a strong understanding of what technology can offer in order to make informed pedagogical decisions.