The Misconception of Kindness

I get mixed reviews on Rate My Professors. For every student who rates me well, there’s another student or two who rates me poorly. I try not to get too worked up over the ratings. For the most part, they’re a lot like Yelp reviews. People only really post a review on Yelp when their experience is amazingly good or amazingly bad. The vast majority of people who had a completely ordinary, solid dining experience will never review it at all. And I think most people would prefer a solid experience over a negative one. But I digress.

Returning to my Rate My Professors reviews, one comment stands out among the ratings. One student posted:

“His feedback is very blunt and to the point, so be prepared for that.”

I don’t know what motivated this student to write this or to give me a poor rating, but I’ve thought a lot about that comment over the last two years. For the most part, I think the student’s assessment of my feedback is on the mark. I also wonder whether that’s the reason some of my undergraduate students don’t find me particularly empathetic. At least that’s what some of my student evaluations say. And I find it troubling. Here’s why.

Over the years on this blog, I’ve written many posts dedicated to providing quality feedback to support students’ growth. Across all of those posts, however, there’s never been a dedicated focus on how students receive feedback. I’m a big subscriber to Grant Wiggins’ Seven Keys to Effective Feedback. To foster student learning and development, Wiggins writes, teacher feedback must reflect seven essential elements:

  • Effective instructor feedback is goal-referenced.
  • Effective instructor feedback is tangible and transparent.
  • Effective instructor feedback is actionable.
  • Effective instructor feedback is timely.
  • Effective instructor feedback is ongoing.
  • Effective instructor feedback is consistent.
  • Effective instructor feedback progresses towards a goal.

And I provide that feedback. My worry, however, is that some students are not used to getting this type of in-depth feedback and don’t know how to respond to it emotionally. When students are accustomed to getting a few check marks on their papers and a “Great job!” written at the end, they see the professor who provides detailed feedback for growth as being the outlier. They rate the professor as being blunt and to the point and not having much empathy. To some degree, my students see me as being unkind with my feedback.

Being the hyper-reflective teacher that I am, I’ve thought a lot about this and I think there is a prevailing misconception of kindness, one that trades long-term impacts for the short-term ones. Let me explain.

Take the student who gets the “Great job!” on her paper but receives few other substantive comments from her professor. She is receiving feedback that probably feels good. It reinforces her perceptions of the amount of work she’s dedicated and of her ability. She probably sees the professor as kind and supportive.

But this is only a short-term emotion with short-term impacts. If the student’s work is not really high quality, she will eventually reach a point in her educational journey where her development or progress is stunted, where she sees that she lacks the skills to succeed at the expected level. She’ll recognize that her education hadn’t prepared her for that next step.

But I tend to focus on long-term impacts. While I’m (mostly) okay with students calling me direct or blunt or lacking empathy, I hope they’ll realize at some point down the road that the detailed feedback I gave wasn’t trying to hurt their feelings but was intended to help prepare them for whatever comes next. That’s long-term kindness.

I heard someone say recently that “Frustration isn’t part of learning.  It IS learning.” And maybe that’s the motto I need to share with more of my students. I know that the direct (and blunt) feedback I give to students can be frustrating at times. But it’s hardly unkind.


Turning the Corner with 360-Degree Feedback

Last week, I shared a primer on feedback. I shared links to a variety of posts I’ve written over the years about how to provide good feedback to students and how to embrace a “growth mindset” to support student learning. This week, I thought I’d introduce a strategy that I’ve been using with my students over the last few semesters: 360-Degree Feedback. Before we begin, let’s take a step back. What is feedback? Hattie (2014) defines feedback as “information allowing a learner to reduce the gap between what is evident currently and what could or should be the case.” But where should that information come from? Traditionally, teachers have relied on some combination of instructor, peer, and self-assessments to gather information and provide feedback. Used on its own, each source paints an incomplete picture. 360-Degree Feedback, however, draws on all three of these sources to provide a more holistic view of a student’s work. Let me give an example.

I’ve been struggling with grouping students in my class. I’ve let students select their own groups and assigned students to groups based on their academic standing and schedules. I’ve had students complete the Myers-Briggs Type Indicator and assigned groups according to their personality types. Regardless of the method I used, at some point during the semester, I would be pulled in to mediate some serious group issue. After reading Dweck’s Mindset book, however, I realized that I needed to focus on supporting student growth rather than on their fixed attributes. Collaboration and teamwork are capacities that can be taught. With the proper feedback and support, students’ abilities in these areas can grow and improve. To do this, however, their ability to work in a team would need to be assessed and feedback would need to be given. Here’s where 360-Degree Feedback comes in.

To support students’ growth as team members and collaborators, I adopted the AAC&U Teamwork VALUE Rubric and introduced it during the first day of class. I also discussed the qualities that make a positive and productive team and explained how we would be supporting each other’s growth during the class. To drive the growth concept home, I had the students self-assess their mindsets and discussed the research on growth and fixed mindsets. After setting the stage about the importance of the growth mindset, I explained how we would assess our ability to work in a team at several points during the semester and use the feedback to improve.

To make the process work a little more smoothly, I had the students complete self and peer assessments through a Google form and then anonymized the information. I added my own assessment and met with groups and individuals to discuss their teamwork performance. These discussions were a little challenging. Some students weren’t working well as team members, but the rubric provided a more objective way to discuss their performance. I also reiterated that they could grow as collaborators and that we would be reassessing things later in the semester.
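The mechanics of that aggregation are simple enough to sketch in code. Below is a minimal, hypothetical Python example, not the actual form workflow I used; the rubric categories and scores are made up. It shows how self, peer, and instructor scores might be combined into one summary, with peer ratings reported only as averages to preserve anonymity.

```python
# A sketch of combining self, peer, and instructor rubric scores into
# one 360-degree summary. Categories and scores are hypothetical.
from statistics import mean

def summarize_360(self_scores, peer_scores, instructor_scores):
    """Return per-category scores from each feedback source.

    self_scores / instructor_scores: {category: score}
    peer_scores: a list of {category: score} dicts, one per peer.
    Only the averaged peer value is reported, which keeps each
    individual peer rating anonymous.
    """
    summary = {}
    for category in self_scores:
        summary[category] = {
            "self": self_scores[category],
            "peers": round(mean(p[category] for p in peer_scores), 2),
            "instructor": instructor_scores[category],
        }
    return summary

# Example with made-up scores on a 1-4 scale
report = summarize_360(
    self_scores={"contributes": 4, "fosters climate": 3},
    peer_scores=[{"contributes": 3, "fosters climate": 2},
                 {"contributes": 2, "fosters climate": 3}],
    instructor_scores={"contributes": 3, "fosters climate": 3},
)
print(report["contributes"])
# e.g. {'self': 4, 'peers': 2.5, 'instructor': 3}
```

Reporting only the peer average is what keeps individual raters anonymous while still surfacing the gap between how students see themselves and how their teammates and instructor see them.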

In this situation, coordinating students’ self-assessments with the ones from their peers and from me helped to provide a more complete picture of each student’s performance. By taking a “360-degree” view of their work, I was able to support students’ growth as team members. Looking at the data from the semester, students’ teamwork scores grew by about 4% from their midpoint to their final assessments. More importantly, I didn’t have to resolve any group conflict issues during the semester!

While this post discusses one assignment where I’ve applied 360-Degree Feedback, I’ve also used it to support students’ writing and research. If you’re thinking about getting started with 360-Degree Feedback, check out this great blog post that summarizes a webinar I gave a few weeks ago.

Foundations of Feedback

Later this week, a colleague and I are presenting a conference session on providing 360-Degree Feedback to students. With 360-Degree Feedback, instructors combine students’ self-assessments with peer and instructor feedback to provide more holistic support for students’ development. Feedback doesn’t just come from a single source. Instead, assessment and feedback come from differentiated but complementary sources. In a way, 360-Degree Feedback leverages the combined effects of several of the top influences that John Hattie examines in his meta-analyses.

I’m planning to write about 360-Degree Feedback in more depth down the road, but this week, I wanted to assemble all of the posts I’ve written on feedback and assessment over the years to provide a foundation for readers. Enjoy!

1. Mindset: A primer post. This post introduces the concept of growth mindset and shares a bunch of resources for building a solid understanding of how critical feedback is for student development.

2. Teaching for Growth. Building on the mindset concept, this post draws on James Lang’s book Small Teaching and discusses how you can Design for Growth, Communicate for Growth, and provide Feedback for Growth.

3. The Power of Feedback. Drawing on research from Turnitin, this post examines the impact that feedback has on student writing.

4. Glows and Grows. This post examines two types of feedback (progress and discrepancy) and discusses how important it is to provide both when giving feedback to students.

5. Better Student Feedback with Classkick. While this post focuses a lot on an app called Classkick, it also introduces Wiggins’ Seven Keys to Effective Feedback.

6. Lessons about teaching and learning from Star Wars. This definitely qualifies as one of the nerdiest posts I’ve ever written. In this post, I examine how Star Wars is actually a good lens through which we can view assessment and feedback.

7. The Future of Assessment. Wearing my “futurist” hat, I draw on Karl Kapp and Robin Hunicke’s concept of “juicy feedback.”

8. The Secret Sauce of Blended Success. This post discusses how important formative assessment and feedback are to the blended learning environment.

9. Feedback and the Dunning-Kruger effect. One of the challenges with student self-assessment is that students tend to misjudge their performance relative to their actual ability. Ongoing, regular feedback from instructors can help students develop a truer sense of their academic development.


Giving Credit

Where do great ideas originate? I’m prone to saying that inspiration and creativity develop in the space between collaborators. Get some smart people together who are willing to brainstorm and problem-solve, and the group is bound to come up with some creative ideas. Who owns the idea that emerges? It grows from the space between us, so it’s not really anybody’s idea. It’s jointly owned. “If anyone deserves credit,” I remember saying, “it’s the space between us.”

But that’s really not true. The “space between us” isn’t a real person and it doesn’t have real feelings. It doesn’t deserve validation for its work or need a pat on the back for a job well done. The “space between us” may be a great concept, but the real credit should be directed at the specific people who were in the room. We need to identify specific people and praise their contributions. We need to shine a light on individuals. When we give credit to whole groups, some people may feel left out and not get the credit they deserve. We’re probably all guilty of doing that at some point. But when I give credit to “the space between,” the light shines on no one.

As often happens in my world, disparate ideas converge to help me make sense of things. In preparation for a presentation, I was doing some reading on peer grading and the potential biases that can emerge when students assess one another. Dochy, Segers and Sluijsmans (1999) outlined four potential biases that can occur in peer grading situations. Friendship marking occurs when students over-mark their friends (and under-mark others). Decibel marking occurs when the most vocal students receive the best grades (without necessarily earning them). When students earn high marks without contributing, parasite marking occurs. Lastly, collusive marking happens when students collaborate to over-rate (or under-rate) their peers. Because of these biases, many instructors choose to avoid peer assessment altogether.

Thinking about instructors’ hesitation to incorporate peer assessment in their classes, I suspect most worry that they may be giving inaccurate grades to students who don’t deserve them. In a way, avoiding peer grading parallels my “crediting the space between us.” While instructors want to avoid giving students grades they didn’t deserve (either good or bad), I’m avoiding giving credit to anyone specifically, whether they’ve earned it or not. Both practices are poor decisions borne out of our inability to effectively value (and validate) individual and collective efforts and achievements at the same time. One approach sacrifices the group for the individual; the other sacrifices the individual for the group. Neither is ideal.

In classroom settings, I’ve tried to confront this by partnering individual and peer assessments. In some cases, I’ve even incorporated my own feedback to provide a more holistic assessment of student learning and development. In fact, I recently presented a webinar on 360-Degree Assessment for Magna Publications to share my work. But that only addresses classroom environments. What about my work with colleagues? How can I celebrate the work and achievements of both the individual and the group?

I wish I had the answer here. I know that I’m going to work harder to celebrate the achievements of individuals and of the groups in which they work. I’m going to shine the light less on “the space between” and give more credit to specific individuals. That’s my starting point. I’ll let you know how it goes.

Glows and Grows

It’s nearing the end of the semester and I’m knee-deep in grading papers and projects. I’m also preparing for a faculty learning community (FLC) that I’m leading on the book Spark of Learning by Sarah Rose Cavanagh (2016). I know I’ve mentioned the book a bunch of times over the last year or so on this blog but I’m rereading it again in preparation for our FLC meeting later this week. It’s funny how different things about a text resonate upon rereading. Since I’m so focused on grading right now, a section on feedback really stood out to me.

Cavanagh discusses two types of feedback that are important for enhancing student competence: progress feedback and discrepancy feedback. Progress feedback involves “giving feedback to students about what they’ve done right, particularly if it is a skill that they were previously lacking” (p. 132). Discrepancy feedback involves “providing information to students about what they’ve done wrong and areas of performance that are lacking” (p. 131). To keep students engaged and motivated, Cavanagh suggests using both progress and discrepancy feedback when assessing student work. Surprisingly, however, educators tend to focus more on discrepancy feedback. Cavanagh cites work by Voerman, Korthagen, Meijer and Simons (2014), who studied seventy-eight secondary teachers and found that only 6.4% provided progress feedback when assessing student work. Cavanagh argues that balancing progress and discrepancy feedback will support students’ feelings of competency and the overall emotional tone of the classroom.

After reading this section, I thought about a system that I use when assessing students’ work. I wish I could take credit for developing it, but it’s one of those processes that one acquires from working with so many smart and creative colleagues. It’s called Glows and Grows. For many assignments, I’ll focus my attention on what the student has done well (the Glows) and the areas on which the student still needs to work (the Grows). Since it’s so simple to understand and implement, it can be used with a variety of assignments. I’ve used it with student presentations, performances, and papers. The strategy is also really easy to use with peer assessments when paired with explicit assignment expectations. By focusing on just the glows and grows, students can provide informal feedback to their peers without worrying about scoring rubrics or letter grades.

Returning to Cavanagh’s discussion of progress and discrepancy feedback, it’s clear that a strategy like Glows and Grows provides a more balanced approach to feedback. While it’s a simple strategy, Glows and Grows offers students a clear picture of what they’ve done right while still identifying areas where they need to improve. I have to admit that I shared this strategy with a colleague yesterday and was playfully admonished for the way that “education people” talk. Sure, the rhyming and alliteration in the Glows and Grows name make it seem elementary, but that’s part of its charm (from my perspective). The simple title makes the strategy more accessible to students and helps them let their guard down and be more open and responsive to feedback.


Cavanagh, S. R. (2016). The Spark of Learning: Energizing the College Classroom with the Science of Emotion. West Virginia University Press.

Voerman, L., Korthagen, F. A., Meijer, P. C., & Simons, R. J. (2014). Feedback revisited: Adding perspectives based on positive psychology. Implications for theory and classroom practice. Teaching and Teacher Education, 43, 91-98.


Feedback and the Dunning-Kruger effect

I’m a podcast junkie. Since I spend over an hour commuting to and from campus each day, I use that time to listen to smart people teach me about cool stuff. In a recent This American Life episode titled In Defense of Ignorance, I learned about the Dunning-Kruger effect and its powerful impact on learning. While I’m not going to “defend ignorance” here, I am going to discuss how our students’ inexperience can impact their metacognitive abilities and how important it is to provide strong feedback for improvement.

The Dunning-Kruger effect was first introduced in a 1999 study published in the Journal of Personality and Social Psychology. The researchers (Justin Kruger and David Dunning) performed four studies examining students’ abilities to self-evaluate their performance on different assessments. After taking a test on logical reasoning, grammar, or humor, participants were asked to estimate their overall test score and to rate their performance against those of their peers. Across the studies, students who performed in the bottom quartile of the group consistently judged their test scores and relative performance to be far higher than they actually were. As the authors write, “participants in general overestimated their ability with those in the bottom quartile demonstrating the greatest miscalibration” (p. 1125).
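The miscalibration pattern is easy to see in a toy simulation. The numbers below are entirely invented for illustration (they are not Kruger and Dunning’s data): each simulated student has a true score, but their self-estimate is pulled toward an optimistic anchor, which is one simple way to model weak metacognition.

```python
import random

random.seed(1)

# Hypothetical model: self-estimates are a weak function of actual
# performance, pulled toward an inflated anchor of 75 out of 100.
students = []
for _ in range(200):
    actual = random.gauss(60, 15)                       # true score
    estimate = 0.3 * actual + 0.7 * 75 + random.gauss(0, 5)
    students.append((actual, estimate))

students.sort()                                         # order by actual score
quartile = len(students) // 4
bottom = students[:quartile]                            # weakest performers
top = students[-quartile:]                              # strongest performers

# Average gap between self-estimate and actual score for a group
gap = lambda group: sum(e - a for a, e in group) / len(group)
print(f"bottom-quartile overestimate: {gap(bottom):+.1f} points")
print(f"top-quartile overestimate:    {gap(top):+.1f} points")
```

Even this crude model reproduces the qualitative finding: the bottom quartile overestimates dramatically while the top quartile is roughly accurate, which is exactly why external feedback matters most for novices.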

To some, the presence of the Dunning-Kruger effect may be surprising or eye-opening. For those of us who have been teaching for a while, however, it’s a phenomenon we can probably recognize in practice. We’ve all encountered students who thought they did really well on an exam before confronting the stark reality of a low grade. Charles Darwin captures it best in The Descent of Man when he writes, “ignorance more frequently begets confidence than does knowledge.” Students don’t always know what they don’t know.

That’s why using formative assessments and providing feedback is so important. In their study, Kruger and Dunning note that the negative feedback from grades offers little support for participants’ growth. They write, “Although our analysis suggests that incompetent individuals are unable to spot their poor performances themselves, one would have thought negative feedback would have been inevitable at some point in their academic career” (p. 1131). But that’s not how teaching and learning should work. As educators, we need to help our students develop the metacognitive abilities to self-assess their knowledge base and performance. We have to help students better recognize their areas of strength and weakness and provide feedback to close the gaps in their performance. As novices in our content areas, they will not always be able to readily distinguish what they know from what they don’t know. By offering ongoing formative assessment, however, we can provide developmental markers that guide students and help them overcome the gaps in their learning. And while the Dunning and Kruger article labels individuals as “ignorant” or “incompetent,” I’d prefer to view them as “learners” and provide the necessary feedback and supports to help them be successful in my classroom.



The Future of Assessment

I’ve been thinking a lot about assessment lately.  On campus, departments have been preparing their annual assessment reports that demonstrate how student learning outcomes are being assessed programmatically.  I’m also helping to plan an assessment workshop for colleagues to broaden the strategies they use to assess their students’ learning.  Across my different roles and activities, it seems a little like I’ve landed in “Assessment Land.”

Assessment Land isn’t a horrible place. In fact, assessment is a really critical aspect of what we do as educators. We need to successfully assess student learning so we can provide feedback that leads to improvement. Our assessments are also important because they can help communicate to outside accrediting bodies that our students have developed the competencies required for their desired fields. During these assessment discussions, however, I’ve been wondering what the future of assessment will look like. While there will undoubtedly be a shift away from traditional paper-and-pencil measures, what will replace them? It’s easy to say that future assessments will involve technology in some way. But I worry about what that will look like. For instance, my ten-year-old son came home last week and complained about a new assessment system his elementary school was using. After doing a little research, I found that the system involves answering multiple-choice “diagnostic” questions intended to inform how his teacher plans individualized instruction. While individualized instruction is a respectable goal, when I spoke with his teacher recently, she said that sometimes “it feels like we’re assessing more than we’re teaching.” If that’s the future of assessment, there are difficult days ahead.

Thankfully, other voices are offering different visions of the future. Take the 2016 National Education Technology Plan (NETP) developed by the US Office of Educational Technology. The plan outlines some characteristics of “next generation assessments.” Next-gen assessments would leverage technology to enable more “flexibility, responsiveness and contextualization.” Instead of occurring after learning, they would be embedded throughout the learning process and offer feedback in real time. They would also be designed universally so that all students could participate on an equal footing. Rather than relying on simple multiple-choice questions, next-gen assessments could leverage video and audio tools to tap into more complex means of demonstrating learning. Lastly, next-gen assessments would be adaptive, responding and evolving depending on students’ knowledge base and learning needs.

While the NETP offers a great vision for the future of assessment, I’d like to share another voice. Recently, I reread Karl Kapp’s 2012 book The Gamification of Learning and Instruction. While I’ve read the book several times, I find that different parts resonate with me each time. On this reading, his section on the game element of feedback stood out. Since assessment and feedback are so closely linked pedagogically, I kept envisioning how his view of feedback could inform future assessment design. In the book, Kapp discusses game designer Robin Hunicke’s construct of “juicy feedback.” I honestly love the term “juicy feedback,” but I love the characteristics that juicy feedback involves even more. Twisting this a bit, I offer “juicy assessment” as a possible future. Like juicy feedback, juicy assessment would be a sensory experience that coherently captures the outcomes and objectives it’s intended to assess. It would be a continuous process that emerged from students’ work and provided balanced, actionable feedback. Most importantly, juicy assessment would be inviting and fresh, offering means and metrics that motivate and engage.

While we’re presented with glimpses of the future of assessment, the visions couldn’t be more different. One sees technology as a means of efficiently measuring large numbers of students in an almost industrial way.  The other leverages technology to expand when and how we assess individual students, tailoring strategies to students’ needs and broadening what counts as evidence of student learning.  I honestly don’t know which future will come to fruition but I’m hopeful that Assessment Land will continue to be a place that I enjoy visiting.


Kapp, K. M. (2012). The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. San Francisco: John Wiley & Sons.

Office of Educational Technology. (2016). National Education Technology Plan – Future Ready Learning: Reimagining the Role of Technology in Education. Washington, D.C: U.S. Department of Education.