Foundations of Feedback

Later this week, a colleague and I are presenting a conference session on providing 360-Degree Feedback to students. With 360-Degree Feedback, instructors combine students’ self-assessment with peer and instructor feedback to provide more holistic support for students’ development. Feedback doesn’t just come from a single source; instead, assessment and feedback come from differentiated but complementary sources. In a way, 360-Degree Feedback leverages the combined effects of several of the top influences that John Hattie examines in his meta-analyses.
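
For readers who like to see ideas made concrete, here’s a minimal sketch in Python of how the three feedback sources might be blended into a single composite rating. The weights, the 0–4 scale, and the function name are my own illustrative assumptions, not part of any formal 360-Degree Feedback model.

```python
# A minimal sketch of blending self, peer, and instructor ratings.
# The weights and the 0-4 rating scale are illustrative assumptions,
# not a prescribed 360-Degree Feedback formula.

WEIGHTS = {"self": 0.2, "peer": 0.3, "instructor": 0.5}  # hypothetical weights

def composite_rating(ratings: dict) -> float:
    """Return the weighted blend of self, peer, and instructor ratings."""
    return sum(WEIGHTS[source] * score for source, score in ratings.items())

ratings = {"self": 3.5, "peer": 3.0, "instructor": 2.5}
print(f"Composite rating: {composite_rating(ratings):.2f}")  # prints 2.85
```

The point of the sketch isn’t the arithmetic; it’s that each source contributes a differentiated view of the student’s work, and no single voice dominates the picture.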

I’m planning to write about 360-Degree Feedback in more depth down the road, but this week, I wanted to assemble all of the posts I’ve written on feedback and assessment over the years to provide a foundation for readers. Enjoy!

1. Mindset: A primer post This post introduces the concept of growth mindset and shares a bunch of resources to build a solid understanding of how critical feedback is for student development.

2. Teaching for Growth Building on the mindset concept, this post draws on James Lang’s book Small Teaching and discusses how you can Design for Growth, Communicate for Growth and provide Feedback for Growth.

3. The Power of Feedback Drawing on research from Turnitin, this post examines the impact that feedback has on student writing.

4. Glows and Grows This post examines two types of feedback (progress and discrepancy) and discusses how important it is to provide both when giving feedback to students.

5. Better Student Feedback with Classkick While this post focuses a lot on an app called Classkick, it also introduces Wiggins’ Seven Keys to Effective Feedback.

6. Lessons about teaching and learning from Star Wars This definitely qualifies as one of the nerdiest posts I’ve ever written. In this post, I examine how Star Wars actually offers a good lens through which we can view assessment and feedback.

7. The Future of Assessment Wearing my “futurist” hat, I draw on Karl Kapp and Robin Hunicke’s concept of “juicy feedback.”

8. The Secret Sauce of Blended Success This post discusses how important formative assessment and feedback are to the blended learning environment.

9. Feedback and the Dunning-Kruger effect One of the challenges with students’ self-assessment is that students tend to misjudge their performance relative to their actual ability. Ongoing, regular feedback from instructors can help students develop a truer sense of their academic development.


Giving Credit

Where do great ideas originate? I’m prone to saying that inspiration and creativity develop from the space between collaborators. Get some smart people together who are willing to brainstorm and problem-solve and the group is bound to come up with some creative ideas. Who owns the idea that emerges? It grows from the space between us, so it’s not really anybody’s idea. It’s jointly owned. “If anyone deserves credit,” I remember saying, “it’s the space between us.”

But that’s really not true. The “space between us” isn’t a real person and it doesn’t have real feelings. The “space between us” doesn’t deserve validation for its work or need a pat on its back for a job well done. The “space between us” may be a great concept but the real credit should be directed at the specific people who were in the room. We need to identify specific people and praise their contributions. We need to shine a light on individual people.  When we give credit to whole groups, some people may feel left out and not get the credit they deserve. We’re probably all guilty of doing that at some point. But, when I give credit to “the space between,” the light shines on no one.

As often happens in my world, disparate ideas converge to help me make sense of things. In preparation for a presentation, I was doing some reading on peer grading and the potential biases that can emerge when allowing students to assess one another. Dochy, Segers and Sluijsmans (1999) outlined four potential biases that can occur in peer grading situations. Friendship marking occurs when students over-mark their friends (and under-mark others). Decibel marking occurs when the most vocal students receive the highest grades (without necessarily earning them). When students earn high marks without contributing, parasite marking occurs. Lastly, collusive marking happens when students collaborate to over-rate (or under-rate) their peers. Because of the prevalence of these biases, many instructors choose to avoid using peer assessment altogether.
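
As a thought experiment, here’s a small Python sketch of one way an instructor might screen for these biases: compare each rater’s marks against the mean of the other raters on the same submission. The data, names, and logic are hypothetical, and a large gap is only a prompt to look closer, not proof of friendship or collusive marking.

```python
from statistics import mean

# A hypothetical sketch for surfacing possible peer-grading bias: compute
# each rater's average gap from the other raters' mean on each submission.
# Consistently large positive gaps may hint at friendship or collusive
# marking; this is an illustration, not a validated detection method.

def rater_deviations(scores: dict) -> dict:
    """Average gap between each rater's score and the other raters' mean."""
    gaps = {rater: [] for rater in scores}
    for rater, marked in scores.items():
        for submission, score in marked.items():
            others = [s[submission] for r, s in scores.items()
                      if r != rater and submission in s]
            if others:
                gaps[rater].append(score - mean(others))
    return {rater: mean(g) for rater, g in gaps.items() if g}

peer_scores = {  # rater -> {submission: score}; all names are made up
    "ana":  {"essay1": 3.0, "essay2": 2.5},
    "ben":  {"essay1": 3.2, "essay2": 2.7},
    "cara": {"essay1": 4.0, "essay2": 4.0},  # consistently high: worth a look
}
for rater, gap in rater_deviations(peer_scores).items():
    print(f"{rater}: average gap {gap:+.2f}")
```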

Thinking about instructors’ hesitation to incorporate peer assessment in their classes, I suspect most worry that they may be giving inaccurate grades to students who don’t deserve them. In a way, avoiding peer grading parallels my “crediting the space between us.” While instructors want to avoid giving students grades they didn’t earn (either good or bad), I’m avoiding giving credit to anyone specifically, whether they’ve earned it or not. Both practices are poor decisions born of our inability to effectively value (and validate) individual and collective efforts and achievements at the same time. One approach sacrifices the group for the individual, while the other sacrifices the individual for the group. Neither approach is ideal.

In classroom settings, I’ve tried to confront this by pairing individual and peer assessments. In some cases, I’ve even incorporated my own feedback to provide a more holistic assessment of student learning and development. In fact, I recently presented a webinar on 360-Degree Assessment for Magna Publications to share my work. But that only addresses classroom environments. What about my work with colleagues? How can I celebrate the work and achievements of both the individual and the group?

I wish I had the answer here. I know that I’m going to work harder to celebrate the achievements of the individuals and the groups in which they work. I’m going to shine the light less on “the space between” and give more credit to specific individuals. That’s my starting point.  I’ll let you know how it goes.

Glows and Grows

It’s nearing the end of the semester and I’m knee-deep in grading papers and projects. I’m also preparing for a faculty learning community (FLC) that I’m leading on the book Spark of Learning by Sarah Rose Cavanagh (2016). I know I’ve mentioned the book a bunch of times over the last year or so on this blog, but I’m rereading it in preparation for our FLC meeting later this week. It’s funny how different things about a text resonate upon rereading. Since I’m so focused on grading right now, a section on feedback really stood out to me.

Cavanagh discusses two types of feedback that are important for enhancing student competence: progress feedback and discrepancy feedback. Progress feedback involves “giving feedback to students about what they’ve done right, particularly if it is a skill that they were previously lacking” (p. 132). Discrepancy feedback involves “providing information to students about what they’ve done wrong and areas of performance that are lacking” (p. 131). To keep students engaged and motivated, Cavanagh suggests using both progress and discrepancy feedback when assessing student work. Surprisingly, however, educators tend to focus more on discrepancy feedback. Cavanagh cites work by Voerman, Korthagen, Meijer and Simons (2014) that studied seventy-eight secondary teachers and found that only 6.4% provided progress feedback when assessing student work. Cavanagh argues that balancing progress and discrepancy feedback supports students’ feelings of competence and the overall emotional tone of the classroom.

After reading this section, I thought about a system that I use when assessing students’ work. I wish I could take credit for developing it, but it’s one of those processes that one acquires from working with so many smart and creative colleagues. It’s called Glows and Grows. For many assignments, I’ll focus my attention on what the student has done well (the Glows) and the areas on which the student still needs to work (the Grows). Since it’s so simple to understand and implement, it can be used with a variety of assignments. I’ve used it with student presentations, performances and papers. The strategy is also really easy to use with peer assessments when paired with explicit assignment expectations. By focusing on just the glows and grows, students can provide informal feedback to their peers without worrying about scoring rubrics or letter grades.

Returning to Cavanagh’s discussion of progress and discrepancy feedback, it’s clear that a strategy like Glows and Grows offers a more balanced approach to feedback. While it’s a simple strategy, Glows and Grows gives students a clear picture of what they’ve done right while still identifying areas where they need to improve. I have to admit that I shared this strategy with a colleague yesterday and was playfully admonished for the way that “education people” talk. Sure, the rhyming and alliteration in the Glows and Grows name makes it seem elementary, but that’s part of its charm (from my perspective). The simple title makes it more accessible to students and helps them let their guard down and be more open and responsive to feedback.

References:

Cavanagh, S. R. (2016). The spark of learning: Energizing the college classroom with the science of emotion. Morgantown, WV: West Virginia University Press.

Voerman, L., Korthagen, F. A., Meijer, P. C., & Simons, R. J. (2014). Feedback revisited: Adding perspectives based on positive psychology. Implications for theory and classroom practice. Teaching and Teacher Education, 43, 91-98.


Feedback and the Dunning-Kruger effect

I’m a podcast junkie. Since I spend over an hour commuting to and from campus each day, I choose to use that time to listen to smart people teach me about cool stuff. In a recent This American Life episode titled In Defense of Ignorance, I learned about the Dunning-Kruger effect and its powerful impact on learning. While I’m not going to “defend ignorance” here, I am going to discuss how our students’ inexperience can impact their metacognitive abilities and how important it is to provide strong feedback for improvement.

The Dunning-Kruger effect was first introduced in a 1999 study published in the Journal of Personality and Social Psychology. The researchers (Justin Kruger and David Dunning) performed four studies to examine students’ abilities to self-evaluate their performance on different assessments. After taking a test on logical reasoning, grammar or humor, participants were asked to estimate their overall test score and to rate their performance against that of their peers. Across the studies, participants who scored in the bottom quartile consistently judged their test scores and their standing relative to peers to be far higher than they actually were. As the authors write, “participants in general overestimated their ability with those in the bottom quartile demonstrating the greatest miscalibration” (p. 1125).
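
To illustrate the pattern (with made-up numbers, not the study’s actual data), here’s a quick Python sketch showing how perceived percentile can stay fairly flat while actual percentile varies widely, which is why the bottom quartile shows the largest miscalibration:

```python
# Illustrative (hypothetical) numbers showing the Dunning-Kruger pattern:
# perceived percentile stays fairly flat across quartiles while actual
# percentile varies, so the bottom quartile is the most miscalibrated.

quartiles = {
    # quartile: (actual percentile, perceived percentile) -- made up
    "bottom": (12, 60),
    "second": (37, 62),
    "third":  (62, 68),
    "top":    (88, 75),
}

for name, (actual, perceived) in quartiles.items():
    gap = perceived - actual
    print(f"{name:>6} quartile: actual {actual:>2}, "
          f"perceived {perceived:>2}, miscalibration {gap:+d}")
```

Notice that in this sketch the top quartile actually underestimates itself, another pattern Kruger and Dunning discuss.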

To some, the presence of the Dunning-Kruger effect may be surprising or eye-opening. For those of us who have been teaching for a while, however, we can probably recognize this phenomenon in practice. We’ve all encountered students who thought they did really well on an exam before confronting the stark reality of a low grade being handed to them. Charles Darwin captures it best in The Descent of Man when he writes, “ignorance more frequently begets confidence than does knowledge.” Students don’t always know what they don’t know.

That’s why using formative assessments and providing feedback is so important. In their study, Kruger and Dunning discuss how the negative feedback of grades alone offers little support for participants’ growth. They write, “Although our analysis suggests that incompetent individuals are unable to spot their poor performances themselves, one would have thought negative feedback would have been inevitable at some point in their academic career” (p. 1131). But that’s not how teaching and learning should work. As educators, we need to help our students develop the metacognitive abilities to self-assess their knowledge and performance. We have to help students better recognize their areas of strength and weakness and provide feedback to close the gaps in their performance. As novices in our content areas, they will not always have the ability to distinguish what they know from what they don’t know. By offering ongoing formative assessment, however, we can provide the developmental markers that guide students and help them overcome the gaps in their learning. While the Dunning and Kruger article identifies individuals as “ignorant” or “incompetent,” I’d prefer to view them as “learners” and provide the necessary feedback and supports to help them be successful in my classroom.


The Future of Assessment

I’ve been thinking a lot about assessment lately.  On campus, departments have been preparing their annual assessment reports that demonstrate how student learning outcomes are being assessed programmatically.  I’m also helping to plan an assessment workshop for colleagues to broaden the strategies they use to assess their students’ learning.  Across my different roles and activities, it seems a little like I’ve landed in “Assessment Land.”

Assessment Land isn’t a horrible place. In fact, assessment is a really critical aspect of what we do as educators. We need to successfully assess student learning so we can provide feedback that leads to improvement. Our assessments are also important because they can help communicate to outside accrediting bodies that our students have developed the competencies required for their desired fields. During these assessment discussions, however, I’ve been wondering what the future of assessment is going to look like. While there will undoubtedly be a shift away from traditional paper-and-pencil measures, what will replace them? It’s easy to say that future assessments will involve technology in some way. But I worry about what that will look like. For instance, my ten-year-old son came home last week and complained about a new assessment system his elementary school was using. After doing a little research, I found that the system involves answering multiple-choice “diagnostic” questions that would help to inform how his teacher would plan individualized instruction. While individualized instruction is a respectable goal, when I spoke with his teacher recently, she said sometimes “it feels like we’re assessing more than we’re teaching.” If that’s the future of assessment, there are difficult days ahead.

Thankfully, there are other voices that are helping to offer other visions of the future. Take the 2016 National Education Technology Plan (NETP) developed by the US Office of Educational Technology. The plan outlines some characteristics of “next generation assessments.” Next gen assessments would leverage technology to enable more “flexibility, responsiveness and contextualization.” Instead of occurring after learning, next gen assessments would be embedded throughout the learning process and offer feedback in real time. The assessments would also be designed universally so that all students could participate on an equal footing. Rather than simple multiple-choice questions, next gen assessments could leverage video and audio tools to tap into more complex means of demonstrating learning. Lastly, next gen assessments would be adaptive, responding and evolving based on students’ knowledge and learning needs.
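
To make that “adaptive” characteristic concrete, here’s a toy Python sketch of a simple staircase rule that raises item difficulty after a correct answer and lowers it after a miss. Real adaptive assessments rely on far richer statistical models (item response theory, for example); this is only an illustration of the core idea, and all the names and values are my own.

```python
# A toy staircase rule for adaptive item selection: step difficulty up
# after a correct answer and down after an incorrect one, within bounds.
# Real adaptive tests use richer models; this only illustrates the idea.

def next_difficulty(current: int, correct: bool,
                    lowest: int = 1, highest: int = 10) -> int:
    """Step difficulty up on success, down on failure, within bounds."""
    step = 1 if correct else -1
    return min(highest, max(lowest, current + step))

difficulty = 5
for answer in [True, True, False, True, False, False]:  # sample responses
    difficulty = next_difficulty(difficulty, answer)
    print(f"answered {'right' if answer else 'wrong'} -> next item at {difficulty}")
```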

While the NETP offers a great vision for the future of assessments, I’d like to share another voice. Recently I reread Karl Kapp’s 2012 book The Gamification of Learning and Instruction. While I’ve read the book several times, I find that different parts resonate with me each time. This reading, his section on the game element of feedback stood out. Since assessment and feedback are so closely linked pedagogically, I kept envisioning how his view of feedback could inform future assessment design. In the book, Kapp discusses game designer Robin Hunicke’s construct of “juicy feedback.” I honestly love the term “juicy feedback” but I love the characteristics that juicy feedback involves even more. Twisting this a bit, I offer “juicy assessment” as a possible future. Like juicy feedback, juicy assessment would be a sensory experience that coherently captures the outcomes and objectives it’s intended to assess. It would be a continuous process that emerged from students’ work and provided balanced, actionable feedback. Most importantly, juicy assessment would be inviting and fresh, offering means and metrics that motivated and engaged.

While we’re presented with glimpses of the future of assessment, the visions couldn’t be more different. One sees technology as a means of efficiently measuring large numbers of students in an almost industrial way.  The other leverages technology to expand when and how we assess individual students, tailoring strategies to students’ needs and broadening what counts as evidence of student learning.  I honestly don’t know which future will come to fruition but I’m hopeful that Assessment Land will continue to be a place that I enjoy visiting.

References:

Kapp, K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. San Francisco, CA: John Wiley & Sons.

Office of Educational Technology. (2016). National Education Technology Plan – Future Ready Learning: Reimagining the Role of Technology in Education. Washington, DC: U.S. Department of Education.

Incompatible Beliefs?

This weekend, I read a lengthy news report that examined the performance on state assessments for several local schools. The report included all sorts of measurable aspects – teacher salaries, per pupil expenditures, SAT scores and so much more. In each area, the report ranked the schools according to each factor so that readers could easily see which schools were at the top (or bottom) of each category.

Looking across the report, some things stood out. For instance, the schools that had the greatest percentage of ESL (English as a Second Language) and Special Education students had some of the lowest overall scores on state assessments. Not surprisingly, these schools were also some of the lowest-funded schools and had the highest rates of students living in poverty. While I recognize that there are many factors that influence assessment data, the report points to an important pair of incompatible beliefs that most educational leaders hold.

If you surveyed school leaders and state officials, most would espouse the importance of differentiation in education. Considering the diversity of students who enter classrooms, they argue, educators can’t teach every student the same way and expect the same results. Students come to schools with a variety of needs and abilities, and it’s important for educators to differentiate their instructional techniques to support their students’ learning. Consider this scenario. It’s quite possible that a teacher could be working with a blind student, an ESL student and a student with a learning disability in the same classroom. Obviously, teaching the content in the same way to all three of these students wouldn’t yield similar results. Although it can be challenging to vary instructional approaches with the diversity of students present in schools, few would argue with the basic need to differentiate instruction.

Alongside the widespread support for differentiation, there is another fundamental belief that most educational leaders hold – the need for academic standards. Decades ago, local schools adopted their own curricula and course of study. This approach, however, created inconsistency in expectations across schools. Leaders argued that it was difficult to identify which schools were successfully educating their students and which ones were not. Now, states provide detailed standards for almost every content area. These standards outline the academic targets towards which students and teachers should be working. Standards, leaders argue, provide guidance for teachers and help to create consistent and rigorous expectations across schools.

But that’s where the challenge lies. While educational leaders espouse the importance of differentiation and standardization, they don’t see the underlying incompatibility between these beliefs. The incompatibility, however, doesn’t originate in standardization and differentiation philosophically. Instead, the issue stems from standardizing assessment. Clearly, it’s possible to hold similar expectations for students while teaching them in different ways. The challenge is that we can’t then turn around and say that every student has to demonstrate what he or she has learned in the same way. Let’s return to the comparative data that was shared in the education report I referenced earlier in this post. In our state, every ESL student must take standardized assessments in English. No matter how much teachers differentiate their instruction to help these students meet the academic standards, if they can’t demonstrate what they’ve learned in English, they will fail. If we widen the lens to include the diversity of students taking standardized tests, the data shared in these educational reports becomes a little clearer. As educators, we need to negotiate these beliefs of standardization and differentiation to make better decisions on behalf of the students with whom we work.

Penn & Teller teach assessment

Last week, I attended a conference in Las Vegas. I’m not the biggest fan of “Sin City” and I usually avoid it as a conference location. This conference, however, was one of the premier events for faculty and administrators involved in teacher education and I really wanted to attend. So, I overcame my apprehensions and biases and traveled to the land of neon. I was fortunate in that several colleagues also braved the trip. Their presence definitely made the experience more enjoyable.

On our second night in Vegas, one of my colleagues wanted to see a show. While I’m not really into Britney Spears, Cher, Cirque du Soleil or many of the other shows along the strip, one option really stood out: Penn & Teller. In my eyes, the magicians are the quintessential performers. Not only do they astound the audience with feats of amazement, but they also let them in on the act. In their show, they tell the audience how certain stunts are conducted and how “charlatans” and “fakes” fool people with sleight of hand. Their irreverence toward the larger magician community is really refreshing.

The one trick that amazed me the most was a stunt the pair performed with boxes of joke books.  They sent ushers into the audience with joke books and asked two attendees to each grab a book out of the box.  The books were then passed around the audience randomly until Penn called for whoever was currently holding the book to open the book and select a joke to silently read to themselves.  Penn then “psychically” determined the joke that each of the audience members had selected.  It was pretty impressive.  While the audience was in awe from the performance, Penn explained that the secret to each of the tricks could be found online by searching for “cold reading” and “hot reading.” Which is exactly what I did.  Here’s what I found.

Hucksters who present themselves as mystics, psychics, mediums and mentalists employ both of the “reading” techniques. Put simply, cold reading involves the “reader” analyzing visible characteristics of a person and asking leading questions that can inform their practice, whether it involves communicating with the dead, foretelling the future or guessing a selected joke in a joke book. As the reader asks questions, he watches for slight changes in expression and body language that guide the next question he asks. By asking the right questions and closely monitoring the person, the reader can home in on important data he can use.

Hot reading, on the other hand, involves the reader collecting some background information about a person, usually surreptitiously, and using that information to guide their actions. Searching online, I found stories in which audience members were asked to complete short surveys prior to a psychic show that were later used to guide the psychic’s “predictions.” One medium reportedly recorded attendees’ conversations prior to the show and placed surrogates in the audience to eavesdrop on nearby people. When the performer claimed to have heard from a person’s dead aunt, for instance, they already knew much of the information they needed to guide the ruse.

While both techniques are simple to explain, seeing them play out in a crowded theater is still really amazing. On my flight back from the conference, I kept thinking about cold and hot reading and how the techniques could be applied to the work we do as educators. In a lot of ways, questioning techniques are at the heart of what expert instructors do to assess their students and help them build understanding. Paying attention to students’ facial expressions and body language can provide powerful informal assessment data that can guide instructional decision-making. Collecting formative data, like hot readers do, can also help instructors plan student-centered lessons that target learning and support development. While it’s clear that these “readers” have completely different motivations than educators, I think they have something to teach us. While many of us focus on our own performances as instructors, the teaching and learning process is not a solitary activity. It involves expert instructors engaging and “reading” their students. That’s how truly magical educational experiences are created.