Seeing what others see

I have a confession. I have deuteranopia. Before you worry about my mental state or my health, let me explain. Deuteranopia is a form of red-green color blindness. While it’s not really a health issue, it does create some challenges for me as I select outfits in the morning or choose paint colors for the house.

My children are fascinated by my color blindness. I’ve tried to explain that it’s not that I don’t see red or green at all. I just don’t see them the same way they do. I can’t distinguish between different reds and different greens, and I see a lot of things other people would characterize as browns and greys. Despite my explanations, they still enjoy watching me fail every one of the color blindness tests. I can’t find the hidden number among the colored dots, and they find it hilarious.

Despite my explanations, they can never really understand that I see things differently than they do. The real truth, however, is that we ALL see things differently from one another. While I have trouble seeing reds and greens, we also see shades of blue differently from each other. You may remember the huge Internet controversy of 2015: What color was the dress? Was it black? Was it blue? Was it white? As the conversation raged across social media, one thing was clear. We see things differently.

Our differences in sight aren’t limited to colors, however. They’re also impacted by other factors. Take this story I heard about Google Maps. Say you wanted to find the Arunachal Pradesh region near India with Google Maps on your smartphone. Did you know that the way the region is displayed differs based on where you’re doing the search? In China, the region’s borders would look slightly different than if you searched for it in India. Over the years, there have been geographical disputes about the borders of different regions near India, and Google is required to show those regions differently based on whether the smartphone user is in India, in China, or someplace else. That’s right. Someone in another part of the world would actually see the region differently than both the Indian and Chinese users. While this is a pretty extreme case, the Google Maps example demonstrates that how we see is based on a lot of factors. It could be huge geopolitical or technological factors. Maybe it’s a physical factor, like my deuteranopia. It could also be due to our backgrounds and lived experiences. Each of these informs how we see.

I learned a great word this week on the Radiolab podcast: umwelt. The Radiolab hosts were discussing the visual range of the mantis shrimp and how, despite its amazing cones and rods, it can’t distinguish between colors well. In explaining that we shouldn’t pity the poor mantis shrimp, one of the hosts, Robert Krulwich, chalked it up to “umwelt” and explained that we all experience it. “You are limited by what you can feel, touch, see, and know,” Krulwich said, because of who you are. That’s umwelt.

I’ve been thinking about this for a bunch of reasons this week. Nature has a funny way of connecting dots across different media. I guess it started with last week’s post, “Bias in Your Online Class?” While I’ve been digging into the statistics for my online class and trying to better understand my biases, I’ve also been wondering how my students are experiencing the classes themselves. How are THEY seeing the learning environment? How are THEY seeing my interactions with them? Even if I explicitly asked them about their experiences, it would be hard for me to know how they saw something.

And that brings me back to the Radiolab podcast. At the end, the other host, Jad Abumrad, says that because of umwelt, we can never really see what other people see. We just see things differently. “That’s the lonely part,” Abumrad explains. “The unlonely part is you can try.”

Here’s to trying to see what others see.


Bias in Your Online Class?

A few years ago, I came across an article in the New York Times Magazine that examined the avatars that individuals select when playing online games. Across the series of photos included with the article, different players are shown alongside their digital selves. For some, the likeness is amazingly similar. A man has digitally recreated himself down to his black suit and sunglasses. One woman has created an almost identical digital copy of herself down to the flowered pattern of her dress. For others, however, there’s a stark contrast. A middle-aged man portrays himself as a teenage girl. Another represents himself as a robot. When I initially read the article, I thought about the power of the digital world and how we could craft our online identities. We could choose to be seen as we were or as we hoped to be. The online world could be a powerful equalizing and democratizing arena, allowing new voices to be heard and new people to participate. But I also worried about how others react to these digital representations. Does discrimination translate to a world of avatars and digital identities?

I was reminded of this article last week as I read a new study conducted by Stanford’s Center for Education Policy Analysis. Looking across 124 different online classes, researchers examined the student and instructor responses to discussion board posts based on the gender and race of the student initially posting. To conduct the study, the researchers created eight student profiles with names that were “connotative of a specific race and gender (i.e., White, Black, Chinese and Indian by gender).” In each of the online classes, researchers used each student profile to contribute a single discussion board post and monitored the responses from instructors and other students. Across all of the 992 posts that the researchers contributed (8 posts across 124 courses), instructors responded 7.0% of the time. Examining the instructor responses based on the racial and gender profiles of the students showed that instructors were more likely to respond to the “White male” students than others. Across the 124 classes, instructors responded to “White males” 12% of the time. Instructor responses were far lower for every other gender/race combination. Compared to the other student profiles, White males were 94% more likely to receive an instructor response than other students.
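The reported numbers roughly check out with a back-of-the-envelope calculation. The sketch below assumes (and this is my assumption, not a figure from the paper) that the seven non-“White male” profiles received responses at roughly equal rates; the paper’s 94% figure comes from its own statistical model, so a simple average lands close but not exactly on it.

```python
# Rough sanity check of the response-rate figures reported in the study.
# Assumption (mine, not the paper's): the seven other profiles were
# responded to at roughly equal rates.

posts_per_course = 8
courses = 124
total_posts = posts_per_course * courses   # 992 posts overall
overall_rate = 0.070                       # instructors replied 7.0% of the time
white_male_rate = 0.12                     # 12% for "White male" profiles

# If the overall rate is the average across all eight profiles,
# the implied average rate for the other seven profiles is:
other_rate = (overall_rate * 8 - white_male_rate) / 7

# Relative advantage of the "White male" profiles over the rest
relative_advantage = white_male_rate / other_rate - 1

print(f"total posts: {total_posts}")
print(f"implied rate for other profiles: {other_rate:.3f}")
print(f"white male advantage: {relative_advantage:.0%}")
```

The simple average puts the advantage at roughly 90%, in the same ballpark as the study’s regression-based 94%.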

While these findings are troubling, the study also includes some promising signs. Looking at the student responses, at least one student replied to 69.8% of the researchers’ posts, and each post received an average of 3.2 student replies. While white female students were more likely to receive replies from other white female students, no other statistically significant differences emerged. Regardless of the gender and race of the student profile contributing the post, online peers responded at similar rates.

As an online instructor, the research provides an important lens through which to view my own practice. Am I interacting with students in an unbiased manner? Am I responding to my students’ posts in similar fashion? I spent a couple of hours a few days ago looking at some recent online classes to see if I could find trends in how I interacted with students and responded to their posts. Casually looking across the discussion forums, I didn’t see any clear trends, but I’ve been devising a few ways to dig a little deeper into the data. Regardless of what I find, this research study has opened my eyes a great deal to the biases that can happen online. And maybe being aware of these biases is the first step to intentionally overcoming them.
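One way to dig deeper is to tally instructor response rates per student from an exported discussion log. This is a minimal sketch of that audit; the CSV columns (`student`, `instructor_replied`) are hypothetical and would need to be adapted to whatever your LMS export actually contains.

```python
# Sketch: compute each student's instructor-response rate from a
# hypothetical discussion-forum export with columns
# "student" and "instructor_replied" (yes/no).
import csv
from collections import defaultdict

def response_rates(rows):
    """Given (student, replied) pairs, return each student's response rate."""
    posts = defaultdict(int)
    replies = defaultdict(int)
    for student, replied in rows:
        posts[student] += 1
        if replied:
            replies[student] += 1
    return {s: replies[s] / posts[s] for s in posts}

def load_rows(path):
    """Read (student, replied) pairs from the hypothetical CSV layout."""
    with open(path, newline="") as f:
        return [(r["student"], r["instructor_replied"] == "yes")
                for r in csv.DictReader(f)]

# Usage: rates = response_rates(load_rows("discussion_export.csv"))
```

Comparing these per-student rates against overall patterns (or against demographic groupings, where appropriate) is one concrete way to surface the kind of trends the study describes.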

Baker, R., Dee, T., Evans, B., & John, J. (2018). Bias in Online Classes: Evidence from a Field Experiment.

Improving my online classes with checklists

At some point in my online and face-to-face classes, I’ll ask my students to reflect on the journey so far and to provide feedback on ways that I can improve things. Since I’m almost entirely teaching online this academic year, I’m getting some really solid feedback from my students on ways that my online classes can be improved. Across all of the feedback, one suggestion stands out as the most requested improvement lately. Checklists.

From a learning science perspective, my students’ request for checklists absolutely makes sense. Checklists help students to be more metacognitive and to self-regulate their learning. Well-defined checklists can make expectations clear for students and help them monitor their progress in completing the expectations. When completing complex assignments, checklists can help students better understand the individual tasks embedded within the complexity.

Besides the direct connections to learning, checklists are also one of the ways to incorporate Universal Design for Learning (UDL) in your classes. One of the principles of UDL is “providing multiple means of action and expression.” This broad principle can be more easily understood when the supporting guidelines are considered. Checklists fall under the guideline for executive functioning and would help students “develop and act on plans to make the most out of learning” (CAST, 2018). Digging deeper into UDL, checklists help students set appropriate goals, strategically plan their work, manage course information and resources, and monitor their own progress. While checklists may seem like a simple strategy, it’s clear that they can have a huge impact on student learning.

The application of checklists to online learning environments is also pretty clear. Since so much of the instruction, interaction, and assessment in an online class is mediated through technology, it’s easy for a student to miss things. A student could misread a due date or misunderstand an expectation. A checklist helps to reduce these missteps and provides support for students to navigate the online space and do their best work.

I also think a lot about cognitive load when I create my online classes. Cognitive load is the amount of mental effort required to process information and learn something. While we often talk about cognitive load as a single entity, researchers actually identify three different types: germane, intrinsic, and extraneous. Germane cognitive load refers to the mental processes required to acquire, automate, and associate concepts in long-term memory. By contrast, intrinsic load describes the difficulty inherent in the concept being learned. Learning to add or subtract is much easier than learning differential equations; the two have different intrinsic loads associated with them. Since we don’t typically control the cognitive difficulty of the content or the mental processes required to learn it, instructors don’t really have much control over germane or intrinsic cognitive load.

Extraneous load is a different story, though. Extraneous load describes the difficulty to learn something based on how it is presented. I’m sure we’ve all sat through lessons where our ability to concentrate was challenged. Maybe the teacher spoke with a monotone voice. Or maybe the presentation slides were so visually disorganized that they were hard to follow. Or maybe the lesson itself was poorly organized and disjointed. These examples showcase the power of extraneous load.

In a way, checklists can be considered a way to reduce extraneous cognitive load. They can clear up disorganization and help focus students’ attention on the critical activities they need to complete. Having detailed the instructional impact of checklists, it looks like I’m going to have to find the time to build them into my online classes.

CAST (2018). Universal Design for Learning Guidelines version 2.2.

Turning the Corner with 360-Degree Feedback

Last week, I shared a primer on feedback. I shared links to a variety of posts I’ve written over the years regarding how to provide good feedback to students and how to embrace “growth mindset” to support student learning. This week, I thought I’d introduce a strategy that I’ve been using with my students over the last few semesters: 360-Degree Feedback. Before we begin, let’s take a step back. What is feedback? Hattie (2014) defines feedback as “information allowing a learner to reduce the gap between what is evident currently and what could or should be the case.” But where should that information come from? Traditionally, teachers have relied on some combination of instructor, peer, or self-assessments to gather information and provide feedback. Used on its own, each source paints an incomplete picture. 360-Degree Feedback, however, draws on all three of these sources to provide a more holistic view of a student’s work. Let me give an example.

I’ve been struggling with grouping students in my class. I’ve let students select their own groups and I’ve assigned groups based on students’ academic standing and schedules. I’ve had students complete the Myers-Briggs Type Indicator and assigned groups according to their personality types. Regardless of the method I used, at some point during the semester, I would be pulled in to mediate some serious group issue. After reading Dweck’s Mindset book, however, I realized that I needed to focus on supporting student growth rather than their fixed attributes. Collaboration and teamwork are skills that can be taught. With the proper feedback and support, students’ abilities in these areas can grow and improve. To do this, however, their ability to work in a team would need to be assessed and feedback would need to be given. Here’s where 360-Degree Feedback comes in.

To support students’ growth as team members and as collaborators, I adopted the AAC&U Teamwork VALUE Rubric and introduced it during the first day of class. I also discussed the different qualities that make a positive and productive team and explained how we would be supporting each other’s growth during the class. To drive the growth concept home, I had the students self-assess their mindset and discussed the research on growth and fixed mindsets. After setting the stage about the importance of the growth mindset, I explained how we would assess our ability to work in a team at several points during the semester and use the feedback to improve.

To make the process work a little more smoothly, I had the students complete self and peer assessments through a Google form and then anonymized the information. I also added my assessment and met with groups and individuals to discuss their teamwork performance. These discussions were a little challenging. Some students weren’t working well as team members, but the rubric provided a more objective way to discuss their performance. I also reiterated that they could grow as collaborators and that we would be reassessing things later in the semester.
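The anonymization step can be as simple as swapping rater names for neutral labels before feedback goes back to the group. Here’s a minimal sketch of that idea; the column name `rater` and the row layout are hypothetical, standing in for whatever the Google form export actually contains.

```python
# Sketch: replace each distinct rater's name in exported peer-assessment
# rows with a stable, neutral "Peer N" label before sharing feedback.
# The "rater" field name is hypothetical.

def anonymize(rows, name_field="rater"):
    """Return copies of the rows with rater names replaced by 'Peer N' labels."""
    labels = {}
    out = []
    for row in rows:
        name = row[name_field]
        # Assign a new label the first time each rater appears
        labels.setdefault(name, f"Peer {len(labels) + 1}")
        anon = dict(row)          # copy so the original export is untouched
        anon[name_field] = labels[name]
        out.append(anon)
    return out

# Usage: anonymized = anonymize(form_responses)
```

Keeping the labels stable across rows means a student can still see that the same peer gave consistent (or inconsistent) ratings without knowing who that peer is.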

In this situation, coordinating the students’ self-assessments with the ones from their peers and from me helped to provide a more complete picture of the students’ performance. By taking a “360-degree” view of their work, I was able to support students’ growth as team members. Looking at the data from the semester, students’ teamwork scores grew by about 4% from the midpoint to the final assessment. More importantly, I didn’t have to resolve any group conflict issues during the semester!

While this post discusses one assignment where I’ve applied 360-Degree Feedback, I’ve also used it to support student writing and their research. If you’re thinking about getting started with 360-Degree Feedback, check out this great blog post that summarizes a webinar I gave a few weeks ago.

Foundations of Feedback

Later this week, a colleague and I are presenting a conference session on providing 360-Degree Feedback to students. With 360-Degree Feedback, instructors combine students’ self-assessment with peer and instructor feedback to provide more holistic support for students’ development. Feedback doesn’t just come from a single source. Instead, assessment and feedback come from differentiated but complementary sources. In a way, 360-Degree Feedback leverages the combined effects of several of the top influences that John Hattie examines in his meta-analyses.

I’m planning to write about 360-Degree Feedback in more depth down the road, but this week, I wanted to assemble all of the posts I’ve written on feedback and assessment over the years to provide a foundation for readers. Enjoy!

1. Mindset: A primer post This post introduces the concept of growth mindset and shares a bunch of resources to build a solid understanding of how critical feedback is for student development.

2. Teaching for Growth Building on the mindset concept, this post draws on James Lang’s book Small Teaching and discusses how you can Design for Growth, Communicate for Growth and provide Feedback for Growth.

3. The Power of Feedback Drawing on research from Turnitin, this post examines the impact that feedback has on student writing.

4. Glows and Grows This post examines two types of feedback (progress and discrepancy) and discusses how important it is to provide both when giving feedback to students.

5. Better Student Feedback with Classkick While this post focuses a lot on an app called Classkick, it also introduces Wiggins’ Seven Keys to Effective Feedback.

6. Lessons about teaching and learning from Star Wars This definitely qualifies as one of the nerdiest posts I’ve ever written. In this post, I examine how Star Wars is actually a good lens through which we can view assessment and feedback.

7. The Future of Assessment Wearing my “futurist” hat, I draw on Karl Kapp and Robin Kunicke’s concept of “juicy feedback.”

8. The Secret Sauce of Blended Success This post discusses how important formative assessment and feedback are to the blended learning environment.

9. Feedback and the Dunning-Kruger effect One of the challenges with students’ self-assessment is that students tend to evaluate their performance out of proportion to their actual ability. Ongoing, regular feedback from instructors can help students develop a truer sense of their academic development.


Giving Credit

Where do great ideas originate? I’m prone to saying that inspiration and creativity develop from the space between collaborators. Get some smart people together who are willing to brainstorm and problem solve, and the group is bound to come up with some creative ideas. Who owns the idea that emerges? It grows from the space between us, so it’s not really anybody’s idea. It’s jointly owned. “If anyone deserves credit,” I remember saying, “it’s the space between us.”

But that’s really not true. The “space between us” isn’t a real person and it doesn’t have real feelings. The “space between us” doesn’t deserve validation for its work or need a pat on its back for a job well done. The “space between us” may be a great concept but the real credit should be directed at the specific people who were in the room. We need to identify specific people and praise their contributions. We need to shine a light on individual people.  When we give credit to whole groups, some people may feel left out and not get the credit they deserve. We’re probably all guilty of doing that at some point. But, when I give credit to “the space between,” the light shines on no one.

As often happens in my world, disparate ideas converge to help me make sense of things. In preparation for a presentation, I was doing some reading on peer grading and the potential biases that can emerge when allowing students to assess one another. Dochy, Segers, and Sluijsmans (1999) outlined four potential biases that can occur in peer grading situations. Friendship marking occurs when students over-mark their friends (and under-mark others). Decibel marking occurs when the most vocal students receive the best grades (without necessarily earning them). When students earn high marks without contributing, parasite marking occurs. Lastly, collusive marking happens when students collaborate to over-rate (or under-rate) their peers. Because of the prevalence of these biases, many instructors choose to avoid using peer assessment altogether.

Thinking about that hesitation to incorporate peer assessment, I think most instructors worry that they may be giving inaccurate grades to students who don’t deserve them. In a way, avoiding peer grading parallels my “crediting the space between us.” While those instructors want to avoid giving students grades they didn’t earn (either good or bad), I’m avoiding giving credit to anyone specifically, whether they’ve earned it or not. Both practices are poor decisions borne out of our inability to effectively value (and validate) individual and collective efforts and achievements at the same time. One approach sacrifices the group for the individual, while the other sacrifices the individual for the group. Neither approach is ideal.

In classroom settings, I’ve tried to confront this by partnering individual and peer assessments together. In some cases, I’ve even incorporated my own feedback to provide a more holistic assessment of student learning and development. In fact, I recently presented a webinar on 360-Degree Assessment for Magna Publications to share my work. But that only addresses classroom environments. What about my work with colleagues? How can I celebrate the work and achievements of both the individual and the group?

I wish I had the answer here. I know that I’m going to work harder to celebrate the achievements of the individuals and the groups in which they work. I’m going to shine the light less on “the space between” and give more credit to specific individuals. That’s my starting point.  I’ll let you know how it goes.

Five Stages of Online Learning?

As an online educator (and someone who researches online education), I’m always coming across new models that describe the online learning process. Personally, I gravitate to the Community of Inquiry framework because I see the need to foster social presence in online learning environments. A colleague shared another framework recently, and I’m still working through its applicability.

In her 2013 book E-tivities: The Key to Active Online Learning, Gilly Salmon offers a five-stage model of e-moderation that scaffolds students through increasingly complex technological ability and interactivity. Salmon’s stage model is relatively new to me but I can see that, in many ways, it reflects how I create my online courses. To dig deeper into the model, I thought I’d outline each stage and discuss a little about the ways I meet (or don’t meet) each stage in my online classes.

Stage 1: Access and Motivation
This stage focuses on helping students understand the learning environment and how to technically engage with the different tools. In all of my online classes, I offer a short online orientation that helps students develop basic proficiency with the learning management system and understand how I plan to use it.

Stage 2: On-line Socialization
Stage 2 targets developing a social space for students to interact with their peers and with the course instructor. In my online classes, I always include some sort of icebreaker to get the students sharing short introductions with one another.

Stage 3: Information Exchange
This stage has students interacting with course content and reflecting on what they’ve learned. To make this process a little more transparent in my online classes, I have students post short reading summaries before they begin discussing what they’ve learned with their peers (Stage 4).

Stage 4: Knowledge Construction
If Stage 3 is about accessing information, Stage 4 focuses on building knowledge through social collaboration. This stage is highly interactive with students sharing their ideas with one another. In my online classes, I usually post a few open-ended discussion board questions to foster conversations with the hopes that the class will use the content as a springboard for sharing additional ideas and content.

Stage 5: Development
If there’s a stage that I haven’t done a great job meeting, it’s Stage 5. This stage focuses on students reflecting on and evaluating their own learning. The goal is to foster more independent learning and increased self-regulation. In a way, Stage 5 reminds me a little of Level 6 of Dee Fink’s Taxonomy of Significant Learning, which has students focus on the metacognitive process of “learning how to learn.” Across my online and face-to-face classes, I don’t feel like I offer enough opportunities for students to do this. It definitely represents an opportunity for growth.

Regular readers know that I subscribe to the Community of Inquiry (COI) framework when I build and facilitate my online classes. While I don’t necessarily see Salmon’s stage model replacing my use of the COI, I do see its applicability. I really like how the model focuses on students learning to navigate the technical aspects of their online classes before they gradually engage in more interactive processes. This scaffolded approach is critical to online student success and reflects research I shared a few years ago about online orientations. For this reason alone, I feel like Salmon’s stage model deserves a little more attention.