It seems like a great time to run this post from a few years ago. This post originally appeared on the 8 Blog in March 2018.
A few years ago, I came across an article in the New York Times Magazine that examined the avatars that individuals select when playing online games. Across the series of photos included with the article, different players are shown alongside their digital selves. For some, the likeness is remarkably close. A man has digitally recreated himself down to his black suit and sunglasses. One woman has created an almost identical digital copy of herself, down to the flowered pattern of her dress. For others, however, there's a stark contrast. A middle-aged man portrays himself as a teenage girl. Another represents himself as a robot. When I initially read the article, I thought about the power of the digital world and how we could craft our online identities. We could choose to be seen as we were or as we hoped to be. The online world could be a powerful equalizing and democratizing arena, allowing new voices to be heard and new people to participate. But I also worried about how others would react to these digital representations. Does discrimination translate to a world of avatars and digital identities?
I was reminded of this article last week as I read a new study conducted by Stanford's Center for Education Policy Analysis. Looking across 124 different online classes, researchers examined student and instructor responses to discussion board posts based on the gender and race of the student initially posting. To conduct the study, the researchers created eight student profiles with names that were "connotative of a specific race and gender (i.e., White, Black, Chinese and Indian by gender)." In each of the online classes, researchers used each student profile to contribute a single discussion board post and monitored the responses from instructors and other students. Across all 992 posts that the researchers contributed (8 posts in each of 124 courses), instructors responded 7.0% of the time. Examining the instructor responses based on the racial and gender profiles of the students showed that instructors were more likely to respond to the "White male" students than to others. Across the 124 classes, instructors responded to "White males" 12% of the time; response rates were far lower for every other gender/race combination. Overall, "White male" students were 94% more likely to receive an instructor response than the other student profiles.
While these findings are troubling, the study also includes some promising signs. Looking at the student responses, at least one student replied to 69.8% of the researchers' posts, and each post received an average of 3.2 student replies. While white female students were more likely to receive replies from other white female students, no other statistically significant differences emerged. Regardless of the gender and race of the student profile contributing the post, their online peers responded at similar rates.
As an online instructor, I find the research provides an important lens through which to view my own practice. Am I interacting with students in an unbiased manner? Am I responding to my students' posts in similar fashion? A few days ago, I spent a couple of hours looking at some recent online classes to see if I could find any trends in how I interacted with students and responded to their posts. Casually looking across the discussion forums, I didn't see any clear trends, but I've been devising a few ways to dig a little deeper into the data. Regardless of what I find, this research study has opened my eyes a great deal to the biases that can emerge online. And maybe being aware of these biases is the first step to intentionally overcoming them.
Baker, R., Dee, T., Evans, B., & John, J. (2018). Bias in Online Classes: Evidence from a Field Experiment. Stanford Center for Education Policy Analysis.