The Online Paradox: My Take

In last week’s post, I shared research that presented a confusing and troubling paradox regarding online learning. Across the research, it was clear that online learning provides some educational benefits to students but also creates some challenges.  Here is a summary (be sure to check out last week’s post for links to all of the original research articles):

  • Online learning can increase access to education for some disadvantaged populations.
  • Online classes have retention rates that are 11-14 percentage points lower than similar face-to-face classes.
  • Students who take some classes online are more likely to complete their community college degree than students who take all of their classes in traditional, face-to-face formats.
  • Students who took some online and face-to-face classes were more likely to be retained than fully online students.
  • Students who took more than 40% of their classes online were less likely to attain their college credentials.

Did you follow all of that? I know it’s confusing and (at times) somewhat contradictory, but I think we can draw some conclusions from these data points. Last week, I asked readers to share their take on the research (check out their comments on the post), but what’s mine? Here are some big takeaways.

  1. Online students need to be self-regulated learners. Looking across the research, it is clear that some students are successful as online learners and others have difficulty. Recall a study I shared in a post a few years ago: Xu and Jaggars (2014) examined performance and retention in online and face-to-face community college classes across Washington State. Comparing the different populations across delivery types, the researchers found that “older students adapted more readily to online courses than younger students” (p. 23). In addition to taking classes, older students are also more likely to be managing work and family responsibilities. These constraints create an impetus for older students to self-regulate their learning and manage classroom expectations. I worry, however, that we will use this research to dictate which students can take online classes and which ones cannot. Instead, I argue that online instructors need to develop additional supports that help students build self-regulation strategies. (Can anyone say checklists?)
  2. Building a community is key. I’m fascinated by the “tipping point” data showing that taking roughly 40% of classes online is the upper boundary for the beneficial effect on degree attainment. But why does that happen? I believe it’s about connection to a larger community. Students who take more than 40% of their classes online may feel isolated from the campus community. With so many online classes, they may not have as many peers who can provide emotional and academic support. These students also may not be making connections with faculty. As more programs move online, it’s critical that institutions develop supports to help fully online students engage with peers and with faculty. One strategy could be to create cohorts of students who progress through online programs together. This would help the students develop stronger connections with their peers and, in turn, with the institution itself.
  3. We still have more to learn. While the current research gives some information about the impacts of online learning, it raises additional questions. My hope, however, is that we shift the focus away from examining retention and success data and begin to develop a stronger research base for instructional strategies that positively impact student learning online.

Online Learning: A Paradox

Advocates for online learning (myself included) often cite how distance education can provide better access for students and more flexibility for scheduling. When taking online classes, students can also work at their own pace and on their own time, structuring their learning based on what works for their family and work lives as well as their academic needs.

Those are the benefits afforded by online learning, at least conceptually. But what does the evidence say? Take the research by Johnson, Mejia, and Cook (2014). Examining data from community colleges across California, the researchers found that online enrollment had increased among certain populations (minority students, non-traditional students, etc.). The study also cited job schedules and family commitments as large motivators for students enrolling in online classes. At first glance, it seems that online learning is living up to its promise.

But let’s dig a little deeper and change the focus a bit. Instead of examining enrollment statistics, let’s take a look at other metrics. Maybe we should be asking how effective online learning is in supporting students’ success, both in their online classes and beyond. Here, the results get a little muddy. Returning to the study by Johnson, Mejia, and Cook, when looking at retention rates in online classes, the researchers found that “on average, students in online courses are at least 11 percentage points and as much as 14 percentage points less likely to successfully complete an online course than otherwise similar students in traditional format classes” (p. 9). This seems pretty damning (and maybe a little contradictory). While online learning may be providing greater access, it seems to also have a negative impact on retention rates.

But, wait. The evidence gets a little more confusing if you expand the lens a bit more. Take the work by Shea and Bidjerano (2014). In a national study of community college students, the researchers found that students who took some of their courses online were significantly more likely to attain their community college degree than students who completed only face-to-face classes. This suggests that taking online classes has a positive effect on student success.

Following up on this study, however, James, Swan, and Dawson (2016) looked at retention and completion rates at 14 different institutions (both community and four-year colleges) and found that students who took a mixture of online and face-to-face courses had up to 1.6 times greater odds of being retained than fully online students. So, taking some online classes seems to have a positive impact, but taking too many has a negative impact. But how many is too many?

Recently, Shea and Bidjerano (2018) examined this tipping point in more detail. Analyzing their sample of community colleges in New York, they found that “taking a load of approximately 40% of coursework is the upper limit for the beneficial effect of online enrollment on degree completion. Beyond that level, students attain college credentials at lower levels than their classroom-only counterparts” (p. 290).

So, this is usually the point of my post where I contextualize all of this in some digestible way. But what do you think? Clearly, there’s a paradox (or several paradoxes) presented by online learning. Online learning increases access but decreases retention. Taking some classes online can improve degree completion, but taking too many can decrease completion rates. Seeing all of this competing evidence, what’s your take? Feel free to comment below. I’ll follow up next week with my thoughts.

All That Glitters…

Last week, the Chronicle of Higher Education featured a commentary written by Dr. Jonathan Zimmerman, a professor of history and education at the University of Pennsylvania. In the commentary, Zimmerman discusses Penn’s new online undergraduate degree and how it is being advertised as “an Ivy League education, without an asterisk.” Zimmerman challenges this slogan from two different perspectives. First, he explains that “the slogan communicates the opposite of what it claims.” Online and face-to-face learning environments are inherently different, Zimmerman argues, and saying there isn’t a difference is a false assumption. His second position, however, is the more damning from my point of view. Zimmerman writes:

Surely there will be some difference between online and regular degrees. We just don’t know what it is, and, worst of all, we don’t want to know.

If I gave that quote to ten colleagues, I bet most would assume that the author was criticizing online learning. Instead, Zimmerman argues that most institutions “have refrained from making a rigorous or sophisticated effort to evaluate classroom instruction” in online or face-to-face environments. Sure, we rely on student evaluations as a measure of teacher effectiveness (albeit an imperfect one), but few institutions conduct regular observations of teaching or try to capture the impacts of instruction in any systematic way. Given that, it’s hard to say how effective online or face-to-face instruction really is, let alone compare them with one another. To me, this is the start of an important conversation we all need to have on our campuses.

I think a lot of educators assume that face-to-face instruction is the “gold standard” of education. But how do we know? Better yet, is this position based on evidence? I believe most of us have a tendency to view traditional practices (e.g., face-to-face instruction) with assumed quality and newer practices (e.g., online instruction) with skepticism. Maybe we think back to our own experiences and remember that engaging professor and the riveting lessons they offered. We then make that memory the standard to which we hold ALL face-to-face learning. We think of the best possibilities and believe they are happening in all face-to-face classrooms.

But most of us don’t have significant experiences with online learning. So, instead of remembering that incredible experience, we think of the worst possibilities that can happen. We think of ineffective learning management systems and unmotivated and disengaged students. We think of static, text-based curricular materials and students who are unable (or unwilling) to meet even the most minimal of expectations. These represent some of the worst possibilities in online education and we extrapolate that to ALL online classes.

But that’s not fair. It’s also not very scientific or scholarly. Zimmerman, however, encourages us to take a different tack. After discussing how all courses should promote critical thinking, problem solving, and application of knowledge to complex scenarios, he writes:

I’m troubled that we don’t seem to be applying these same skills, abilities, and capacities to the question of teaching itself. So we really have no idea whether our online degree will have the same quality as our regular courses. We are all flying by the seat of our pants. That’s also why I find the debate about online learning so dissatisfying. One team says it will make things better, and the other says it will make things worse. But to sustain either claim, you have to know what’s happening now — and in most cases, we don’t.

To get a better sense of what’s happening in our classrooms (both face-to-face and online), we need to do a better job of assessing our effectiveness as teachers. And that’s going to involve confronting our assumptions and biases and developing measures that can distinguish the “gold” from the “pyrite.”

Lead with Empathy

I co-facilitated a workshop last week for some faculty at another college. At the session, we discussed Stephen Brookfield’s book Becoming a Critically Reflective Teacher (2017) and the different lenses that we can use to examine our effectiveness. At the start of the session, we discussed how we needed to challenge our assumptions about teaching and learning and examine what we believe about our roles as teachers and our students’ roles as learners. My co-presenter and I shared a few stories where we were confronted by how inaccurate our assumptions were.

In my story, I explained that when I first started teaching online, I held the assumption that many students pursued online classes because they were easy. Even though I now know this assumption to be false, it absolutely colored how I interacted with students in my first online classes. I shared a story about the time I emailed an online student who had not fully participated during the first week of the class. My email was stern and communicated that she was not in compliance with the expectations of the course. If she wanted to pass the class, I wrote, she would need to get motivated and start doing the assigned work. My email was strongly worded because that’s what I believed was required when working with students who were looking for the easiest route.

Looking back, I have to admit that I’m a little embarrassed by the exchange that followed. The student emailed back to say that she was dealing with a serious family illness and hadn’t checked into the class because she was at the hospital. She was planning to ask for an extension for the first modules but instead decided to drop the class completely. My assumptions led me to communicate with the student completely differently than what the situation required. While I led with a stern tone that demanded compliance, what she really needed was some empathy.

I explained this to the workshop attendees and discussed how my false assumptions about online students informed my communication. I also explained that I now “lead with empathy” in my communication with students. I now assume that students in my online and face-to-face classes are motivated to do high-quality work. A few years ago, I came across a quote from Indra Nooyi, the chairperson and CEO of PepsiCo, that really changed the assumptions I make about my students and the other people with whom I interact. Nooyi said:

“Whatever anybody says or does, assume positive intent. You will be amazed at how your whole approach to a person or problem becomes very different. When you assume negative intent, you’re angry. If you take away that anger and assume positive intent, you will be amazed. Your emotional quotient goes up because you are no longer almost random in your response.”

It’s my job to tend to my students’ affective needs as much as their instructional ones, and the assumptions I make are critical to that job. I hope readers don’t infer that I’ve lowered expectations for students or that we’re sitting around singing Kumbaya or anything. Instead, I find that “assuming positive intent” and “leading with empathy” give me a starting point that leaves more options open with students. If I determine that a student needs a stern email or a course correction, I can always adjust my communication and approach down the road.

Be Critically Reflective

In preparation for a professional development workshop I’m doing later this week, I’ve reread the book Becoming a Critically Reflective Teacher by Stephen Brookfield. I remember reading the Brookfield book years ago when I first started teaching. A second edition was (finally) released in 2017, and I’ve been finding so much in the text that still resonates with me after teaching for 20+ years. For instance, early in the new version, Brookfield writes:

“One of the hardest lessons to learn as a teacher is that the sincerity of your actions has little or no correlation with students’ perceptions of your effectiveness. The cultural, psychological, cognitive, and political complexities of learning mean that teaching is never innocent. By that I mean that you can never be sure of the effect you’re having on students or the meanings people take from your words and actions. Things are always more complicated than they at first appear.” (p. 2)

That’s powerful stuff. I’ve written about these concepts over the years on this blog, especially with how I’ve transitioned from focusing on my actions as an instructor to focusing more on my interactions with my students. But Brookfield adds another aspect to this. Our effectiveness as teachers has little to do with our intentions and sincerity and more to do with how our teaching is perceived by the students we serve. Reading further in the book, though, Brookfield offers additional lenses that are important for assessing our effectiveness as teachers. I thought I’d spend a little time unpacking these lenses with the hope of fostering more “critically reflective” practice and encouraging us to explore additional data to inform our roles as teachers.

  1. Students’ Eyes: Earlier this year, I advised readers to conduct regular student evaluations so they could monitor fluctuations in students’ perceptions. But student evaluations aren’t the only way to assess students’ perceptions of our teaching. Seeking formative feedback from students during the course of the semester is much more proactive than trying to address evaluations after the semester has ended. Asking critical questions like “At what moment were you most engaged as a learner?” or “At what moment were you most distanced as a learner?” can provide much more valuable insight into our teaching than asking “Do you have any questions?” or “How is it going?”
  2. Colleagues’ Perceptions: Our colleagues can offer a powerful lens for us as teachers. In a way, they can serve as ethnographers, detailing the social and cultural aspects of our classrooms. The challenge, however, is that this lens only has real impact when it is grounded in trust, collegiality, and a shared mission for becoming more effective.
  3. Personal Experience: Teaching is a highly personal activity. It is informed by our own experiences as learners and our experiences as instructors. The challenge with this lens, however, is that we assume our experiences are the same as those of our students. While our personal experiences can inform our roles as teachers, we have to be careful that those experiences don’t create blinders that limit our view of our teaching.
  4. Theory and Research: This is the lens that I’ve drawn upon a great deal while writing this blog. Our teaching should be guided by what research and theory say about learning and learners. Whether it’s examining “high impact practices” or ways to support our diverse learners, our actions and instructional decisions should be informed by the growing body of educational research.

I know this is a short overview of Brookfield’s work, but my hope is that it gives you some ideas for adopting additional lenses to view your teaching. Critically reflective teaching involves drawing on multiple sources of data about our teaching. These complementary lenses give us a fuller picture of what we’re doing and how effective we are.

Shine a Light

If you work in higher education, you’ve likely heard the story of Brian McNaughton, a chemistry professor from Colorado State University. Hoping to leverage a raise and increased lab funding, Dr. McNaughton met with his department chair and dean to discuss a job offer he had received from another institution. As described in a recent Chronicle of Higher Education story, many professors use job offers from other schools to negotiate better professional arrangements at their current institutions. The problem with McNaughton’s case was that the job offer wasn’t real. McNaughton had made it up.

Anxious to keep McNaughton and his research at the institution, however, Colorado State administrators expanded his research budget and gave Dr. McNaughton a raise. From McNaughton’s perspective, the ploy worked. McNaughton had forged a job offer that forced the institution to evaluate how much he was worth to them. And Colorado State officials had responded positively.

McNaughton’s deceit, however, eventually came to light, and he is now out of work. In the Chronicle story, the authors detail how a failed marriage and an online social media campaign led to McNaughton’s ouster from Colorado State, which ultimately ended his research career. At the end of the article, the authors write:

Hardly any scientist will ever win a major prize or successfully develop a cancer drug. The odds of that are even more daunting for one who toils away at a midtier public research university. So the focus shifts to smaller wins: a congratulatory email from the dean, a steady stream of pipette tips, a few extra square feet of lab space. Maybe, if everything goes just right, there’s a new interdisciplinary program or an article in a major journal. These tiny battles for resources and validation can consume a professor, but they do little to answer what became for McNaughton an essential question: What am I worth? (Stripling & Zahneis, 2018)

I’ve been thinking a lot about McNaughton and his search for “smaller wins.” He didn’t know what he was worth, so he sought out ways to measure it. While McNaughton’s method bore poison fruit, I know the ground from which it grew. To be clear, I don’t agree with his unethical decisions, but I totally understand his motivations. Far too many of us work in environments where we plod away without knowing whether we’re valued by our colleagues, our students, or our administrators. So, instead, we look for ways to find that validation.

So, here’s my charge to you. I’ve heard from quite a few people lately that there are actually people who read this blog. If you’ve made it this far into this post, I want you to take a moment and offer that validation to someone you value. Maybe it’s a colleague who shares a lab with you. Maybe it’s a former mentor you haven’t spoken to in years. Maybe it’s an administrator who did something helpful and made your job a little easier.

Send them an email. Or better yet, write them a card.

Shine a light on their work and validate what they do. Offer them a small win. Let them know you value them. Let them know how much they’re worth.

An Inverse Relationship

It’s strange how my brain works sometimes. I’ll read something that I think will send my brain spinning, and it just lands with a thud. Or I’ll listen to an interview that should be really compelling, and I won’t even remember it after it has ended. Other times, however, someone will say something in casual conversation and I’ll ruminate on it for weeks. This post is about the latter.

In my role on campus, I help to facilitate new faculty orientation. One of my favorite parts of this job is moderating a panel discussion with second year faculty members. I enjoy this event because it’s a great time for me to reconnect with some newer colleagues at the end of their first year on campus and help them reflect on their ups and downs.

During this year’s panel discussion, one of the second year faculty members was reflecting on his first year as a teacher. This faculty member (I’ll call him Sam) was trained as a researcher and came to our institution without a whole lot of teaching experience. Since our university places a great deal of weight on teaching, Sam has dedicated time over the last year to honing his teaching abilities. He regularly attends professional development sessions and participated in the weeklong online teaching training that we offer. Over the course of the last year, Sam has become a really reflective teacher, which I would argue is one of the traits necessary for becoming a great teacher.

Returning to the panel discussion, I asked the group of second year teachers to share something they had learned from their teaching this past year. Sam thought about the question a bit and answered:

I’ve found that there is an inverse relationship between the amount of time it takes me to create an assignment for students and for me to grade an assignment.

Over the last three weeks, I’ve thought about Sam’s response a lot. At first, I really liked Sam’s scientific and mathematical framing. With my physics background, I always appreciate when someone drops “inverse relationship” into casual conversation.

But the main reason Sam’s comment has resonated with me is that it recognizes the importance of instructional planning. In a follow-up question, I asked Sam to elaborate on his observation of this “inverse relationship.” Sam explained that when he collected quickly created assignments from students, he would realize that some students thought he was asking one thing when he was really asking another. Sam would then have to spend a lot more time deciding how to assess those students’ responses fairly. Spending a little more time creating the assignment, Sam explained, would have saved him time with grading.

It’s clear that even though he’s early in his tenure as a teacher, Sam has learned an important lesson: well-planned activities and assessments don’t just happen. They take time to develop and tweak and revise. In a way, Sam has arrived at the instructional equivalent of an old adage: an ounce of prevention is worth a pound of cure.