An Online Potluck Dinner

Author’s note: I’m taking a few weeks off to do some traveling with my family. In my absence, I’m going to run a series of older posts that I’ve written on online teaching.  This week’s post originally appeared in February 2017.  Enjoy!

Online learning environments can be pretty confusing places for new online students and educators. To help reduce this confusion, I like to use metaphors to describe the functions, activities and components of teaching and learning online.  For instance, when I lead professional development sessions, I’m often asked about the planning process for creating and teaching a new online class.  While the process involves the traditional phases that are captured in most instructional design models, I find it’s better to describe my online course development as party planning.  When someone hosts a party, they have to consider how people are going to interact, what types of music they’re going to listen to, what they’re going to eat and so on.  Good hosts do a lot of this planning before a single person arrives.  This allows the host to attend to the needs of their guests and to enjoy the party themselves.

While I understand this is a simplistic metaphor, I find it best captures my role as an instructional designer and as an online teacher. I’ll spend weeks developing a class, selecting content and planning interactions and assessments so that I can focus on the day-to-day business of meeting students’ needs and fostering engagement once the class starts.  I plan my “online party” before the class begins so I can be a better host once the party starts.

I’m teaching two online classes this semester and they are starkly different.  I’ve taught both classes several times and the classes are usually quite interactive, especially in the discussion forums.  My stated goal in both classes is to foster a larger learning community where ideas and resources are exchanged and critiqued.  In one class, the students are sharing links to websites, uploading articles they’ve found online, embedding videos from different sources and really taking the discussions in new directions.  The other class, however, isn’t as active or as collaborative.  Students contribute posts and respond to each other but there doesn’t seem to be any real online learning community being formed.

As I’ve been thinking about the differences, I wondered whether the students had a clear understanding of what online discussion should look like.  We’ve all participated in face-to-face classroom discussions but a discussion forum is something entirely different.  In a face-to-face class, we’d never expect everyone to answer a prompt and then to respond to the posts from two peers.  Yet, those expectations permeate online discussion forums.  Although they are used in many online classes, these expectations alone will reduce discussions to “bean counting” and won’t necessarily promote the type of engagement and exchange of ideas that I’m trying to foster.

Maybe a better metaphor is needed for online discussions.  To carry on with the party theme, I offer the “potluck dinner” as a means of describing the rich and thoughtful discussions that I’m trying to build.  The “potluck dinner” is a communal experience where everyone brings a dish to share.  The host usually offers a main course and asks the attendees to bring complementary items.  One person may bring a salad.  Another might bring a dessert.  Someone else may bring beverages.  With everyone contributing to the party, the overall meal becomes more complex and appetizing.  And people always leave satiated.

That’s what I’m trying to promote when I “host” a discussion forum.  I’m not interested in my students just submitting a requisite number of posts.  I want them to feed the group.  I want them to bring in complementary content and make the discussions more complex and appetizing for all of us.  While I’m contributing the “main course,” I’m hoping that the class will bring in resources and ideas to extend the meal.  Through this “potluck” experience, we’re all satiated.


Impacts of Open Educational Resources

Regular readers of this blog know that I’ve been trumpeting the use of Open Educational Resources (OERs) for years. For the most part, I promote OERs because they’re free for students and faculty to use. I work at a public institution that prides itself on being a pathway for first-generation college students and underserved students to get their undergraduate degree. Some of these students may have financial constraints that limit their ability to buy textbooks or other curricular materials that they need to be successful academically. When faculty adopt OERs in their classroom, they can provide some financial relief for these students.

On campus, I’ve been working with a group of faculty who are trying to raise awareness of OERs and promote more widespread use of OERs. Our efforts have mostly focused on the financial benefits of using OERs and how these can help students. For the most part, our efforts have not made much impact. Some of our faculty colleagues see OERs as being lower quality than the materials available from a publisher and worry about the academic impacts these may have on students in their classes. I try to explain that requiring a high-quality, $200 book (or more) only benefits those students who can afford to purchase it. The others are probably trying to manage without it.

But I think I now have a new argument to make. In a recent issue of the International Journal of Teaching and Learning in Higher Education, Colvard and Watson completed a large-scale study of 21,822 students enrolled in eight different courses over 13 semesters at the University of Georgia. In these classes, instructors chose to use OERs during some semesters and non-OERs in others. The researchers examined student performance in the courses that used OERs and compared them to student performance in courses that used more traditional materials. Across the board, students performed better in courses that used OERs. The researchers also disaggregated the data to examine how sub-groups of students performed in the classes. Summarizing their findings, Colvard and Watson write:

OER improve end-of-course grades and decrease DFW (D, F, and Withdrawal letter grades) rates for all students. They also improve course grades at greater rates and decrease DFW rates at greater rates for Pell recipient students, part-time students, and populations historically underserved by higher education (pg. 262).

The conclusion of the paper, however, is the critical part that I plan to share with my colleagues.

“This research suggests OER is an equity strategy for higher education: providing all students with access to course materials on the first day of class serves to level the academic playing field in course settings. While additional disaggregated research is needed in a variety of postsecondary contexts such as community college, HBCU, and other higher education settings to increase the generalizability of this notion, this study provides an empirical foundation on which to begin to change the advocacy narrative supporting OER. A new opportunity appears to be present for institutions in higher education to consider how to leverage OER to address completion, quality, and affordability challenges, especially those institutions that have higher percentages of Pell eligible, underserved, and/or part-time students than the institution presented in this study” (pg. 273).

I’m fortunate to work at an institution where the vast majority of my colleagues are motivated to “do right” by their students. This research clearly shows that using OERs can benefit students, not only financially but academically as well.

Colvard, N. B., & Watson, C. E. (2018). The impact of open educational resources on various student success metrics. International Journal of Teaching and Learning in Higher Education, 30(2), 262-275.

A Litmus Test?

The Annual ISTE Conference and Expo was held recently in Chicago. For those of you who may not know, ISTE is the International Society for Technology in Education, which describes itself as a “home to a passionate community of global educators who believe in the power of technology to transform teaching and learning, accelerate innovation and solve tough problems in education.” Over the last twenty-five years, ISTE has been one of the largest advocates for integrating technology into learning environments and has been an active change agent for promoting student-centered practices. Their technology standards have helped to drive how technology is used by students, teachers and school leaders, and they literally published the book on flipped classrooms. ISTE is a big deal.

If you haven’t attended an ISTE Conference before, it’s something to behold. Over 15,000 educators, librarians, technology coordinators, administrators and thought leaders gather together to share ideas and learn from one another. With thousands of sessions and workshops to attend, it’s professional development heaven for educators who want to learn how to leverage technologies to support student learning. Besides the formal professional development opportunities, the ISTE conference also has a vendor area where companies can exhibit their products. For those educators looking to purchase new technologies, equipment or curricular materials, the vendor area is a one-stop location for exploring all that the educational technology community has to offer.

Due to some family commitments, I wasn’t able to attend this year’s ISTE conference. To stay on top of the happenings at the conference, however, I regularly tracked the #ISTE18 hashtag on Twitter to see what others were posting. Among these posts, I found one written by Will Richardson (@willrich45) that really resonated with me.  On June 23, Richardson wrote:

Won’t be at #iste18, but here’s my annual request to attendees: When you’re on the vendor floor, ask all those reps to do one simple thing: Define learning. Their response will tell you if it’s #learntech instead of #edtech. Have fun!

If you haven’t heard of him, Will Richardson is an educational leader who has written a bunch of great books, including Why School?, Learning on the Blog and Personal Learning Networks. He’s a smart guy and a really engaging presenter. In this Twitter post, Richardson also provides a great “litmus test” for those of us working at the crossroads of technology and education.

Define learning.

While it sounds like an easy task, it can be a little challenging if one hasn’t thought about it in advance. I’m sure some experienced educators would stumble through their first pass at defining learning. They’d probably discuss processes of learning (instruction, inquiry, etc.) rather than the outcomes of learning. Using the prompt as Richardson suggests can help to discern those vendors who focus more heavily on technology than on student learning. Navigating the landscape of technology available to educators can be challenging and troubling. For every tool that supports student-centered learning, there is another app or device that doesn’t. Richardson’s “litmus test” can help to differentiate these.

But maybe the “litmus test” can be rewritten to make it a little more introspective for us as educators. Rather than asking technology vendors to define learning, maybe the better question to ask ourselves is “What do the technologies and techniques we use say about how we define learning?” Rather than act as a “litmus test,” this question can help us to see our technologies and instructional practices for what they are: Rorschach tests that reveal to us how we view learning, not in theory but in practice.

Debugging Failure

In a conversation recently, a colleague of mine shared how her family approaches failure. She explained that whenever a family member isn’t successful, she encourages them to examine the situation and look for things that they could have done differently. She asks them to consider which choices they would have changed and which actions they could have improved. My colleague’s background is in computer science and she says that her family calls this “debugging failure.” Rather than just focus on the emotional aspects of failure, she encourages her family to examine the situation as if it were a program that didn’t perform correctly. Like a programmer who examines the lines of code to identify errors that they can correct, “debugging failure” focuses on reflecting on a situation, identifying areas to improve and learning from the process.
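For readers who haven’t written code, here’s a minimal, entirely hypothetical sketch of what that debugging process looks like in practice: run the program, notice the wrong result, trace the error to a specific line, and rewrite it. The functions and numbers are invented for illustration.

```python
# A hypothetical "before and after" of debugging a small program
# that computes a class's average quiz score.

def average_buggy(scores):
    # Bug: operator precedence. Only the last score is divided by the
    # count, so the function returns a wildly inflated number.
    return sum(scores[:-1]) + scores[-1] / len(scores)

def average_fixed(scores):
    # Debugged: handle the empty-list case and divide the whole sum.
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

print(average_buggy([90, 80, 70]))  # wrong answer
print(average_fixed([90, 80, 70]))  # 80.0
```

The point of the exercise isn’t the arithmetic; it’s the habit. The programmer doesn’t stop at “the answer is wrong.” They find the exact line that failed and rewrite it so the next run succeeds.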

I’ve never heard the term “debugging” applied to people’s lives before and the phrase “debugging failure” really resonated with me. Sure, it’s kind of technical but I like the way it focuses on active improvement. Debugging doesn’t just involve identifying errors but correcting them, too. When programmers debug software, they seek to make changes so the program can run more effectively. And that’s the spirit behind “debugging failure.” It says “Sure, something didn’t go right but how can you do better next time?”

Beyond fostering a mindset to approach life’s misfortunes, the “debugging failure” concept could also apply to our experiences as teachers and the reflective practices in which we engage. While few of us would admit to absolute failure when teaching a lesson or planning an activity, we definitely hit bumps in the road. We may reflect on these missteps, but do we “debug” them? That’s probably the part of the “debugging failure” concept that resonated the most with me. It’s not enough to identify the errors but we have to fix them, too. When we reflect on those instructional missteps (failures?), we need to identify those “lines of code” that didn’t work and figure out ways to rewrite them for the next time. I’ve probably taken this metaphor a little too far, but I think we could all learn to approach failure a little differently. I think it’s inspiring that computer scientists see debugging as a natural part of their workflow.  Shouldn’t we all?

Revising the Record on Flipped Learning Incorporated

A few months ago, I published a blog post examining The Flipped Learning Global Initiative (FLGI) and its work in developing training standards for professional development activities related to flipped classrooms. My post was in response to a March 2018 article from Inside Higher Ed where FLGI was featured. At the start of my post, I wrote, “I’m probably not going to make many friends with this post.” And while it took a few months, it’s clear that I ruffled some feathers.

Last week, Errol St. Clair Smith posted a comment on my original post. St. Clair Smith is the founder and director of global development of FLGI. You can read his comments here. But I thought I’d dedicate this post to revising the record where it was necessary.

In his comment, St. Clair Smith offers three main areas where he felt I misled my readers. Let me address these in detail.

1. No one has to pay to get clarification about the evolving definition of Flipped Learning. In my original post, I describe looking for the updated and unified definition of flipped learning that was described in the original Inside Higher Ed article. When I searched the FLGI website, I found training videos, books and podcasts to purchase but I couldn’t find the unified definition. In fact, that definition is still in development and will be released in September. That brings us to St. Clair Smith’s second point.

2. The project is being managed by a nonprofit. FLGI is a for-profit entity. While FLGI offers flipped learning trainings and certification, it offers these services at a cost. Another group, The Academy of Active Learning Arts and Sciences (AALAS), is actually managing the process of developing the unified flipped learning definition. Strangely, the AALAS isn’t mentioned in the original Inside Higher Ed article. That might be due, in part, to the fact that the AALAS wasn’t created until May 2018. It was created by FLGI, and many FLGI members serve on the AALAS leadership team. Besides being the founder and director of global development of FLGI, St. Clair Smith also serves as a member of the AALAS Board of Directors.

3. Finally, it’s always risky to attempt to define the “intentions” of others. This is in reference to the section where I write:

But that’s not the real motivation behind the Flipped Learning Global Initiative. It’s not just about advancing education or improving student learning. By packaging these practices together under a single catchy title, FLGI members have created a marketable brand that can be monetized and sold.

St. Clair Smith is correct. I don’t know the intentions or motivations of FLGI members. I don’t know if FLGI’s motivation is to create a marketable brand that can be monetized and sold. I also don’t know why FLGI chose to create AALAS or why its members lead both groups. As an educator, I don’t really know the nuances of non-profit vs. for-profit organizations or the advantages of having both groups working in concert. I also don’t know how the existence of these complementary organizations serves those entities or the greater educational community. But I have reservations.

But I’ve mentioned these reservations in the past. In my post, The Branded Teacher, I wrote about how relationships with corporations can impact teachers’ judgments and their use of technologies. But I’m not the only one who has these reservations. In that post, I quoted a Columbia University professor who worried that teachers may be “seduced to make greater use of the technology, given these efforts by tech companies.”

I want to point out one more issue that St. Clair Smith identifies.  He writes, “I’m going to assume that your intentions are honorable, but that your journalism skills are still evolving. As a proponent of truth, transparency, and full disclosure, we trust that you will publish this correction.”

I am a proponent of truth and transparency. In December 2012, I wrote this post that clearly disclosed what influences my work. I fully disclosed that I receive no financial rewards for what I post here or for the links I provide. When I review a technology or a service, I do it because I find it valuable as an educator. I don’t receive any compensation for the products or services I review. I also have not monetized my YouTube channel, nor do I earn any money from the advertising that WordPress includes in this free version.

In response to St. Clair Smith’s concerns, however, I’ve moved two older blog posts (Why I Blog and Full Disclosure) to the front page of this blog to make my motivations and relationships clear to ALL readers. In his pursuit of honor, truth, transparency and full disclosure, I wonder whether St. Clair Smith will choose to include this same information on the FLGI and AALAS websites.

Delayed Reactions

One of the first major studies on the harmful impacts of smoking cigarettes was published in the Journal of the American Medical Association. In their report, Dr. E. Cuyler Hammond and Dr. Daniel Horn described a 20-month study that followed the death rates of 188,000 male smokers and non-smokers. Hammond and Horn wrote:

It was found that men with a history of regular cigarette smoking have a considerably higher death rate than men who have never smoked or men who have smoked only cigars or pipes.

Looking at the data, the researchers concluded, “deaths from cancer were definitely associated with regular cigarette smoking.”

Hammond and Horn’s work was published in 1954.

I share this history lesson for a few reasons. First, I enjoy looking back at history and trying to learn from societal reactions to things. I like to place myself in different moments in time and consider how I would have reacted during historical events. Would I have opposed certain political figures? Would I have participated in different demonstrations? Would I have attended some historic event? While I don’t really know how I would react in those situations, I find the process reflective and introspective.

I find this research study on smoking particularly instructive. Despite the scope of the research and the definitive nature of the conclusions that the researchers drew, smoking rates continued to increase for the next decade. In 1965, the Centers for Disease Control reported that over 51% of adult men and 34% of adult women still smoked. By that point, thousands of additional studies had been published identifying the harmful effects of smoking, and in 1964 the Surgeon General had released the historic report on smoking and health. Soon after, health warnings were required on cigarette packages, and cigarette advertisements were eventually banned from broadcast media. But some people still continued to smoke.

In my thought experiment, I wonder how I would have reacted in 1954. I imagine that I’m a smoker and I hear Hammond and Horn’s research. Would I ignore the research and continue smoking? Or would I find the findings so compelling that I’d struggle through the difficult process of quitting the habit? Knowing my appreciation for empirical research, I’m certain I’d choose the latter.

I noted earlier that I had two reasons for sharing Hammond and Horn’s 1954 study. Beyond the reflective process that I undertake to consider my imagined roles in history, I want to take a broader look at the delayed reactions that people can have to research. You’d think that after 60 years of research showing the negative impact of smoking, no one would engage in the practice. According to 2018 statistics from the Centers for Disease Control and Prevention, however, 15% of Americans still smoke regularly.

Please understand that I’m not trying to simplify addiction or look down at smokers. Addiction is a complex process that can’t be solved with a blog post. That’s not my goal here. Instead, I’m trying to shed light on research and data and how we react and respond to them. Or how we don’t.

This brings me to an opinion piece that a respected colleague shared through social media recently. In a 2013 Atlantic article, Abigail Walthausen called for teachers not to “give up on the lecture” and to “be role models” by standing in front of their classrooms and delivering content. At the time, Walthausen believed there was a growing sentiment against lecturing and she sought to defend the practice. In the article, Walthausen drew on a 2011 Economics of Education Review article that examined data from the 2003 Trends in International Mathematics and Science Study (TIMSS) and showed that students who participated in more lecture-based classrooms outperformed their peers who participated in more problem-solving-based environments. From Walthausen’s perspective, this research provides strong support for “the ‘sage-on-the-stage’ model of education.”

It’s interesting that Walthausen chooses to bring in empirical research to support her position. Over the last decade, there has been a growing body of research in support of active learning strategies over lecture-based instruction. I’ve written about some of this research on this blog over the years. One compelling meta-analysis examined 225 different studies on active learning in science, technology, engineering and mathematics (STEM) courses. In that article, the researchers found that students in lecture-based courses were 1.5 times more likely to fail than students in classes that utilized active learning. Across the studies, the average failure rates were 21.8% in classes that employed active learning and 33.8% in traditional lecture classroom environments. How compelling was the analysis to the researchers? Since the researchers were traditionally trained scientists, they communicated their findings in language that other scientists would understand:

If the experiments analyzed here had been conducted as randomized controlled trials of medical interventions, they may have been stopped for benefit—meaning that enrolling patients in the control condition might be discontinued because the treatment being tested was clearly more beneficial.

One would think that this research would compel teachers to immediately change their practice. But that hasn’t happened in any large-scale way. The challenge with this delayed reaction is that, much like the smoking research shared in the 1950s and ’60s, there is competing information being shared (like Walthausen’s article) that supports the status quo.

And maybe that’s the hardest part about my thought experiment from earlier in this post. It’s easy for me to look back and consider how I’d react because history has shown me the right answers. I know which world leaders I should have supported or opposed. I know which events I should have attended and in which demonstrations I should have participated. Looking back is easy. It’s harder to consider the data at hand and always make the “right” decisions in real time. Instead, some may choose to delay responding and reacting. Or make the wrong decisions.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410-8415.
Schwerdt, G., & Wuppermann, A. C. (2011). Is traditional teaching really all that bad? A within-student between-subject approach. Economics of Education Review, 30(2), 365-379.
Walthausen, A. (2013). Don’t give up on the lecture. The Atlantic.



Context Matters

If you’re a teacher, I’m sure you’ve had this experience. You teach a lesson with one group of students and it works great. The students are motivated and engaged. The technology works correctly. The activities you planned all make sense to the students and support their development. The students even laugh at your jokes. It’s like the clouds have opened up. The sun is shining down on your classroom and you can hear harps playing. You nailed it.

But then, you try the same lesson again. Instead of triumphant success, the lesson falls flat. Students get lost in your explanations or the technology isn’t working. Or maybe the students aren’t as motivated to learn or maybe they’re less engaged. For whatever reason, it’s as if a host of thunderclouds descended upon your class and you can hear the sound of a sad trombone playing. The lesson was a bitter failure.

I’m outlining these divergent experiences because they demonstrate the power of context. I’ve been thinking about the power of context a lot recently. Some of this was sparked by a keynote presentation I attended over the weekend. The Teaching Professor conference was held in Atlanta and the opening keynote was delivered by Dr. Stephen Chew, a psychology professor from Samford University. If you ever have a chance to see Dr. Chew speak, don’t miss it. He’s easily one of the most entertaining and interesting presenters ever. You will not be disappointed.

In his keynote presentation, Dr. Chew discussed some of the myths and buzzwords that are common in education. He led over a thousand educators through an active (and sometimes dissonant) analysis of different instructional beliefs and practices. As he wrapped up his presentation, Dr. Chew advised the group that cognitive science has taught us a lot about how people learn. But it was up to us as teachers to recognize the correct context in which to apply the concepts. For instance, something that might work in an elementary classroom would never work in a collegiate one. Working with first year students in a general education classroom is different than teaching seniors in an upper level course in their major. To be an effective educator, you need to consider a host of contextual factors as you plan and lead a lesson. In short, context matters.

But this mantra doesn’t just apply to educators working in classrooms. As some readers may know, I lead the faculty development center at my institution. After working in the role for the last six years, I collaborated with a colleague to lead a session for new faculty developers at the Teaching Professor conference. While we provided a lot of different resources and suggestions for the attendees, one of the clear takeaways was that faculty developers needed to understand the context of the institution where they worked. Faculty development strategies that might work at a small liberal arts institution may not work at a large research university. Strategies that are effective at a community college may be less effective at a faith-based institution. Context matters.

One of the challenges I see is how we respond to these contextual factors. I worry that some may take this to mean that anything goes. Since we can’t control the contextual factors and can’t always predict the effectiveness of our work, the logic could go, why even try? We should just plan something, anything, and assign any failure to the context of the environment. But this shifts the locus of control away from our roles as educators.

From my perspective, recognizing the role that context can play makes our knowledge and expertise as educators so much more important. We need to be able to identify different constraints and plan accordingly. And be able to change course when we see a strategy isn’t being effective. While context definitely matters, pedagogical knowledge and experience matter more.