Revising the Record on Flipped Learning Incorporated

A few months ago, I published a blog post examining The Flipped Learning Global Initiative (FLGI) and its work in developing training standards for professional development activities related to flipped classrooms. My post was in response to a March 2018 article from Inside Higher Education where FLGI was featured. At the start of my post, I wrote, “I’m probably not going to make many friends with this post.” And while it took a few months, it’s clear that I ruffled some feathers.

Last week, Errol St. Clair Smith posted a comment on my original post. St. Clair Smith is the founder and director of global development of FLGI. You can read his comments here. But I thought I’d dedicate this post to revising the record where it was necessary.

In his comment, St. Clair Smith offers three main areas where he felt I misled my readers. Let me address these in detail.

1. No one has to pay to get clarification about the evolving definition of Flipped Learning. In my original post, I describe looking for the updated and unified definition of flipped learning that was described in the original Inside Higher Education article. When I searched the FLGI website, I found training videos, books and podcasts to purchase but I couldn’t find the unified definition. In fact, that definition is still in development. It will be released in September. Which brings us to St. Clair Smith’s second point.

2. The project is being managed by a nonprofit. FLGI is a for-profit entity. While FLGI offers flipped learning trainings and certifications, it offers these services at a cost. Another group, The Academy of Active Learning Arts and Sciences (AALAS), is actually managing the process of developing the unified flipped learning definition. Strangely, the AALAS isn’t mentioned in the original Inside Higher Education article. That might be due, in part, to the fact that the AALAS wasn’t created until May 2018. It was created by FLGI, and many FLGI members serve on the AALAS leadership team. Besides being the founder and director of global development of FLGI, St. Clair Smith also serves on the AALAS Board of Directors.

3. Finally, it’s always risky to attempt to define the “intentions” of others. This is in reference to the section where I write:

But that’s not the real motivation behind the Flipped Learning Global Initiative. It’s not just about advancing education or improving student learning. By packaging these practices together under a single catchy title, FLGI members have created a marketable brand that can be monetized and sold.

St. Clair Smith is correct. I don’t know the intentions or motivations of FLGI members. I don’t know if FLGI’s motivation is to create a marketable brand that can be monetized and sold. I also don’t know why FLGI chose to create AALAS or why its members lead both groups. As an educator, I don’t really know the nuances of non-profit vs. for-profit organizations or the advantages of having both groups working in concert. I also don’t know how the existence of these complementary organizations serves those entities or the greater educational community. But I have reservations.

But I’ve mentioned these reservations in the past. In my post, The Branded Teacher, I wrote about how relationships with corporations can impact teachers’ judgments and their use of technologies. And I’m not the only one who has these reservations. In that post, I quoted a Columbia University professor who worried that teachers may be “seduced to make greater use of the technology, given these efforts by tech companies.”

I want to point out one more issue that St. Clair Smith identifies.  He writes, “I’m going to assume that your intentions are honorable, but that your journalism skills are still evolving. As a proponent of truth, transparency, and full disclosure, we trust that you will publish this correction.”

I am a proponent of truth and transparency. In December 2012, I wrote this post that clearly disclosed what influences my work. I fully disclosed that I receive no financial rewards for what I post here or for the links I provide. When I review a technology or a service, I do it because I find it valuable as an educator. I don’t receive any compensation for the products or services I review. I also have not monetized my YouTube channel, nor do I earn any money from the advertising that WordPress includes in this free version.

In response to St. Clair Smith’s concerns, however, I’ve moved two older blog posts (Why I Blog and Full Disclosure) to the front page of this blog to make my motivations and relationships clear to ALL readers. In his pursuit of honor, truth, transparency and full disclosure, I wonder whether St. Clair Smith will choose to include this same information on the FLGI and AALAS websites.


Delayed Reactions

One of the first major studies on the harmful impacts of smoking cigarettes was published in the Journal of the American Medical Association. In their report, Dr. E. Cuyler Hammond and Dr. Daniel Horn described a 20-month study that followed the death rates of 188,000 male smokers and non-smokers. Hammond and Horn wrote:

“It was found that men with a history of regular cigarette smoking have a considerably higher death rate than men who have never smoked or men who have smoked only cigars or pipes.”

Looking at the data, the researchers concluded, “deaths from cancer were definitely associated with regular cigarette smoking.”

Hammond and Horn’s work was published in 1954.

I share this history lesson for a few reasons. First, I enjoy looking back at history and trying to learn from societal reactions to things. I like to place myself in different moments in time and consider how I would have reacted during historical events. Would I have opposed certain political figures? Would I have participated in different demonstrations? Would I have attended some historic event? While I don’t really know how I would react in those situations, I find the process reflective and introspective.

I find this research study on smoking particularly instructive. Despite the scope of the research and the definitive nature of the conclusions that the researchers drew, smoking rates remained stubbornly high for the next decade. According to the Centers for Disease Control and Prevention, over 51% of adult men and 34% of adult women still smoked in the mid-1960s. By that point, thousands of additional studies had been published identifying the harmful effects of smoking. In 1964, the Surgeon General released its historic report on smoking and health, which was soon followed by federally mandated health warnings on cigarette packages and, later, a ban on cigarette advertisements in broadcast media. But some people still continued to smoke.

In my thought experiment, I wonder how I would have reacted in 1954. I imagine that I’m a smoker and I hear Hammond and Horn’s research. Would I ignore the research and continue smoking? Or would I find the findings so compelling that I’d struggle through the difficult process of quitting the habit? Knowing my appreciation for empirical research, I’m certain I’d choose the latter.

I noted earlier that I had two reasons for sharing Hammond and Horn’s 1954 study. Beyond the reflective process of imagining my roles in history, I want to take a broader look at the delayed reactions that people can have to research. You’d think that after 60 years of research showing the negative impacts of smoking, no one would engage in the practice. According to 2018 statistics from the Centers for Disease Control and Prevention, however, about 15% of American adults still smoke regularly.

Please understand that I’m not trying to oversimplify addiction or look down on smokers. Addiction is a complex process that can’t be solved with a blog post, and that’s not my goal here. Instead, I’m trying to shed light on research and data and how we react and respond to them. Or how we don’t.

This brings me to an opinion piece that a respected colleague recently shared through social media. In a 2013 Atlantic article, Abigail Walthausen called on teachers not to “give up the lecture” and to “be role models” by standing in front of their classrooms and delivering content. At the time, Walthausen believed there was a growing sentiment against lecturing, and she sought to defend the practice. In the article, she drew on a 2011 Economics of Education Review study that examined data from the 2003 Trends in International Mathematics and Science Study (TIMSS) and showed that students in more lecture-based classrooms outperformed their peers in more problem-solving-based environments. From Walthausen’s perspective, this research provides strong support for “the ‘sage-on-the-stage’ model of education.”

It’s interesting that Walthausen chooses to bring in empirical research to support her position. Over the last decade, a growing body of research has supported active learning strategies over lecture-based instruction, and I’ve written about some of it on this blog over the years. One compelling meta-analysis examined 225 different studies on active learning in Science, Technology, Engineering and Mathematics (STEM) courses. The researchers found that students in lecture-based courses were 1.5 times more likely to fail than students in classes that utilized active learning. Across the studies, the average failure rate was 21.8% in classes that employed active learning and 33.8% in traditional lecture classroom environments. How compelling was the analysis? Being traditionally trained scientists, the researchers communicated their findings in language that other scientists would understand:

“If the experiments analyzed here had been conducted as randomized controlled trials of medical interventions, they may have been stopped for benefit—meaning that enrolling patients in the control condition might be discontinued because the treatment being tested was clearly more beneficial.”
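The headline ratio is easy to sanity-check from the reported failure rates. A minimal back-of-the-envelope sketch, using only the averages quoted above:

```python
# Sanity check of the failure-rate comparison reported in
# Freeman et al. (2014), using the averages quoted above.
fail_lecture = 0.338   # average failure rate, traditional lecture
fail_active = 0.218    # average failure rate, active learning

relative_risk = fail_lecture / fail_active
print(f"Lecture students were {relative_risk:.2f}x as likely to fail")
# → about 1.55, consistent with the "1.5 times more likely" claim
```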

One would think that this research would compel teachers to immediately change their practice. But that hasn’t happened in any large-scale way. The challenge with this delayed reaction is that, much like with the smoking research of the 1950s and ’60s, there is competing information being shared (like Walthausen’s article) that supports the status quo.

And maybe that’s the hardest part about my thought experiment from earlier in this post. It’s easy for me to look back and consider how I’d react because history has shown me the right answers. I know which world leaders I should have supported or opposed. I know which events I should have attended and in which demonstrations I should have participated. Looking back is easy. It’s harder to consider the data at hand and always make the “right” decisions in real time. Instead, some may choose to delay responding and reacting. Or make the wrong decisions.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410-8415.
Schwerdt, G., & Wuppermann, A. C. (2011). Is traditional teaching really all that bad? A within-student between-subject approach. Economics of Education Review, 30(2), 365-379.
Walthausen, A. (2013). Don’t give up on the lecture. The Atlantic.



Context Matters

If you’re a teacher, I’m sure you’ve had this experience. You teach a lesson with one group of students and it works great. The students are motivated and engaged. The technology works correctly. The activities you planned all make sense to the students and support their development. The students even laugh at your jokes. It’s like the clouds have opened up. The sun is shining down on your classroom and you can hear harps playing. Your lesson nailed it.

But then, you try the same lesson again. Instead of triumphant success, the lesson falls flat. Students get lost in your explanations or the technology isn’t working. Or maybe the students aren’t as motivated to learn or maybe they’re less engaged. For whatever reason, it’s as if a host of thunderclouds descended upon your class and you can hear the sound of a sad trombone playing. The lesson was a bitter failure.

I’m outlining these divergent experiences because they demonstrate the power of context. I’ve been thinking about the power of context a lot recently. Some of this was sparked by a keynote presentation I attended over the weekend. The Teaching Professor conference was held in Atlanta and the opening keynote was Dr. Stephen Chew, a psychology professor from Samford University. If you ever have a chance to see Dr. Chew speak, don’t miss it. He’s easily one of the most entertaining and interesting presenters ever. You will not be disappointed.

In his keynote presentation, Dr. Chew discussed some of the myths and buzzwords that are common in education. He led over a thousand educators through an active (and sometimes dissonant) analysis of different instructional beliefs and practices. As he wrapped up his presentation, Dr. Chew advised the group that cognitive science has taught us a lot about how people learn. But it is up to us as teachers to recognize the right contexts in which to apply those concepts. For instance, something that might work in an elementary classroom would never work in a collegiate one. Working with first-year students in a general education classroom is different than teaching seniors in an upper-level course in their major. To be an effective educator, you need to consider a host of contextual factors as you plan and lead a lesson. In short, context matters.

But this mantra doesn’t just apply to educators working in classrooms. As some readers may know, I lead the faculty development center at my institution. After working in the role for the last six years, I collaborated with a colleague to lead a session for new faculty developers at the Teaching Professor conference. While we provided a lot of different resources and suggestions for the attendees, one of the clear takeaways was that faculty developers needed to understand the context of the institution where they worked. Faculty development strategies that might work at a small liberal arts institution may not work at a large research university. Strategies that are effective at a community college may be less effective at a faith-based institution. Context matters.

One of the challenges I see is how we respond to these contextual factors. I worry that some may take this to mean that anything goes. Since we can’t control the contextual factors and can’t always predict the effectiveness of our work, the logic could go, why even try? We should just plan something, anything, and assign any failure to the context of the environment. But this shifts the locus of control away from our roles as educators.

From my perspective, recognizing the role that context can play makes our knowledge and expertise as educators so much more important. We need to be able to identify different constraints and plan accordingly. And be able to change course when we see a strategy isn’t being effective. While context definitely matters, pedagogical knowledge and experience matter more.

Raising the Floor

I’m teaching an online class with several graduate students enrolled in our online teaching program. In a discussion forum last week, one of the students brought up the use of rubrics. Since many of the students in this class are also practicing teachers in local schools, the rubric comment struck a nerve and sparked a lively discussion with the group. Looking across the comments from the class, it seems there are a lot of strong feelings (positively and negatively) about the use of rubrics.

For those readers who may be unfamiliar with the concept, a rubric is a tool that outlines the criteria for which student work will be assessed. A well-designed rubric provides a uniform standard for educators to evaluate subjective assignments which can make the assessment process easier. When shared with students prior to the start of an activity, a rubric can provide a road map for students so they know which areas of the assignment are the most important. Rubrics also inject transparency in the assessment process, allowing students to know exactly how they’ll be assessed for a given assignment.

While rubrics sound like a critical tool for teaching and learning, I find that few educators enjoy making them or using them. I attribute this to several reasons. First, good rubrics are hard to make. It can be difficult to capture the essence of an assignment in objective and observable terms. It can also be challenging to break up an assignment into specific criteria with clear levels of development and quality. While tools like iRubric and Rubistar can provide a good starting point, developing a good rubric requires a great deal of thought and energy. I also find that few educators hit the mark with their first version of a rubric. Most rubrics will need to go through multiple revisions before they’re really strong. Some rubrics may never get there.

Beyond the challenging development process, some educators also have reservations about how students respond to the use of rubrics. I’ll be the first to admit that rubrics can have a normalizing effect on students’ creativity. When the elements of an assignment are detailed clearly and objectively, rubrics have a way of “lowering the ceiling” of student work. When I provide rubrics for an assignment, I find that I get a lot of really good products from students but fewer “out-of-the-box,” “knock my socks off” creations. But I also get fewer poor student creations as well. In a way, rubrics work to “raise the floor” on student submissions. Since students know how they’ll be assessed, they have a clearer idea of the minimum expectations they’ll need to meet. Depending on the nature of the class, the assignment or the students, “raising the floor” may be enough of a reason to incorporate rubrics. While I doubt this rationale will make any educator fall in love with the concept of rubrics, it may promote their use.

Checking off Checklists

A few months ago, I posted about how my online students have been requesting that I add checklists to allow them to self-monitor their progress. It was on my list of things to do to improve my classes and I’m happy to report that I was able to incorporate checklists in my online classes that started a few weeks ago. Before we get to how they’ve been used, let’s review.

Checklists help students to be more metacognitive and to self-regulate their learning. Well-defined checklists can make expectations clear for students and help them monitor their progress in completing the expectations. When completing complex assignments, checklists can help students better understand the individual tasks embedded within the complexity. This is especially helpful in my online classes. While I like to think I’ve organized my classes pretty linearly, there are lots of moving parts each week. Checklists can reduce this chaos for students and help them focus on the specific aspects they need to complete.

Besides the direct connections to learning, checklists are also one of the ways to incorporate Universal Design for Learning (UDL) in your classes. One of the principles of UDL is “providing multiple means of action and expression.” This broad principle can be more easily understood when the supporting guidelines are considered. Checklists fall under the guideline for executive functioning and would help students “develop and act on plans to make the most out of learning” (CAST, 2018). Digging deeper into UDL, checklists help students set appropriate goals, strategically plan their work, manage course information and resources, and monitor their own progress. While checklists may seem like a simple strategy, they can have a huge impact on student learning.

As we enter the third week of my two online classes, I wanted to take a look to see whether students were using the checklists and whether there were any correlations with students’ academic performance. For each module overview, I included a checklist which I listed as a “self-assessment” and explained that students could use it to monitor their progress. I also explained that using the checklists was completely optional, but I stressed that students should use them to “stay on track” with course expectations.

Across the 35 students currently enrolled in my two online classes, 28 have consistently used the checklists for the first three modules. Only two students have chosen not to use the checklists at all. Looking at the performance of the students in the classes, the seven students who are either not using the checklists or using them inconsistently are on average performing 6-7% below the average in their classes. Definitely some interesting findings.

Before any reader gets too excited about the amazing powers of checklists, I think some restraint may be warranted. First off, this isn’t anywhere close to a well-designed research study. I basically looked into the statistics and saw that some students were using the checklists and others were not. The ones who were using them were doing well for the most part. The ones who were not using them weren’t doing as well. Just an anecdotal observation.
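For readers curious what this informal comparison looks like, here is a minimal sketch. The gradebook numbers are entirely invented; only the approach (grouping students by checklist use and comparing average scores) mirrors what I did:

```python
# Hypothetical sketch of the informal comparison described above.
# All scores are made up for illustration; the real data lives in
# the course gradebook.
grades = {
    "consistent checklist use": [88, 92, 85, 90, 87],
    "little or no checklist use": [80, 78, 83],
}

# Compute the average score for each group.
averages = {group: sum(scores) / len(scores) for group, scores in grades.items()}
for group, avg in averages.items():
    print(f"{group}: {avg:.1f}%")

# The gap between groups is the anecdotal "finding."
gap = averages["consistent checklist use"] - averages["little or no checklist use"]
print(f"gap: {gap:.1f} percentage points")
```

Of course, as noted above, a gap like this is correlational at best; the same students who use checklists also tend to log in more often.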

Expanding the lens, however, may allow for other observations. Overall, the students who were using the checklists were also the ones who logged in more often, read more of the posts from their peers and accessed course content more regularly. While I was hoping the checklists would be a way to support struggling students, it looks as if the highly motivated, Type A students were the ones who were actually using them. At least so far. I’ll revisit the data after the courses have ended and report back.

My Summer Reading List

This is always one of the more popular posts that I write each year. Each May, I share the academic books that I’m planning to tackle over the summer in preparation for the next academic year. If you’re interested in what I’ve read in past summers, definitely check out the links at the bottom of this post.

1. Mindful Tech: How to Bring Balance to our Digital Lives (Levy, 2017)
In full disclosure, I’ve actually already started reading this book. After seeing this text referenced at a few conference sessions, I felt like it was time to check it out. In the book, Levy, a professor at the Information School of the University of Washington, discusses ways to limit technology use and more mindfully engage with our devices. Part of my motivation to read this text is personal. I need to bring a little balance to my digital choices and step away from my smartphone more regularly. I’m also interested in how I can support more “mindful tech” use with my students and my children.

2. The New Education: How to Revolutionize the University to Prepare Students for a World in Flux (Davidson, 2017)
Cathy Davidson is one of those educational thinkers who is constantly promoting innovation in schools.  She follows this trajectory in this book by discussing how our current structure of higher education doesn’t prepare students for the new, information age economy. Davidson also offers suggestions for restructuring colleges and universities to better prepare students.

3. Algorithms of Oppression: How Search Engines Reinforce Racism (Noble, 2018)
A couple of colleagues are planning to organize a Faculty Learning Community (FLC) with this text in the Fall. Over the last few semesters, our university has offered several FLCs focusing on race-related topics. Last fall, we offered an FLC on Raising Race Questions (Michael, 2014) and this spring, we offered another FLC on Stamped from the Beginning: The Definitive History of Racist Ideas in America (Kendi, 2017). Both were tremendous successes and provided springboards for difficult conversations. I’m hoping to use this summer to get a jump start on the FLC.

4. iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, Less Happy – And Completely Unprepared for Adulthood-And What That Means for the Rest of Us (Twenge, 2017)
I have to admit that I’m usually skeptical of generational research that uses survey data to make broad generalizations about populations of people. Twenge, a psychology professor at San Diego State University, has made a career doing this kind of work. I purchased this book after a colleague gave a presentation on campus recently and I’m looking forward to interrogating the ideas that Twenge presents.

5. Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead (Brown, 2015)
This is another colleague recommendation. I wrote about Brené Brown a few years ago after seeing one of her TED talks. I’m looking forward to reading her book this summer and preparing myself to “strive valiantly and dare greatly.”

Summer Reading List 2017
Summer Reading List 2016
Summer Reading List 2015
Summer Reading List 2014

Perceptions and Reality

Across the years, I’ve written about the value of active learning numerous times. In 2014, I wrote about a comprehensive meta-analysis on STEM-related college classes. The study compiled data from 225 different studies on active learning in Science, Technology, Engineering and Mathematics related courses and found that students in lecture-based courses were 1.5 times more likely to fail than students in classes that utilized active learning. Across the studies, the average failure rates were 21.8% in classes that employed active learning and 33.8% in traditional lecture classroom environments. Based on the reported participation numbers across the studies, the researchers estimated “there would be over $3,500,000 in saved tuition dollars for the study population, had all students been exposed to active learning.”

I’m returning to this 2014 post and research because of a recent study that was published in Science (and reported on the Faculty Focus blog). Described as the “largest-ever observational study of undergraduate STEM education,” the study monitored almost 550 faculty teaching 700 courses at 25 colleges and universities in the United States and Canada. The results were pretty alarming: 55% of the STEM classroom interactions involved lecture-based instruction. Faculty Focus interviewed one of the researchers, Marilyne Stains from the University of Nebraska-Lincoln, and discussed some of the findings. In the post, Stains discussed how their research used direct observation rather than self-reported surveys.

“Surveys and self-reports are useful to get people’s perceptions of what they are doing,” Stains said. “If you ask me about how I teach, I might tell you, ‘I spend 50 percent of my class having students talk to each other.’ But when you actually come to my class and observe, you may find that it’s more like 30 percent. Our perception is not always accurate.”

And that’s where the study and the Faculty Focus article offer some assistance. In their research, Stains and her colleagues used a tool called COPUS (Classroom Observation Protocol for Undergraduate STEM) to conduct their observations. The tool was funded by the National Science Foundation and is available for free online so instructors can study their own instructional practices. There are even instructions for collecting data and a video to improve inter-rater reliability.  A motivated STEM instructor could have a colleague or two observe their classroom and better identify how their classes are actually taught. In the study, the researchers suggest conducting at least four observations to provide “a reliable characterization of instructional practices.”
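To give a flavor of how observation data of this kind turns into the percentages reported in the study: COPUS observers code what is happening in each two-minute interval of class, and the share of intervals containing a given code becomes the summary statistic. The intervals and code choices below are invented for illustration:

```python
# Hypothetical COPUS-style tally. COPUS codes classroom activity in
# two-minute intervals; here "Lec" marks intervals with lecturing,
# and codes like "RtW", "CQ" and "FUp" mark other activities.
# These intervals are invented for illustration.
intervals = [
    {"Lec"}, {"Lec"}, {"Lec", "RtW"}, {"CQ"}, {"Lec"},
    {"FUp"}, {"Lec"}, {"CQ"}, {"Lec"}, {"Lec"},
]

# Fraction of intervals in which lecturing was observed.
lecture_share = sum("Lec" in codes for codes in intervals) / len(intervals)
print(f"Lecturing observed in {lecture_share:.0%} of intervals")
# → 70% for this invented class session
```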

Another interesting finding from the study was that despite faculty identifying classroom layout and class size as being barriers to implementing active learning strategies, “flexible classroom layouts and small course sizes do not necessarily lead to an increase in student-centered practices.” Looking at the data, regardless of the classroom physical layout, didactic instructional strategies were employed in most of the observed lessons. Considering the overwhelming research on the academic benefits for active learning, I find this shocking. But so do the researchers. At the end of the article, they call for institutions to challenge “the status quo” and to revise “their tenure, promotion, and merit-recognition policies to incentivize and reward implementation of evidence-based instructional practices.”  And that’s a great starting point but I wonder whether it’s enough.

I’m reminded of another blog post I shared in 2016 where I discussed “alternative frameworks” and their impact on people’s beliefs and actions. In science, these alternative frameworks impact how we teach different concepts. For instance, I can tell students thousands of times that gravity acts on heavy and light objects the same way and that they fall (and accelerate) at the same rate when air resistance is disregarded. But their alternative frameworks get in the way. Their lived experiences have taught them differently, and my telling them doesn’t change their perceptions.

In a way, that’s what has happened with the active learning research. Despite hearing about the benefits of active learning, teachers perceive that lecture works better, and simply telling them otherwise won’t change their teaching. Using promotion, tenure and merit-recognition systems to force teachers to employ student-centered teaching may change their actions but won’t change their perceptions of how students learn. Maybe the COPUS system could be used to support a Scholarship of Teaching and Learning study so faculty and departments can research how active learning strategies impact student performance. It’s a little harder than just telling (or forcing) people to change their practice but, in the long run, it may confront both the perceptions and the reality of their work.