Zipper Merges and Evidence-Based Practice

I drive a lot.

I’ve mentioned this in previous blog posts but I feel the need to mention it again. I drive a lot.

Driving a lot means that I get to experience certain driver behaviors more regularly than many others. Almost daily, I’ll encounter the people who drive in the left lane without passing. I’ll see the people who drive for miles and miles with a blinking turn signal. I’ll also see those people who weave in and out of traffic and try to gain whatever minuscule advantage they can.

Across all of the different driver behaviors, there is one that I have taken on as my own personal mission: the zipper merge. If you’re not familiar with the term, the zipper merge relates to lane closings on a highway. You probably know the situation. You’re driving along and you see a sign that says, “Right lane closed in two miles.” This usually starts the countdown. Another mile down the road, you’ll see another sign that says, “Right lane closed in one mile.” In case you missed the previous two signs, you’ll be alerted again a half mile, a quarter mile and 1,000 feet before the lane closure. For obvious safety reasons, departments of transportation are exceedingly vigilant about communicating pending lane closures.

The challenge, however, is what you as the driver do as you encounter the numerous signs. Almost inexplicably, many drivers choose to merge as soon as they see the first sign. Maybe they don’t want to seem rude by waiting until the last minute to merge. Or maybe they were taught to merge when they see the first sign. Or maybe they just follow the actions of the majority of the other drivers on the road. Whatever their rationale, they merge early, usually a mile or more before the lane closure.

But here’s the bigger challenge. Research shows that traffic moves more quickly if drivers wait until the merge point. Many departments of transportation officially recommend the “zipper merge.” In practice, this means drivers should stay in their lanes until the lane closure and then take turns merging. The name comes from how the merging traffic, viewed from above, resembles the teeth of a zipper coming together. A study conducted by the Minnesota Department of Transportation found that the practice reduces the overall length of a backup by 40-50%. Research also suggests the practice makes merging safer for everyone.

I’ve been an advocate of the zipper merge for a while. For some reason, however, the practice is viewed as rude, aggressive and ineffective. Despite numerous states advocating for the zipper merge as a driving practice, many people still merge early and cast an angry eye at those of us who wait until the lane closure to merge. I’ve gotten into my fair share of social media debates with friends and colleagues who look down on us zipper mergers. I’ve also received quite a few offensive hand gestures, honks and curses from neighboring drivers who chose to merge early.

This brings me to the larger point of this post. The zipper merge is but one example of how data and evidence don’t always align with practice. Despite research showing it to be safer and more efficient, the zipper merge is not widely used. But that’s the same result for a host of other evidence-based practices. As humans, we don’t always do the “right” thing.

Bringing this back to teaching and learning, we have to be sure that the practices we choose align with data and evidence. Give prompt feedback for growth. Incorporate active learning instead of lecturing. Provide explicit learning objectives and outline clear expectations. While I don’t know if any of these rise to the cultural dissonance created by the “zipper merge,” there’s a lot of evidence demonstrating the efficacy of these teaching practices. We just need to get instructors to merge into the “right” lane.


Humanity and UDL

In my role as the director of my campus’s teaching and learning center, I asked a colleague to lead a workshop on Universal Design for Learning last week. Around fifty college instructors, administrators and staff attended the workshop, which should help to provide a strong tailwind for our institution to re-examine our curriculum and the instructional approaches we use with our students.

While I’ve written about UDL principles before, I still learned so much through the session. While the workshop facilitator was great, it was also interesting to hear my colleagues’ reactions to UDL. Midway through the first hour of the workshop, another faculty member asked, “Aren’t you really just asking us to embrace our humanity?” It was a surprising comment but it definitely gets at the heart of what UDL promotes.

In a lot of ways, UDL isn’t about design at all. It’s about planning for the human experience and providing opportunities for our students to navigate their learning journeys as individuals. Sure, as educators, we have to provide the instructional materials for our students to learn. We need to be mindful, however, that there are multiple pathways for students to engage with content and demonstrate their learning. While this type of instruction may seem foreign to some, it’s definitely in line with the human experience.

Walk around any college campus on a sunny day and you’ll see numerous modes of transportation being used. You’ll see people walking and jogging. You’ll see a few students hectically sprinting to class while others leisurely walk hand-in-hand with a loved one. You’ll also see students and faculty using bicycles, skateboards and wheelchairs. Each of these modes of transportation helps the individual more easily navigate the college grounds and get from place to place. The great part is that the sidewalks are there to provide common lanes despite the variety of means of navigation.

Look a little closer, however, and you’ll see areas where pathways are formed off of the designed routes. Despite paved sidewalks crisscrossing the collegiate landscape, some people choose to create a route that’s better for them. Architects call these pathways “desire paths” and they’re part of the human experience. As we have skateboarders, bicyclists, joggers and walkers using the designed routes, we also have those people who get off the beaten path and take a “road less traveled.”

In the process of writing this post, I received an email from a faculty member who attended last week’s workshop. She wrote about a student in one of her classes who struggled with reading and was falling behind with the assigned content for each class. Because of the workshop, however, she taught the student how to use text-to-speech technology to convert some of the readings to audio. It was “a real light bulb moment” for the student.

And that’s the promise of UDL. In a lot of ways, we’re the architects of our students’ learning experiences. Like the designers of our collegiate grounds, we have to provide the space for our students to choose their own modes of transportation or develop those “desire paths” in our classrooms. UDL doesn’t mean we should decrease rigor or lower expectations. It just means that we have to “embrace our humanity” and be open to all of the possible ways to navigate our course content.

Handle with Care

Regular readers of this blog will probably remember that I’m a comic book geek. Over the life of this blog, I’ve written about teaching like Batman, searching for rare comics and creating comic books online. This week, I’m going to channel my inner Uncle Ben.

“With great power comes great responsibility.”

If you’re a comic book reader, you’ll recognize this quote from the Spider-Man series. Peter Parker’s uncle explains to him that being in a powerful position requires him to use that power responsibly. At the time, Uncle Ben doesn’t know that Peter is actually Spider-Man. He’s simply offering some sage advice to the teenager. And then Uncle Ben is tragically killed by someone Peter Parker/Spider-Man could have stopped earlier in the issue had he only intervened.

I share this quote this week because of the powerful roles we have as teachers. A few years ago, Inside Higher Ed featured a study in which 100 students were interviewed at an unnamed institution. Undergraduates were more likely to major in a field if they had an inspiring and caring faculty member in an introductory course. Students were also equally likely to write off an entire field if they had a single negative experience with a professor. How we interact with our students can change the course of their academic careers. That’s powerful stuff.

While I’ve shared this research before, the power of our roles has been really apparent to me recently. I’m on “special assignment” this semester as our College of Education adopts new assessments for our teacher candidates. One of the roles of this position is to oversee formal reviews with teacher candidates who have received unsatisfactory assessments. Depending on the nature of the assessment and the circumstances involved, a formal review can result in a teacher candidate being removed from the program. With the serious outcomes at play, a formal review can be a difficult process for teacher candidates.

But it can be difficult for faculty too. Deciding a student’s academic fate is a harrowing experience and the situations are rarely clear-cut. As we navigate these decisions, we’re faced with the power and the responsibility of our roles. And that brings us to the title of this post. In many cases, we’re still dealing with developing adults. My cognitive science friends like to remind me that a person’s prefrontal cortex continues to develop until age 25 or so. The prefrontal cortex is the section of the brain that is believed to control risk management and rational decision-making. As we interact with students and influence their futures, we have to handle them with care.

Navigating the Hype Cycle

A few weeks ago, a conference presenter shared Gartner’s Hype Cycle and discussed how the phases described the adoption and implementation of technologies in education. For those of you who may not be familiar, the Hype Cycle is broken down into five key phases:


Innovation Trigger: A new technology or idea is introduced. The technology is presented as the “next big thing” and is conceptualized as solving numerous problems. Significant publicity fuels awareness and the technology’s visibility.

Peak of Inflated Expectations: As awareness grows, success stories fuel increased publicity. Some schools choose to take action while others do not. Failed adoptions also begin to emerge.

Trough of Disillusionment: Interest begins to wane as adoptions and implementations fail to deliver. Some technology developers fail while others work to improve their product to deliver on early promises.

Slope of Enlightenment: The affordances of the technology begin to be better understood. Best practices and emerging research fuel new adoptions. Second- and third-generation products appear from technology providers.

Plateau of Productivity: More widespread and mainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology’s broad market applicability and relevance are clearly paying off.

If you’ve been working in education for some time, you can probably recognize this pattern for different instructional technologies and pedagogical practices over the years. Take gamification as an example. A few years ago, gamified learning environments were lauded as a major innovation in education. Gamified websites emerged that offered all sorts of educational benefits. Now, years later, those proposed benefits haven’t materialized. At least not yet. Gartner’s Education Hype Cycle Report identifies gamification as “climbing the slope.” Maybe research is starting to catch up to earlier promises.

But that’s the larger takeaway from the Hype Cycle. Early phases of the Hype Cycle are fueled mostly by techno-optimism and rhetoric promoting the technology. People see a shiny new object or instructional strategy and only see the promise and possibility. Authors and theorists write books that trumpet all sorts of benefits and opportunities. Technology companies set up stands at conferences and create ambassadors to advocate for all of the (perceived) benefits. The hype fuels more adoption.

And then the crash happens. The technology enters the Trough of Disillusionment and early adopters start to question the cost of their efforts. They may question the huge financial investment or the impact on student learning or the lost professional development. The trough is real.

But it’s during this phase that research starts to catch up. The techno-optimism is replaced by techno-pragmatism. An evidence base emerges that informs design and implementation. As more evidence and research emerges, we reach the Plateau of Productivity and can use the instructional practice or technology with efficacy.

As I think about the Hype Cycle, I’m left wondering which phase is the best time for a school to adopt a technology or an innovation. While I want schools to take risks and innovate, I also recognize that many “next big things” have disappeared after falling off of the Peak of Inflated Expectations. If you’ve worked in schools for any significant length of time, you have a story of some initiative that failed. Many of those initiatives failed because they were poorly implemented. Others should have never been considered because of the lack of evidence to support their implementation. Their adoption happened way too early in the Hype Cycle.

So, how do we better navigate the Hype Cycle? One solution is to develop a healthy skepticism of new initiatives that are proposed. I’m not advocating for anyone to avoid change or innovation but we have to be cautious when advocating for widespread adoption of technologies or initiatives whose hype is largely built upon unsubstantiated claims. Rather than just being skeptical, however, it’s also important for us to take small-scale risks and create pilot studies that can help to inform the knowledge base. It’s through our contributions that new technologies and innovations develop an evidence base that helps them move beyond the hype.

My Biggest Mistake

A colleague shared an article from the Chronicle of Higher Education that focused on whether university teachers should take attendance. The author, Kelli Marshall, draws on an essay from Murray Sperber and argues that, despite the mixed research on mandatory attendance policies, university faculty should forget about taking attendance. Marshall’s argument boils down to three larger themes.

1. As instructors, we work with developing adults who need to take responsibility for their decisions.
2. Without an attendance policy, potentially disruptive students will choose not to attend.
3. Students choosing to attend can be viewed as an informal assessment of the class and the instructor.

While Marshall’s article offers some thought provoking fodder, it also reminded me of an attendance-related decision that I made over a decade ago. As I enter my 25th year of teaching, I count this decision as one of the biggest instructional mistakes I’ve made in my career.

I was teaching high school physics at the time and I had the privilege of teaching in a classroom that was the farthest point from the cafeteria. This always presented challenges, especially when teaching classes immediately after a lunch period. Because of the distance and the number of students walking en masse from the cafeteria, a few students would come to class late. One year, the occasional tardiness became a little more routine. Several students showed up late every day, which I began to see as a complete affront to my power and legitimacy as a teacher. I had to do something.

I decided to institute a daily 10-point quiz that students completed when they walked into the room. If students were late, they wouldn’t get to take the quiz and their grade was impacted. After a few days, I had completely lost the class. They lost respect for me and many of the students who once enjoyed the class now saw it as a police state. By creating a policy to punish the students who were a few minutes late, I ended up punishing the whole class. I think some of the students never saw me the same way after I instituted that policy. That decision and the class’s reaction taught me two important lessons:

Pick your battles. Looking across the research that Marshall includes in her article, there are mixed results on instituting a mandatory attendance policy. One study, however, found that stressing attendance rather than requiring it improved students’ rate of attendance, their academic performance and their attitudes about the class. I created a punitive policy that didn’t improve student tardiness but negatively impacted their perception of the class. Rather than instituting a punitive attendance policy, I should have just focused my attention on student learning and examined whether students’ tardiness had any impact on their academic performance.

Weigh the costs. In every instructional decision we make, there are larger impacts and costs. We may choose to spend more time on one subject, which causes us to spend less time on another. In this situation, I didn’t consider the larger cultural impacts of the decision and how it would affect students’ perception and attitude towards the class. And that’s the biggest lesson that I’ve learned from “my biggest mistake.” Classrooms are complex ecosystems that we as instructors need to manage with care. While some policies may seem to offer simple solutions, the resulting impacts are rarely simple.

A Good Conference Session?

With my role as the director of my institution’s Teaching and Learning Center, I attend a fair number of conferences that focus on faculty development, innovative pedagogies and emergent teaching practices. Over the years, I’ve also attended a number of face-to-face sessions and webinars to inform the types of programming that I could offer on campus and the evidence-based instructional practices I could promote with my colleagues. Although I’ve attended some great sessions over the years, I’ve also sat through many unrewarding presentations that lacked focus or didn’t present any real usable information. After attending a horrible session a few years ago, I penned a post entitled “Presenting to Colleagues” that attempted to offer some suggestions to inform the design and delivery of conference sessions. Reading back over the suggestions (complement your slides, don’t recite them; engage your audience; provide a roadmap early; etc.), it’s clear that I was focusing on the mechanisms of presentations. After attending several great keynote sessions recently, I may have a different set of criteria to offer.

The Magna Teaching with Technology conference was held this weekend in Baltimore, MD. In full disclosure, I was the conference chair and helped to select the amazing keynotes that we heard. Julie Smith (author of Master the Media) and José Antonio Bowen (author of Teaching Naked) offered inspiring and insightful bookends to a Saturday full of thought-provoking sessions. It was Peter Doolittle’s Friday night plenary, however, that has me seeing conference presentations in new ways. In his keynote on Teaching, Learning, Technology, Memory and Research, Doolittle offered the audience three simple questions to use when attending one of the conference’s sessions:

  1. Where’s the processing?
  2. Where’s the design?
  3. Where’s the research?

While Doolittle offered this simple rubric as a way to assess the instructional practices that presenters offered, I thought it would be a good tool for creating strong conference presentations. While I know this won’t apply to many disciplinary conference sessions, if you’re facilitating a teaching and learning session, you should consider the following:

Where’s the processing?
In my original post, I argued that presenters needed to engage the audience. But engagement isn’t enough. Good presenters give attendees the opportunity to process the material being presented. This means more than providing five or ten minutes at the end of the session for questions. A simple strategy would be to build a few “think/pair/share” questions into your session. Get the attendees to make sense of what you’re presenting and to see how the content you’re sharing applies to them and their institution.

Where’s the design?
Good conference sessions are designed to balance sharing information and fostering interaction. Learning, even during a conference session, is a social process and good facilitators design their sessions so that attendees learn from interacting with the content and with one another.

Where’s the research?
This is a big one for me. I want to see an evidence base behind the strategies and technologies being proposed. If someone is suggesting that attendees restructure an assignment, incorporate some novel instructional strategy or redesign an entire course, the presenter had better be sharing some larger research base or offering some larger instructional framework to ground their work. Share your citations and offer any data that can show the impact of the strategies you’re sharing.

While I know this three-question rubric won’t solve every presentation misstep, it may help to make your session more rewarding for attendees. By focusing on the underlying educational processes at play in a conference session, you can make your session a better learning experience for all.

Principal-Agent Online?

A few weeks ago, I referenced a research study that examined retention and performance of students in online and onsite collegiate classes. While I discussed some of the main findings in another blog post, I’ve been really contemplating a quote the authors shared at the end of the paper. The authors write:

“online courses change the constraints and expectations on academic interactions. Professors and students do not interact face-to-face; they interact only by asynchronous written communication. Thus, students likely feel less oversight from their professors and less pressure to respond to professors’ questions. In the standard principal-agent problem, effort by the agent (student) falls as it becomes less observable to the principal (professor).” (Bettinger, Fox, Loeb & Taylor, 2017, p. 2873)

The authors identify that online students may feel less pressure and less motivation to participate because the professor isn’t physically present. As economists, the researchers connect this decrease in effort to the “principal-agent problem.” To be honest, prior to reading the study, I hadn’t heard of the principal-agent problem, so I looked it up. The Economic Times says the problem “arises when one party (agent) agrees to work in favor of another party (principal) in return for some incentives.” Economic comparisons like this are pretty common in higher education. We’re told to view our syllabi as “contracts” and we use student evaluations almost like businesses that survey their customers. Students even refer to a college degree as an investment in their future. With the pervasiveness of this economic verbiage in education, it’s not really that much of a stretch that these researchers would view grades as “incentives” and schooling as “work.” It’s the larger connection that Bettinger and his colleagues make that has me thinking.

In their explanation, a student’s effort is “less observable” in online education but I don’t know if that’s really the case. When I teach face-to-face classes, my students physically attend the class but I don’t really know whether they’ve read the material to prepare for class. Sure, I can do some sort of assessment of their learning but I’ve witnessed many students who try to fake their way through these. I’ve also witnessed my share of students who were significantly contributing to face-to-face discussions without really knowing anything about the content at hand.

And that’s my point. Effort is only observable by monitoring students’ participation. As teachers, we observe students’ contributions in classroom discussions and through assessments and monitor their learning. But this can be done in online and face-to-face environments. I would also argue that, in some ways, effort and participation may be more observable online. When I teach an online class, I know when a student hasn’t logged into the course for several days or hasn’t accessed assigned content. I can also see whether students have read or contributed posts to a discussion forum. Students’ participation is observable in the data that the learning management system collects.

That’s the other big takeaway from this research study. Bettinger and his colleagues argue that students need to feel “oversight from their professors” in their classes. Online instructors typically refer to this as “teaching presence” (Garrison, Anderson & Archer, 2000). Personally, I work hard to establish a presence in my online classes so students know that I’m there to monitor their participation, assess their learning and provide feedback for their growth. While the researchers identify this as a potential reason for the negative impact that the online classes in their study had on student performance and retention, I think other forces may be at play. While the principal-agent problem aligns with the larger incentive system that education represents, our classrooms are still social spaces where learning is fostered through interaction between students and instructors. Interestingly, these are not areas that Bettinger and his colleagues identify as factors in their work.

Bettinger, E., Fox, L., Loeb, S., & Taylor, E. S. (2017). Virtual Classrooms: How Online College Courses Affect Student Success. American Economic Review, 107(9), 2855-2875.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.