Navigating the Hype Cycle

A few weeks ago, a conference presenter shared Gartner’s Hype Cycle and discussed how the phases described the adoption and implementation of technologies in education. For those of you who may not be familiar, the Hype Cycle is broken down into five key phases:


Innovation Trigger: A new technology or idea is introduced. The technology is presented as the “next big thing” and is conceptualized as solving numerous problems. Significant publicity fuels awareness and the technology’s visibility.

Peak of Inflated Expectations: As awareness grows, success stories fuel increased publicity. Some schools choose to take action while others do not. Failed adoptions also begin to emerge.

Trough of Disillusionment: Interest begins to wane as adoptions and implementations fail to deliver. Some technology developers fail while others work to improve their product to deliver on early promises.

Slope of Enlightenment: The affordances of the technology begin to be better understood. Best practices and emerging research fuel new adoptions. Second- and third-generation products appear from technology providers.

Plateau of Productivity: More widespread and mainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology’s broad market applicability and relevance are clearly paying off.

If you’ve been working in education for some time, you can probably recognize this pattern for different instructional technologies and pedagogical practices over the years. Take gamification as an example. A few years ago, gamified learning environments were lauded as a major innovation in education. Gamified websites emerged that offered all sorts of educational benefits. Now, years later, those proposed benefits haven’t materialized. At least not yet. In its Education Hype Cycle report, Gartner identifies gamification as “climbing the slope.” Maybe research is starting to catch up to earlier promises.

But that’s the larger takeaway from the Hype Cycle. Its early phases are fueled mostly by techno-optimism and rhetoric promoting the technology. People see a shiny new object or instructional strategy and focus only on its promise and possibility. Authors and theorists write books that trumpet all sorts of benefits and opportunities. Technology companies set up stands at conferences and create ambassadors to advocate all of the (perceived) benefits. The hype fuels more adoption.

And then the crash happens. The technology enters the Trough of Disillusionment and early adopters start to question the cost of their efforts. They may question the huge financial investment, the impact on student learning or the professional development time lost. The trough is real.

But it’s during this phase that research starts to catch up. The techno-optimism is replaced by techno-pragmatism. An evidence base emerges that informs design and implementation. As more evidence and research emerges, we reach the Plateau of Productivity and can use the instructional practice or technology with efficacy.

As I think about the Hype Cycle, I’m left wondering which phase is the best time for a school to adopt a technology or an innovation. While I want schools to take risks and innovate, I also recognize that many “next big things” have disappeared after falling off the Peak of Inflated Expectations. If you’ve worked in schools for any significant length of time, you have a story of some initiative that failed. Many of those initiatives failed because they were poorly implemented. Others should never have been considered because of the lack of evidence to support their implementation. Their adoption happened way too early in the Hype Cycle.

So, how do we better navigate the Hype Cycle? One solution is to develop a healthy skepticism of newly proposed initiatives. I’m not advocating for anyone to avoid change or innovation, but we have to be cautious when advocating for widespread adoption of technologies or initiatives whose hype is largely built upon unsubstantiated claims. Rather than just being skeptical, however, it’s also important for us to take small-scale risks and create pilot studies that can help to inform the knowledge base. It’s through our contributions that new technologies and innovations develop an evidence base that helps them move beyond the hype.


My Biggest Mistake

A colleague shared an article from the Chronicle of Higher Education that focused on whether university teachers should take attendance. The author, Kelli Marshall, draws on an essay from Murray Sperber and argues that, despite the mixed research on mandatory attendance policies, university faculty should forget about taking attendance. Marshall’s argument boils down to three larger themes.

1. As instructors, we work with developing adults who need to take responsibility for their decisions.
2. Without an attendance policy, potentially disruptive students will choose not to attend.
3. Students choosing to attend can be viewed as an informal assessment of the class and the instructor.

While Marshall’s article offers some thought-provoking fodder, it also reminded me of an attendance-related decision that I made over a decade ago. As I enter my 25th year of teaching, I count this decision as one of the biggest instructional mistakes I’ve made in my career.

I was teaching high school physics at the time and I had the privilege of teaching in the classroom farthest from the cafeteria. This always presented challenges, especially when teaching classes immediately after a lunch period. Because of the distance and the number of students walking en masse from the cafeteria, a few students would come to class late. One year, the occasional tardiness became a little more routine. Several students showed up late every day, which I began to see as a complete affront to my power and legitimacy as a teacher. I had to do something.

I decided to institute a daily 10-point quiz that students completed when they walked into the room. If students were late, they wouldn’t get to take the quiz and their grades suffered. After a few days, I had completely lost the class. They lost respect for me, and many of the students who once enjoyed the class now saw it as a police state. By creating a policy to punish the students who were a few minutes late, I ended up punishing the whole class. I think some of the students never saw me the same way after I instituted that policy. That decision and the class’s reaction taught me two important lessons:

Pick your battles. Looking across the research that Marshall includes in her article, the results of instituting a mandatory attendance policy are mixed. One study, however, found that stressing attendance, rather than requiring it, improved students’ rate of attendance, their academic performance and their attitudes about the class. I created a punitive policy that didn’t reduce student tardiness but negatively impacted their perception of the class. Rather than instituting a punitive attendance policy, I should have just focused my attention on student learning and examined whether students’ tardiness had any impact on their academic performance.

Weigh the costs. In every instructional decision we make, there are larger impacts and costs. We may choose to spend more time on one subject, which causes us to spend less time on another. In this situation, I didn’t consider the larger cultural impacts of the decision and how it would affect students’ perception and attitude towards the class. And that’s the biggest lesson that I’ve learned from “my biggest mistake.” Classrooms are complex ecosystems that we as instructors need to manage with care. While some policies may seem to offer simple solutions, the resulting impacts are rarely simple.

A Good Conference Session?

In my role as the director of my institution’s Teaching and Learning Center, I attend a fair number of conferences that focus on faculty development, innovative pedagogies and emergent teaching practices. Over the years, I’ve also attended a number of face-to-face sessions and webinars to inform the types of programming that I could offer on campus and the evidence-based instructional practices I could promote with my colleagues. Although I’ve attended some great sessions over the years, I’ve also sat through many unrewarding presentations that lacked focus or didn’t present any real usable information. After attending a horrible session a few years ago, I penned a post entitled “Presenting to Colleagues” that attempted to offer some suggestions to inform the design and delivery of conference sessions. Reading back over the suggestions (complement your slides, don’t recite them; engage your audience; provide a roadmap early; etc.), it’s clear that I was focusing on the mechanics of presentations. After attending several great keynote sessions recently, I may have a different set of criteria to offer.

The Magna Teaching with Technology conference was held this weekend in Baltimore, MD. In full disclosure, I was the conference chair and helped to select the amazing keynotes that we heard. Julie Smith (author of Master the Media) and José Antonio Bowen (author of Teaching Naked) offered inspiring and insightful bookends to a Saturday full of thought-provoking sessions. It was Peter Doolittle’s Friday night plenary, however, that has me seeing conference presentations in new ways. In his keynote on Teaching, Learning, Technology, Memory and Research, Doolittle offered the audience three simple questions to use when attending one of the conference’s sessions:

  1. Where’s the processing?
  2. Where’s the design?
  3. Where’s the research?

While Doolittle offered this simple rubric as a way to assess the instructional practices that presenters offered, I thought it would be a good tool for creating strong conference presentations. While I know this won’t apply to many disciplinary conference sessions, if you’re facilitating a teaching and learning session, you should consider the following:

Where’s the processing?
In my original post, I argued that presenters needed to engage the audience. But engagement isn’t enough. Good presenters give attendees the opportunity to process the material being presented. This means more than providing five or ten minutes at the end of the session for questions. A simple strategy would be to build a few “think/pair/share” questions into your session. Get the attendees to make sense of what you’re presenting and to see how the content you’re sharing applies to them and their institutions.

Where’s the design?
Good conference sessions are designed to balance sharing information and fostering interaction. Learning, even during a conference session, is a social process and good facilitators design their sessions so that attendees learn from interacting with the content and with one another.

Where’s the research?
This is a big one for me. I want to see an evidence base behind the strategies and technologies being proposed. If someone is suggesting that attendees restructure an assignment, incorporate some novel instructional strategy or redesign an entire course, the presenter had better share a research base or offer an instructional framework to ground the work. Share your citations and offer any data that can show the impact of the strategies you’re sharing.

While I know this three-question rubric won’t solve every presentation misstep, it may help to make your session more rewarding for attendees. By focusing on the underlying educational processes at play in a conference session, you can make your session a better learning experience for all.

Principal-Agent Online?

A few weeks ago, I referenced a research study that examined retention and performance of students in online and onsite collegiate classes. While I discussed some of the main findings in another blog post, I’ve been really contemplating a quote the authors shared at the end of the paper. The authors write:

“online courses change the constraints and expectations on academic interactions. Professors and students do not interact face-to-face; they interact only by asynchronous written communication. Thus, students likely feel less oversight from their professors and less pressure to respond to professors’ questions. In the standard principal-agent problem, effort by the agent (student) falls as it becomes less observable to the principal (professor).” (Bettinger, Fox, Loeb & Taylor, 2017, p. 2873)

The authors identify that online students may feel less pressure and less motivation to participate because the professor isn’t physically present. As economists, the researchers connect this decrease in effort to the “principal-agent problem.” To be honest, prior to reading the study, I hadn’t heard of the principal-agent problem, so I looked it up. The Economic Times says the problem “arises when one party (agent) agrees to work in favor of another party (principal) in return for some incentives.” Economic comparisons like this are pretty common in higher education. We’re told to view our syllabi as “contracts” and we use student evaluations almost like businesses that survey their customers. Students even refer to a college degree as an investment in their future. With the pervasiveness of this economic verbiage in education, it’s not really that much of a stretch that these researchers would view grades as “incentives” and schooling as “work.” It’s the larger connection that Bettinger and his colleagues make that has me thinking.

In their explanation, a student’s effort is “less observable” in online education but I don’t know if that’s really the case. When I teach face-to-face classes, my students physically attend the class but I don’t really know whether they’ve read the material to prepare for class. Sure, I can do some sort of assessment of their learning but I’ve witnessed many students who try to fake their way through these. I’ve also witnessed my share of students who were significantly contributing to face-to-face discussions without really knowing anything about the content at hand.

And that’s my point. Effort is only observable by monitoring students’ participation. As teachers, we observe students’ contributions in classroom discussions and through assessments and monitor their learning. But this can be done in online and face-to-face environments. I would also argue that, in some ways, effort and participation may be more observable online. When I teach an online class, I know when a student hasn’t logged into the course for several days or hasn’t accessed assigned content. I can also see whether students have read or contributed posts to a discussion forum. Students’ participation is observable in the data that the learning management system collects.

That’s the other big takeaway from this research study. Bettinger and his colleagues argue that students need to feel “oversight from their professors” in their classes. Online instructors typically refer to this as “teaching presence” (Garrison, Anderson & Archer, 2000). Personally, I work hard to establish a presence in my online classes so students know that I’m there to monitor their participation, assess their learning and provide feedback for their growth. While the researchers identify this as a potential reason for the negative impact that the online classes in their study had on student performance and retention, I think other forces may be at play. While the principal-agent problem aligns with the larger incentive system that education represents, our classrooms are still social spaces where learning is fostered through interaction between students and instructors. Interestingly, these are not areas that Bettinger and his colleagues identify as factors in their work.

Bettinger, E., Fox, L., Loeb, S., & Taylor, E. S. (2017). Virtual Classrooms: How Online College Courses Affect Student Success. American Economic Review, 107(9), 2855-2875.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

My Rules of Tech

When I was in high school, Mr. Haser was one of my favorite teachers. Mixed between his lessons on chemical bonding and electron configuration, Mr. Haser would blend in lessons about life. Through the course of the academic year, he introduced his three self-proclaimed “Haser’s laws.” While it’s been over thirty years since I sat in Mr. Haser’s chemistry class, I can still recall each of his “laws.”

Haser’s first law: Hot glass looks like cold glass.
Haser’s second law: Your neighbor is dumber than you.
Haser’s third law: When in doubt, tell the truth.

While Haser’s first law is definitely subject specific, his other laws focus more on navigating the world honestly and with purpose.

In the spirit of Haser’s laws, I offer my Rules of Technology. While I’ve shared all of these in a class or presentation at some point, they’re not meant to solve all of your technological ills. Instead, they offer some lighthearted advice for navigating your digital life. If you have a technological rule to share, feel free to write a comment below.

Technology Rule #1: Technology will break your heart. If not today, someday soon. You know the scenario. You have a major assignment due or you’re finishing some big project. And then… your computer crashes and you lose everything. When we least expect it or need it, our hard drives fail and our Internet goes down. Rule #1 communicates the personal toll that technology can take on our lives and echoes that age-old adage: Save and save often.

Technology Rule #2: Focus on being effective. You can work on perfection after that. This is my take on “don’t let the perfect be the enemy of the good.” Don’t stress over selecting the best PowerPoint slide color or the best font. Craft an effective message that clearly articulates your objectives. If you’re creating some instructional materials, make sure they effectively support student learning. You can work on perfection after that.

Technology Rule #3: Almost everybody hates the sound of their recorded voice. This is actually somewhat research-based. Because of the structure of our inner ear, we hear our voices differently in person than we do through a recording. I offer this for all of those instructors who record screencasts for their students. Unless you’re William Shatner or Alex Trebek, you’re probably going to cringe when you hear your voice. It’s okay. You’re just like the rest of us.

Technology Rule #4: Wait to send that email! You know EXACTLY what I’m talking about. You’ve just received some snarky email from a student or a colleague and you’ve spent fifteen emotionally charged minutes crafting the perfect response. Wait. Just wait. Save the email to draft and review it tomorrow. With some time, you can evaluate whether you still feel the same way.

Technology Rule #5: Shut it off. Take a few minutes and shut off your phone and power down your laptop. Go take a walk or ride your bike. Our lives have become so digitally complex that we’re almost always connected. Shut it off. Some readers are probably worried that they’ll miss something important. Others are probably thinking how boring life would be without all of these devices. But research is emerging that shows that boredom can foster creativity and innovation, which is never a bad thing.

The Failures of Online Education?

A research paper has been circulating around my institution recently. Published in the September 2017 issue of American Economic Review, the research examined students who had taken online and face-to-face classes at a for-profit institution. Comparing the grades and retention rates of students enrolled in both formats, the authors write:

“We find that taking a course online, instead of in-person, reduces student success and progress in college. Grades are lower both for the course taken online and in future courses. Students are less likely to remain enrolled at the university.”

This is pretty compelling stuff, especially considering that the study involved four years of data with over 230,000 students enrolled in 168,000 sections of more than 750 different online classes. That’s a lot of data. The other novel part is that the university offers almost identical coursework in face-to-face and online formats. The classes use the same syllabus and the same assessments and assignments, and have similar class sizes. The big difference is that online participation happens mostly through recorded lessons and asynchronous discussion forums, while face-to-face classes involve real-time student-student and student-instructor interactions.

Looking at the study from a methodological or analytical perspective, it’s hard to critique it. The study involves thousands of students who self-enrolled in similar face-to-face and online classes. The study is also longitudinal in that it tracks students’ future performance and success over a four-year span. The researchers also wisely remove students from the participant pool who may not have been able to enroll in the face-to-face classes due to distance from the physical locations. The only real criticism that some of my colleagues had was that the research focused on a large for-profit university that some inferred was a predatory institution. Otherwise, it’s a solid study.

I guess what I’m saying is that the findings can’t be easily dismissed by applying a critical perspective. We might be able to question the study’s generalizability to other student populations, but we can’t dismiss the big takeaway: this study shows convincingly that online classes negatively impacted learning and future success for the students enrolled at this institution.

So, what do we do with this information? Some of my colleagues are seeing this research as evidence that we should do away with online classes. For a lot of financial and cultural reasons, I don’t see this as likely. Rather than dismissing online education outright, the study offers a roadmap for our institutions to do some self-study. The type of data collected for this study can be easily obtained by almost any institution. But how many of our campuses have done so? From my perspective, those are the real questions each of us needs to ask on our respective campuses:

  • How are online classes serving our students’ learning needs?
  • How can we be doing it better?

It’s easy to answer these in the abstract. Instead, we should use this study to start a larger, evidence-based conversation on our campuses about how we can close any performance gaps that our online students may be experiencing and work institutionally to provide the best online learning environments for them.

The Branded Teacher

I want to start this post by conveying my deepest respect for teachers. Over my 25 years of teaching in K-12 and higher education environments, I’ve worked with literally thousands of innovative and dedicated professionals. They spend countless hours creating lessons and grading papers and often spend hundreds of dollars of their own money on classroom materials. They deserve our admiration and support.

I have concerns, though. But not with teachers’ quality or their dedication. Rather, I’m concerned about a growing trend in schools and in professional conferences: the branded teacher. If you know some teachers in schools, you likely know a Google Certified Innovator or an Apple Distinguished Educator. Or maybe you know a Seesaw Teacher Ambassador or a Microsoft Innovative Educator Expert. These are just a few of the branding programs that big corporations have developed with educators. While these programs offer amazing professional development opportunities for teachers, I worry about the potential influence that these branding relationships could have on the profession, on our schools and on our students.

I first took notice of the potential influence of these branding relationships a few years ago when I served on the review committee for a statewide educational technology conference. As I reviewed conference proposals, I could see that some presentations appeared almost as if they were commercials for a specific technology. On some, a company representative was even listed as a co-presenter. After I raised concerns to the conference organizers, we tried to develop a more transparent review process to require proposers to disclose any existing branding relationships. The practice became pervasive enough that I chose to discontinue reviewing proposals for that conference.

One may ask, “So, what’s the big deal?” As I mentioned earlier, I have tremendous respect for teachers and I celebrate their efforts for professional growth and recognition. My concern lies with the potential influence these branding relationships can have on our schools. But I’m not the only one. Last week, the New York Times published an article detailing how widespread these branding relationships are and how some lawmakers and education experts have concerns. In the article, a Columbia University professor worries that some teachers can be “seduced to make greater use of the technology, given these efforts by tech companies.” A Maine attorney general explained, “any time you are paying a public employee to promote a product in the public classroom without transparency, then that’s problematic.”

As I’ve mentioned in the past, I am unaffiliated with any corporation. I serve on the advisory board of two conferences, but I regularly disclose that information when I’m working with colleagues or when I’m blogging about my experiences with those groups. I have chosen to remain unaffiliated because I didn’t want my students or my colleagues to question my opinions or my advice.  Whether good or bad, my recommendations are not built on any relationships I have with any company, corporation or group.  They are my own.

To be clear, I’m not criticizing any teacher for developing a branding relationship with a company. For some schools, a teacher’s participation in a branding program can help the district acquire much needed technology or supplies. Also, with the low salaries that some teachers are paid, I totally understand their desire to seek additional compensation. But I worry about the ethical implications these relationships create. For instance, when I go for a medical check-up, I would hope that any prescription or treatment my doctor recommends would be based on my needs as a patient and not on the doctor’s prior relationship with a pharmaceutical company. But that might not be the case. In a 2016 study of 280,000 doctors, researchers found that physicians’ “receipt of industry-sponsored meals was associated with an increased rate of prescribing the promoted brand-name medication to patients.” I think that many people would find that level of influence concerning.

And that’s my concern about branding relationships in education. Studies have found that teachers make over 1,500 educational decisions each day. I worry that too many of those decisions are guided by the tacit influence of branding relationships with corporations rather than by best practices or educational research.