Raising the Floor

I’m teaching an online class with several graduate students enrolled in our online teaching program. In a discussion forum last week, one of the students brought up the use of rubrics. Since many of the students in this class are also practicing teachers in local schools, the rubric comment struck a nerve and sparked a lively discussion with the group. Looking across the comments from the class, it seems there are a lot of strong feelings (positive and negative) about the use of rubrics.

For those readers who may be unfamiliar with the concept, a rubric is a tool that outlines the criteria by which student work will be assessed. A well-designed rubric provides a uniform standard for educators to evaluate subjective assignments, which can make the assessment process easier. When shared with students prior to the start of an activity, a rubric can provide a road map so students know which areas of the assignment are the most important. Rubrics also inject transparency into the assessment process, allowing students to know exactly how they’ll be assessed for a given assignment.

While rubrics sound like a critical tool for teaching and learning, I find that few educators enjoy making them or using them. I attribute this to several reasons. First, good rubrics are hard to make. It can be difficult to capture the essence of an assignment in objective and observable terms. It can also be challenging to break up an assignment into specific criteria with clear levels of development and quality. While tools like iRubric and Rubistar can provide a good starting point, developing a good rubric requires a great deal of thought and energy. I also find that few educators hit the mark with their first version of a rubric. Most rubrics will need to go through multiple revisions before they’re really strong. Some rubrics may never get there.

Beyond the challenging development process, some educators also have reservations about how students respond to the use of rubrics. I’ll be the first to admit that rubrics can have a normalizing effect on students’ creativity. When the elements of an assignment are detailed clearly and objectively, rubrics have a way of “lowering the ceiling” of student work. When I provide rubrics for an assignment, I find that I get a lot of really good products from students but fewer “out-of-the-box,” “knock my socks off” creations. But I also get fewer poor student creations. In a way, rubrics work to “raise the floor” of student submissions. Since students know how they’ll be assessed, they have a clearer sense of the minimum expectations. Depending on the nature of the class, the assignment or the students, “raising the floor” may be enough of a reason to incorporate rubrics. While I doubt this rationale will make any educator fall in love with the concept of rubrics, it may promote their use.


Checking off Checklists

A few months ago, I posted about how my online students have been requesting that I add checklists to allow them to self-monitor their progress. It was on my list of things to do to improve my classes and I’m happy to report that I was able to incorporate checklists in my online classes that started a few weeks ago. Before we get to how they’ve been used, let’s review.

Checklists help students to be more metacognitive and to self-regulate their learning. Well-defined checklists can make expectations clear for students and help them monitor their progress in completing the expectations. When completing complex assignments, checklists can help students better understand the individual tasks embedded within the complexity. This is especially helpful in my online classes. While I like to think I’ve organized my classes pretty linearly, there are lots of moving parts each week. Checklists can reduce this chaos for students and help them focus on the specific aspects they need to complete.

Besides the direct connections to learning, checklists are also one of the ways to incorporate Universal Design for Learning (UDL) in your classes. One of the principles of UDL is “providing multiple means of action and expression.” This broad principle can be more easily understood when the supporting guidelines are considered. Checklists fall under the guideline for executive functioning and would help students “develop and act on plans to make the most out of learning” (CAST, 2018). Digging deeper into UDL, checklists help students set appropriate goals, strategically plan their work, manage course information and resources, and monitor their own progress. While checklists may seem like a simple strategy, they can have a huge impact on student learning.

As we enter the third week of my two online classes, I wanted to take a look to see whether students were using the checklists and whether there were any correlations with students’ academic performance. For each module overview, I included a checklist which I listed as a “self-assessment” and explained that students could use it to monitor their progress. I also explained that using the checklists was completely optional but I stressed that students should use them to “stay on track” with course expectations.

Across the 35 students currently enrolled in my two online classes, 28 have consistently used the checklists for the first three modules. Only two students have chosen not to use the checklists at all. Looking at the performance of the students in the classes, the seven students who are either not using the checklists or using them inconsistently are on average performing 6-7% below the average in their classes. Definitely some interesting findings.
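The comparison described above is simple enough to sketch in a few lines of code. This is just an illustration of the kind of informal check involved; the grades below are made up, not the actual class data.

```python
# Hypothetical illustration of the comparison described above: split
# students by checklist use and compare their average grades.
# All grade values here are invented for the sake of the sketch.
users = [92, 88, 95, 90, 85, 91, 87]   # grades of consistent checklist users
non_users = [80, 84, 78]               # grades of students not using them

def average(grades):
    return sum(grades) / len(grades)

gap = average(users) - average(non_users)
print(f"Checklist users average {gap:.1f} points higher")
```

With real gradebook exports, the same two-group comparison takes only a minute, which makes it easy to repeat each semester.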

Before any reader gets too excited about the amazing powers of checklists, I think some restraint may be warranted. First off, this isn’t anywhere close to a well-designed research study. I basically looked into the statistics and saw that some students were using the checklists and others were not. The ones who were using them were doing well for the most part. The ones who were not using them weren’t doing as well. Just an anecdotal observation.

Expanding the lens, however, may allow for other observations. Overall, the students who were using the checklists were also the ones who logged in more often, read more of the posts from their peers and accessed course content more regularly. While I was hoping the checklists would be a way to support struggling students, it looks as if the highly motivated, Type A students were the ones who were actually using them. At least so far. I’ll revisit the data after the courses have ended and report back.

My Summer Reading List

This is always one of the more popular posts that I write each year. Each May, I share the academic books that I’m planning to tackle this summer in preparation for the next academic year. If you’re interested in what I’ve read in past summers, definitely check out the links at the bottom of this post.

1. Mindful Tech: How to Bring Balance to our Digital Lives (Levy, 2017)
In full disclosure, I’ve actually already started reading this book. After seeing this text referenced at a few conference sessions, I felt like it was time to check it out. In the book, Levy, a professor at the Information School of the University of Washington, discusses ways to limit technology use and more mindfully engage with our devices. Part of my motivation to read this text is personal. I need to bring a little balance to my digital choices and step away from my smartphone more regularly. I’m also interested in how I can support more “mindful tech” use with my students and my children.

2. The New Education: How to Revolutionize the University to Prepare Students for a World in Flux (Davidson, 2017)
Cathy Davidson is one of those educational thinkers who is constantly promoting innovation in schools.  She follows this trajectory in this book by discussing how our current structure of higher education doesn’t prepare students for the new, information age economy. Davidson also offers suggestions for restructuring colleges and universities to better prepare students.

3. Algorithms of Oppression: How Search Engines Reinforce Racism (Noble, 2018)
A couple of colleagues are planning to organize a Faculty Learning Community (FLC) with this text in the Fall. Over the last few semesters, our university has offered several FLCs focusing on race-related topics. Last fall, we offered an FLC on Raising Race Questions (Michael, 2014) and this spring, we offered another FLC on Stamped from the Beginning: The Definitive History of Racist Ideas in America (Kendi, 2017). Both were tremendous successes and provided springboards for difficult conversations. I’m hoping to use this summer to get a jump start on the FLC.

4. iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy – and Completely Unprepared for Adulthood – and What That Means for the Rest of Us (Twenge, 2017)
I have to admit that I’m usually skeptical of generational research that uses survey data to make broad generalizations about populations of people. Twenge, a psychology professor at San Diego State University, has made a career doing this kind of work. I purchased this book after a colleague gave a presentation on campus recently and I’m looking forward to interrogating the ideas that Twenge presents.

5. Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead (Brown, 2015)
This is another colleague recommendation. I wrote about Brené Brown a few years ago after seeing one of her TED videos. I’m looking forward to reading her book this summer and preparing myself to “strive valiantly and dare greatly.”

Summer Reading List 2017
Summer Reading List 2016
Summer Reading List 2015
Summer Reading List 2014

Perceptions and Reality

Across the years, I’ve written about the value of active learning numerous times. In 2014, I wrote about a comprehensive meta-analysis on STEM-related college classes. The study compiled data from 225 different studies on active learning in Science, Technology, Engineering and Mathematics related courses and found that students in lecture-based courses were 1.5 times more likely to fail than students in classes that utilized active learning. Across the studies, the average failure rates were 21.8% in classes that employed active learning and 33.8% in traditional lecture classroom environments. Based on the reported participation numbers across the studies, the researchers estimated “there would be over $3,500,000 in saved tuition dollars for the study population, had all students been exposed to active learning.”
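The “1.5 times more likely to fail” figure follows directly from the two average failure rates quoted above; a quick arithmetic check:

```python
# Sanity check on the meta-analysis figures quoted above: the relative
# risk of failing in a traditional lecture course vs. an active-learning one.
fail_active = 0.218    # average failure rate with active learning
fail_lecture = 0.338   # average failure rate with traditional lecture

relative_risk = fail_lecture / fail_active
print(f"Relative risk of failure: {relative_risk:.2f}")  # about 1.55, i.e. roughly 1.5x
```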

I’m returning to this 2014 post and research because of a recent study that was published in Science (and reported on the Faculty Focus blog). Described as the “largest-ever observational study of undergraduate STEM education,” the study monitored almost 550 faculty teaching 700 courses at 25 colleges and universities in the United States and Canada. The results were pretty alarming: 55% of the STEM classroom interactions involved lecture-based instruction. Faculty Focus interviewed one of the researchers, Marilyne Stains of the University of Nebraska-Lincoln, and discussed some of the findings. In the post, Stains discussed how their research used direct observation over self-reported surveys.

“Surveys and self-reports are useful to get people’s perceptions of what they are doing,” Stains said. “If you ask me about how I teach, I might tell you, ‘I spend 50 percent of my class having students talk to each other.’ But when you actually come to my class and observe, you may find that it’s more like 30 percent. Our perception is not always accurate.”

And that’s where the study and the Faculty Focus article offer some assistance. In their research, Stains and her colleagues used a tool called COPUS (Classroom Observation Protocol for Undergraduate STEM) to conduct their observations. The tool was funded by the National Science Foundation and is available for free online so instructors can study their own instructional practices. There are even instructions for collecting data and a video to improve inter-rater reliability.  A motivated STEM instructor could have a colleague or two observe their classroom and better identify how their classes are actually taught. In the study, the researchers suggest conducting at least four observations to provide “a reliable characterization of instructional practices.”

Another interesting finding from the study was that despite faculty identifying classroom layout and class size as being barriers to implementing active learning strategies, “flexible classroom layouts and small course sizes do not necessarily lead to an increase in student-centered practices.” Looking at the data, regardless of the classroom’s physical layout, didactic instructional strategies were employed in most of the observed lessons. Considering the overwhelming research on the academic benefits of active learning, I find this shocking. But so do the researchers. At the end of the article, they call for institutions to challenge “the status quo” and to revise “their tenure, promotion, and merit-recognition policies to incentivize and reward implementation of evidence-based instructional practices.” And that’s a great starting point but I wonder whether it’s enough.

I’m reminded of another blog post I shared in 2016 where I discussed “alternative frameworks” and their impact on people’s beliefs and actions. In science, these alternative frameworks impact how we teach different concepts. For instance, I can tell students thousands of times that gravity acts on heavy and light objects the same way and that they fall (and accelerate) at the same rate when air resistance is disregarded. But their alternative frameworks get in the way. Their lived experiences have taught them differently and me telling them doesn’t change their perceptions.

In a way, that’s what has happened with the active learning research. Despite hearing about the benefits of active learning, teachers perceive that lecture works better, and telling them otherwise won’t change their teaching. Using promotion, tenure and merit-recognition systems to force teachers to employ student-centered teaching may change their actions but won’t change their perceptions of how students learn. Maybe the COPUS system could be used to support a Scholarship of Teaching and Learning study so faculty and departments can research how using active learning strategies impacts student performance. It’s a little harder than just telling (or forcing) people to change their practice but, in the long run, it may confront both the perceptions and the reality of their work.

One Simple Way?

Several years ago, my doctor sat me down and gave me some harsh news. With my family history of cancer, diabetes, heart disease and strokes, he explained that it was likely I was heading down the same medical road that my parents and grandparents had. Each of those family members navigated difficult ailments before dying young. My doctor explained that to change the course of my family history I needed to eat better, get more active and lose weight. He challenged me to lose thirty pounds over the next year and to develop habits that could help me keep the weight off.

After meeting with my doctor, I started exercising regularly and eating better. I didn’t own a bathroom scale so, in order to chart my progress, I ordered one online. When the scale arrived and I unpacked it, a short pamphlet was included among the scale’s directions. It read something like, “Do you want one simple way to meet your weight loss goals? Weigh yourself every day.” As I read the pamphlet, I thought maybe it was just a clever way to sell bathroom scales. Being a little skeptical of this “one simple way,” I did some online searches and found research to back its claims. So I adopted the habit. Three years later, I’m happy to report that I still weigh myself daily. More importantly, I lost more than thirty pounds and have been able to successfully keep it off.

I was reminded of this simple strategy recently in a discussion with a few junior faculty. One had recently received word that he had obtained tenure and commented that he was excited about never being forced to do student evaluations again. On our campus, student evaluations are optional for tenured faculty. Some tenured faculty choose to still have their students complete evaluations, but many do not. Most faculty recognize evaluations as being imperfect measures of their teaching and would choose to avoid them if they could. While I’ve written about the challenges with student evaluations in the past, I offered different advice to this soon-to-be-tenured colleague. Still do student evaluations. Every semester.

Since I received tenure five years ago, I’ve had my students complete evaluations on my teaching every single semester. Some are difficult to stomach (like the ones I wrote about last January). Overall, however, the evaluations help me stay focused on being student-centered and being accessible and responsive to student needs. The evaluations also provide a degree of self-accountability where I must open the envelope, stare at the numbers and make sense of them. In a lot of ways, it’s like standing on a bathroom scale.

When I stand on a bathroom scale each morning, I sometimes see numbers I don’t like. Maybe I had an extra piece of pizza (or two) while watching that football game. Or maybe I had cake and ice cream after dinner to celebrate my daughter’s birthday. Or maybe I skipped a few days at the gym and am seeing the return on my lack of investment. Seeing those numbers, however, motivates me to do better. They also help me focus on short-term goals and successes. String those together and they can lead to longer term ones.

And that’s what regularly completing student evaluations can do. Student evaluations can motivate us to stay focused on the short-term impacts of our work and remind us of our roles with students. Sure, sometimes we’ll receive evaluation numbers that we don’t like or agree with. But we can commit to doing better. Just like the bathroom scale, the evaluation data only mean something in context. They’re just one piece of data to examine. But they offer a simple way to stay focused on our growth as teachers, both in the short term and the long term.

The Misconception of Kindness

I get mixed reviews on Rate My Professors. For every student who rates me well, there’s another student or two who has rated me poorly. I try to not get too worked up over the ratings. For the most part, they’re sort of like Yelp reviews. People only really post a review on Yelp when their experiences are amazingly good or amazingly bad. The vast majority of people who had a completely ordinary and solid dining experience will never review it at all. And I think most people would prefer a solid experience over a negative one. But I digress.

Returning to my Rate My Professor reviews, to me, one comment stands out among the ratings.  One student posted:

“His feedback is very blunt and to the point, so be prepared for that.”

I don’t know what motivated this student to write this or to give me a poor rating, but I’ve thought a lot about that comment over the last two years. For the most part, I think the student’s assessment of my feedback is on the mark. I also wonder whether that’s the reason that some of my undergraduate students don’t find me particularly empathetic. At least that’s what some of my student evaluations say.  And I find it troubling.  Here’s why.

Over the years on this blog, I’ve written many posts dedicated to providing quality feedback to support students’ growth. Across all of those posts, however, there’s never been a real dedicated focus on how students receive feedback. I’m a big subscriber to Grant Wiggins’ “Seven Keys to Effective Feedback.” To foster student learning and development, Wiggins writes, teacher feedback must reflect seven essential elements:

  • Effective instructor feedback is goal-referenced.
  • Effective instructor feedback is tangible and transparent.
  • Effective instructor feedback is actionable.
  • Effective instructor feedback is timely.
  • Effective instructor feedback is ongoing.
  • Effective instructor feedback is consistent.
  • Effective instructor feedback progresses toward a goal.

And I provide that feedback. My worry, however, is that some students are not used to getting this type of in-depth feedback and don’t know how to respond to it emotionally. When students are accustomed to getting a few check marks on their papers and a “Great job!” written at the end, they see the professor who provides detailed feedback for growth as being the outlier. They rate the professor as being blunt and to the point and not having much empathy. To some degree, my students see me as being unkind with my feedback.

Being the hyper-reflective teacher that I am, I’ve thought a lot about this and I think there is a prevailing misconception of kindness, one that trades long-term impacts for the short-term ones. Let me explain.

Take the student who gets the “Great job!” on her paper but receives few other substantive comments from her professor. The student is receiving feedback that probably feels good. It reinforces her perceptions of the amount of work that she’s dedicated and her perceptions of her ability. She probably sees the professor as being kind and supportive.

But this is only a short-term emotion with short-term impacts. If the student’s work is not really high quality, the student will eventually reach some place in her educational journey where her development or progress will be stunted. She’ll reach a point where she sees that she may lack the skills to succeed at the expected level. She’ll recognize that her education hasn’t prepared her for that next step.

But I tend to focus on long-term impacts. While I’m (mostly) okay with students calling me direct or blunt or lacking empathy, I hope they’ll realize at some point down the road that the detailed feedback I gave wasn’t trying to hurt their feelings but was intended to help prepare them for whatever comes next. That’s long-term kindness.

I heard someone say recently that “Frustration isn’t part of learning.  It IS learning.” And maybe that’s the motto I need to share with more of my students. I know that the direct (and blunt) feedback I give to students can be frustrating at times. But it’s hardly unkind.

Be the Light in the Clouds

Imagine you’re a Viking sailor and you’re trying to navigate uncharted waters. If the skies are clear, you can navigate using the position of the sun during the day or possibly the stars at night. But what about the cloudy or foggy times that a Viking sailor would confront in the icy North Atlantic? Norse legend has it that the Vikings used something called a sunstone. When conditions were bad, these ancient mariners would look through a crystal that revealed distinct patterns of light in the cloudy sky. The sailors would use the patterns of light to traverse the ocean without any clear view of the sun. Despite the clouds and fog, the sunstones helped guide the Vikings through the roughest of waters.

I read about the sunstone recently and how scientists are suggesting that the Vikings simply used pieces of translucent calcite, cordierite and tourmaline to guide their ships. These crystals filter light through a process called “polarization,” which enabled the Vikings to see concentric rings around the sun, even in the foggiest of conditions. Despite the poor visibility, if the Vikings navigated using the sunstone, they were more likely to reach their destination.

When I read the Viking article, I thought the sunstone was a good metaphor for instructors’ roles in online classes. To our online students, the learning management system can be a foggy and cloudy place to navigate. Instructors organize their courses differently and use different communication structures. Some instructors teach primarily through asynchronous means while others use synchronous avenues exclusively. Within a learning management system, content and assessments can be distributed across a multitude of links to click and pages to view. They’re not always easy waters to navigate.

But online teachers can serve as sunstones and show the way. At our institution, we conducted a survey of students enrolled in online classes in Fall 2016 and Spring 2017. With over 700 undergraduate and graduate students participating in the study, we are starting to identify some clear ways that instructors can help their students navigate online classes. Here are some takeaways from our research. To help students navigate their online classes, instructors need to:

  • Provide clearly stated course learning objectives
  • Clearly identify course policies and expectations
  • Provide regular and clear communication with students
  • Link assessments to course learning objectives
  • Engage online students with their peers

In our study, each of these teacher actions was significantly correlated with students’ perceptions of the quality of their online experience (p < 0.05). We’re still analyzing the data and looking for other trends, but one thing is clear: online instructors play a critical role in reducing the chaos of online classes. Like the sunstone that Vikings used to sail during foggy times, the instructor serves as the light in the clouds and can help students successfully navigate their online classes.
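For readers curious about the mechanics behind a finding like this, a Pearson correlation is the usual starting point. The sketch below uses made-up survey ratings, not our actual data, and omits the significance test that produces the p-value; it simply shows how the correlation coefficient itself is computed.

```python
# Minimal sketch of a Pearson correlation between two sets of survey
# ratings. The ratings below are invented for illustration only.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. hypothetical ratings of "clear communication" vs. overall course quality
clarity = [5, 4, 3, 5, 2, 4, 1, 3]
quality = [5, 5, 3, 4, 2, 4, 2, 3]
print(round(pearson_r(clarity, quality), 2))  # 0.89 for this made-up data
```

In practice a library routine such as `scipy.stats.pearsonr` would be used instead, since it also returns the p-value needed to judge significance.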