Four Energies

It’s been a busy week or two as I’m preparing to host a graduation party for my son. Throw in a few summer classes and it’s been kind of hectic. So, I’m running this rerun from October 2021. Enjoy!

Last week, a colleague sent me a link to a post from Austin Kleon’s blog. If you’re not familiar with Kleon, he describes himself as a “writer who draws.” He’s also the bestselling author of Steal Like An Artist and other books. While I’m relatively new to Kleon’s work, I totally understand why my colleague sent me the link. He’s an awesome writer. And artist.

The post that my colleague sent referred to “The Four Energies” that authors tap into when they write. Kleon’s post actually references a Jane Friedman newsletter that pulled the concept from a book by Bill O’Hanlon. In his post, Kleon shares this quote from O’Hanlon:

“In my view, there are four main energies you can tap into when you write your book. The main writing energy you discover may be just one or you may find that you have a combination of more than one of these energies that fuels your writing endeavors. The four energies are Blissed, Blessed, Pissed, and Dissed. The first two represent the positive energies; the last two, the negative.”

According to O’Hanlon, we’re motivated to write by what we love and by what upsets us. Our inspiration is derived from some combination of Blissed, Blessed, Pissed, and Dissed energies. In his post, Kleon describes each of these energies:

  • “Blissed” energy comes from what you’re on fire for and can’t stop doing.
  • “Blessed” means you’ve been gifted something that you feel compelled to share.
  • “Pissed” means you’re pissed off or angry about something.
  • “Dissed” means you feel “dissatisfied or disrespected.”

My colleague sent me the post with the simple message, “When I read your blog, I usually know which energies are motivating you.”

But I wonder whether everyone can. So, let me unpack the energies from last week’s post. Reading that post, one could probably divine that I was angry over something that was happening (I was!). While I wouldn’t go so far as to say that I also felt dissatisfied or disrespected, there was a slight undercurrent of being “dissed.” The personal reflection on my earliest political demonstration probably emanated from a mixture of feeling blissed and feeling blessed, but I can’t say for sure. Sometimes, I’m in the moment and things connect in my head and I just go with it. It’s hard to conjure up those same energies a week or two out.

I also wonder whether these same energies apply to other forms of writing. I spent a good portion of this morning sending emails, and I can say with confidence that a handful were definitely derived from the “pissed” and “dissed” energies. Recognizing that, I should probably put some more positive energy into the world today. For what it’s worth, this post definitely originated from a “blessed” energy. It’s always cool to learn something new and to feel seen by those around you.

Mounds of Papers and Feedback

I’m bogged down with lots of grading and stuff, so I’m pulling out this post from the 8 Blog archives. This post was originally written in March 2022, but it still captures a lot of what I’m navigating today. Enjoy!

I’ve been navigating the perfect storm of the semester for the last few weeks. I don’t know how it happened, but my courses aligned perfectly (or not so perfectly) so that the students in all of my classes were submitting major assignments at the same time. I probably should chalk it up to poor planning on my part, but I’ve been spending a lot of time providing feedback on student papers and their revised drafts.

If you’ve been reading this blog for a while, you know that I’ve written about feedback a bunch over the last eleven or twelve years. Personally, I subscribe to Wiggins’s Seven Keys to Effective Feedback, which outlines that for feedback to have an impact on student learning it must be goal-referenced; tangible and transparent; actionable; user-friendly; timely; ongoing; and consistent. I know that’s a lot to address, but basically it means doing a whole lot more than writing “Good job!” on a student’s paper. It involves setting a clear target for students and providing clear, actionable feedback to help students work towards the target. Reflecting on the feedback I provide to my students, I feel like I meet this standard pretty consistently.

The frustrating part for me is that sometimes I’ll provide feedback to students and won’t see that feedback addressed in future revisions. For most of the assignments in my classes, I allow my students to revise and resubmit their papers for better grades. And while I’ll provide detailed feedback on ways to improve their work, I won’t always see that feedback appear in students’ revisions. Some colleagues have advised that I shouldn’t grade papers where students have ignored my feedback. Others have suggested that I have students write a revision audit outlining how their revision specifically addresses the feedback I’ve given. If you’ve ever submitted an article for publication, you know this type of audit is common after receiving feedback from reviewers. Despite my frustrations with some of the revisions my students submit, I’ve avoided incorporating these policies. Honestly, they just sound like additional barriers that students need to clear in order to revise their work. I want my students to revise their work. But I also want them to incorporate the feedback I provide.

This morning, I was scrolling through my Twitter feed and came across an article from ASCD titled “Getting GREAT at Feedback.” Initially, I didn’t think I’d find anything groundbreaking, but then I read the tagline under the title: “The key to feedback is how it is received by the student.” That prompted more reading. In their article, the authors, Douglas Fisher and Nancy Frey, offer a unique perspective on feedback. They write:

“However, a common misunderstanding is that it’s all about the amount of feedback given—and the more the better. But the key is actually how the feedback is received by the learner. The relationship between the person giving the feedback and the one receiving it is paramount in terms of how much ‘gets in.’”

To better address how feedback is received by students, Fisher and Frey offer a different feedback model that “forges trust, helps the hearer sense a positive motive, and is clear and informative.” Drawing on work by LarkApps, they offer the GREAT model for effective feedback. I know, it’s kind of a corny acronym, but the dimensions are pretty thoughtful.

  • Growth-oriented: The delivery signals one’s intention as constructive, focused on improvement not criticism.
  • Real: Feedback is honest, targeted, and actionable (showing the speaker’s grounding in the area in question), not vague or false praise.
  • Empathetic: It combines critique with care and a quest for mutual understanding.
  • Asked-for: The speaker encourages the receiver to ask questions and seek more feedback, after offering brief comments.
  • Timely: It’s delivered soon after the task or learning is demonstrated. Feedback gets stale fast.

Reading through this list, it may sound similar to Wiggins’s Keys to Effective Feedback. From my point of view, however, the main difference is the intentional focus on care, empathy, trust, and relationship building, which makes a lot of sense. People are more likely to receive advice and feedback from people they trust. Although I’ve never done anything to make my students distrust me, I haven’t explicitly attended to these areas in my feedback, either. If I believe teaching is about relationship building (which I do!), I have to apply that mindset to all areas of my work, including the feedback I provide.

Correcting the Record

This year marks my fifteenth year writing this blog. Since the beginning days of this space, I’ve tried to maintain a few core principles about the things I write (and have written). While I won’t get into all of those principles in this post, the one that I’ve honored since the start is “Don’t rewrite history.” When I write a post, it captures my thoughts, emotions, and working environment at that moment. Every post represents a snapshot in time. If I revisit a post a year or two down the road, I avoid rewriting or editing the post. Sure, I may correct a spelling or grammar mistake that I missed earlier, but I won’t rewrite the content or change the meaning or intent of the original post. By adopting this principle, I wanted to honor my past self (and my thinking at the time) and give my future self something to look back on. Through fifteen years of blog posts, I can’t remember revising the intent or meaning of a single post.

Today, I’ve decided to revise two posts from the past. Obviously, I’m doing this after great reflection, but I believe it is warranted. Looking through a few old posts on ChatGPT and other generative artificial intelligence (GenAI) tools, I found claims I made about GenAI detectors that I’d like to revise. I don’t want people coming upon one of those posts and having it inform their practice. So, I’ve added disclaimers to a couple of old posts. Here’s the issue.

I’ve stopped using AI detectors because I don’t trust them. I’ve read a bunch of reports that say the detectors are giving false positives, which means they falsely identify a person’s writing as being AI-generated. While this is concerning in itself, more concerning is the fact that the tools more often flag writing from non-native English writers as being AI-generated. Take this study from researchers at Stanford University. The researchers collected 91 human-written essays from a Test of English as a Foreign Language (TOEFL) repository. They report:

“(The detectors) incorrectly labeled more than half of the TOEFL essays as “AI-generated” (average false-positive rate: 61.3%). All detectors unanimously identified 19.8% of the human-written TOEFL essays as AI authored, and at least one detector flagged 97.8% of TOEFL essays as AI generated. Upon closer inspection, the unanimously identified TOEFL essays exhibited significantly lower text perplexity.” (Liang et al., 2023, p. 1)

In response to this, I’ve joined other institutions (like Vanderbilt University) in advising my colleagues to refrain from using AI detectors to examine student writing for AI-generated text. I’ve also added disclaimers to any previous posts that suggest AI detectors can be used in this way. It’s a deviation from my past practices and the core principles of this blog, but I believe it is warranted.
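
For those unfamiliar with the term, the “text perplexity” the researchers mention is, roughly, a measure of how predictable a piece of text is to a language model: lower perplexity means more predictable word choices. Here’s a minimal sketch of the idea using the open-source GPT-2 model and Hugging Face’s transformers library. To be clear, this is just an illustration of the metric, not the code behind any particular detector:

    # Illustrative sketch: computing a language model's perplexity for a text.
    # This demonstrates the metric the Stanford study references; it is not
    # how any commercial AI detector is implemented.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Return GPT-2's perplexity for `text` (lower = more predictable)."""
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing the input ids as labels makes the model return the
            # average cross-entropy loss over the sequence.
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        return torch.exp(loss).item()

    # Simpler, more predictable phrasing scores lower, which is one reason
    # detectors disproportionately flag non-native English writing.
    print(perplexity("The results of the study were very interesting."))
    print(perplexity("Cerulean marmalade interrogates the idle saxophone."))

Because non-native writers often rely on more common, predictable phrasing, their writing tends toward lower perplexity, and the detectors read that predictability as a machine fingerprint.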

The next logical question that some folks may have is “What do I do if I suspect a student is presenting AI-generated text as their work? If I can’t use an AI detector, what can I do?”

I’ll dig into that topic in next week’s post.

References:
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).

Feedback on Feedback

When I was a student, I have to say that the feedback I received from my teachers was inconsistent at best. Often, my teachers would write “Good job!” on a paper and just put a grade on it. I had one teacher who would circle random things on my papers in red ink, add some check marks here and there, and then add a grade without any substantive rationale for how that grade was earned. I once asked my high school English teacher, Mr. Wilson (a pseudonym), how I could improve a paper I had written. Mr. Wilson simply responded, “Write better.”

As a teacher, I’ve tried to offer better feedback to my students. I use rubrics to offer concrete standards for assessment and use those standards to provide actionable feedback for students to improve. I want to remove the mystery from my assessment. Whenever possible, I work to mix in progress and discrepancy feedback so students know what they’re doing well and specific ways they can improve. Assessing student work and providing good feedback are time-consuming tasks. But when I feel bogged down in the work, I think back to my tenth-grade self, walking out of Mr. Wilson’s classroom. I try to remember how clueless and powerless I felt. It helps me remember why it’s important to provide good feedback.

Even though I’ve been teaching for more than thirty years, I still get feedback from others. Some of that feedback comes from student evaluations and peer observations of my teaching. Some of that feedback is actionable. Some of it is less so. But I still try to reflect on the feedback I receive and consider ways that I can improve my instruction. Beyond teaching, though, I also receive feedback on my scholarly work. Despite the many hats I wear, I’m still engaged in academic research and I’m still submitting proposals for presentations and publications. For the last ten months, a colleague and I have been shopping around a manuscript that we co-authored. It’s sometimes hard to find the right home for some work, and that’s been the case with this manuscript. After submitting the manuscript to one journal, we received a rejection email without any substantive rationale. Another journal had the manuscript in review for six months without ever providing any acknowledgment or feedback. The manuscript sat in limbo until we decided to pull it from review. Throughout the process, I felt a little like I was back in Mr. Wilson’s classroom. We weren’t getting the feedback we needed.

Before abandoning the manuscript, my colleague and I decided to try another journal. After submitting the manuscript for review in January, we received in-depth feedback from four different reviewers within a month. While the manuscript wasn’t accepted for publication, the feedback was so thoughtful and specific. All four reviewers provided actionable feedback that will improve the work. Beyond identifying areas that need to be rewritten, the reviewers offered perspectives that are helping us broaden our analysis. The reviewers also encouraged us to resubmit the manuscript after edits were made. The feedback has motivated us to continue working on the manuscript and we’re actively revising and rewriting it now.

That’s what good feedback should do. It should provide specific ways to improve and new ways to approach work. It should be specific and actionable, but also be encouraging.

My only hope is that’s what my feedback does for my students.

More Feedback with GenAI

If you’ve been reading this blog for a bit, you may remember that I’ve been experimenting with using generative artificial intelligence (GenAI) to support my students’ writing. Last fall, I wrote several posts on my efforts using ChatGPT to provide feedback on drafts that my students wrote. My goal was to use GenAI to provide some immediate corrective feedback to students on things like grammar, spelling, and formatting, which would allow me to focus on higher-order feedback. If you’re interested in that journey, check out these blog posts from the fall:

Feedback with ChatGPT
Discussing Feedback with ChatGPT
Responding to ChatGPT Feedback

My graduate students submitted their first writing assignments last week, and I incorporated GenAI into the writing and revision process again. This time, I chose to use Claude to provide feedback. Claude allows users to upload documents that the tool can then interact with. Last fall, my students had to copy and paste segments of their papers into ChatGPT to get feedback. With Claude, the students could upload an entire PDF of their paper and have the tool interact with the whole document. ChatGPT (and other GenAI tools) offer file upload functionality in their paid versions, but I didn’t want my students to pay for a tool. So, after playing around with Claude, I decided to implement the tool in this iteration of the activity.

To provide some support and guidance with using Claude, I offered the following prompts (and recorded a short video demonstrating the process):

  • Acting as a writing tutor, can you offer suggestions for improving the structure and organization of this paper without revising it?
  • Acting as a writing tutor, can you offer suggestions for improving the grammar and word usage of this paper without revising it?
  • Acting as a writing tutor, can you offer suggestions for improving the in-text citations of this paper without revising it? Please follow the APA 7th Edition guidelines.
  • Acting as a writing tutor, can you offer suggestions for improving the references of this paper without revising it? Please follow the APA 7th Edition guidelines.
  • Acting as a writing tutor, can you offer suggestions for improving the paper formatting of this paper without revising it? Please follow the APA 7th Edition guidelines for a student paper.
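
For anyone who would rather script this process than use the chat interface, here’s a rough sketch of what one of these prompts might look like using the Anthropic Python SDK. To be clear, my students used the free Claude web interface; the model name and file name below are just placeholders:

    # Hypothetical sketch: sending a student's PDF and one writing-tutor
    # prompt to Claude via the Anthropic Python SDK. My students used the
    # free web interface; this just automates the same idea.
    import base64
    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    # Read the paper and encode it so it can be attached to the message.
    with open("student_paper.pdf", "rb") as f:  # placeholder file name
        pdf_data = base64.standard_b64encode(f.read()).decode("utf-8")

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; any PDF-capable model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": pdf_data}},
                {"type": "text",
                 "text": ("Acting as a writing tutor, can you offer "
                          "suggestions for improving the structure and "
                          "organization of this paper without revising it?")},
            ],
        }],
    )

    print(response.content[0].text)  # Claude's feedback on the paper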

You may notice that the prompts have become more specific than the one I offered to students last fall. After my students submitted their revised drafts last semester, I analyzed their common mistakes and the types of feedback that ChatGPT gave them. I realized that the prompts needed to be more specific to better address the errors my students were making. To chart their revision process, I had students submit their first draft, a copy of Claude’s feedback, and their revised drafts. This gave me a window into the overall feedback process and how it informed my students’ work.

Here are some takeaways from this iteration of GenAI feedback:

  1. Claude mostly gave indirect feedback. Unless prompted for more specifics, Claude tended to give indirect feedback on students’ writing. For example, for one student’s paper, Claude provided this feedback: “Review verb tenses. Some sentences shift between present and past tense. Using past tense consistently for summarizing and discussing studies is appropriate.” This type of indirect feedback is helpful, but it requires that students be able to recognize the difference between verb tenses. Students tend to respond better to direct feedback, where specific errors are identified. Looking at the Claude feedback that my students submitted, some asked follow-up questions seeking specific examples of the issues. I’ll probably include more guidance on asking follow-up questions next time.
  2. Claude made some mistakes. This shouldn’t be a huge surprise, but Claude didn’t always get it right, especially with APA formatting. The most common mistake Claude made involved the number of authors to cite for a work with three or more authors. While the reference entry includes the names of all of the authors (up to 20), the in-text citation includes only the first author, followed by “et al.” (e.g., “Liang et al., 2023”). Claude consistently bungled this.
  3. My students sought and appreciated the feedback. Despite the mistakes that Claude made and the type of feedback it offered, my students responded favorably to the activity. While I suggested that students could use whichever prompts they felt they needed, most of them used all five prompts to receive feedback on their papers. I also surveyed the students after the exercise and asked how likely they were to use an AI tool again to guide their writing and revisions. Almost all of them said they planned to use GenAI to support their work. For example, one student wrote, “I would likely use an AI tool again to guide my writing and revisions. I feel that, as long as I double check that its recommendations align with formatting guidelines, it doesn’t really hurt to have another set of (artificial) eyes look over my paper.” It’s clear that students saw the benefits and also recognized the limitations of AI-generated feedback, at least concerning their own learning.
  4. Student use did not always translate to their practice. All of the students in this graduate class are teachers. While they work in different settings and environments, they all support their own students’ learning. When asked whether they planned to use GenAI with their students, half of the respondents did not foresee using it. As one student wrote, “I do not think that I would use an AI tool in my classroom. To be frank, I’m afraid that students would use this technology to do the work for them, and not to just assist them.” While my students saw the benefits to their own learning, they weren’t ready to incorporate GenAI tools into the classroom.