Correcting the Record

This year marks my fifteenth year writing this blog. Since the early days of this space, I’ve tried to maintain a few core principles about the things I write (and have written). While I won’t get into all of those principles in this post, the one that I’ve honored since the start is “Don’t rewrite history.” When I write a post, it captures my thoughts, emotions, and working environment at that moment. Every post represents a snapshot in time. If I revisit a post a year or two down the road, I avoid rewriting or editing it. Sure, I may correct a spelling or grammar mistake that I missed earlier, but I won’t rewrite the content or change the meaning or intent of the original post. By adopting this principle, I wanted to honor my past self (and my thinking at the time) and give my future self something to look back on. Through fifteen years of blog posts, I can’t remember revising the intent or meaning of a single one.

Today, I’ve decided to revise two posts from the past. Obviously, I’m doing this after great reflection, but I believe it is warranted. Looking through a few old posts on ChatGPT and other generative artificial intelligence (GenAI) tools, I found claims I had made about GenAI detectors that I’d like to revise. I don’t want people coming upon one of those past posts and having it inform their practice. So, I’ve added disclaimers to a couple of old posts. Here’s the issue.

I’ve stopped using AI detectors because I don’t trust them. I’ve read a number of reports showing that the detectors produce false positives, meaning they incorrectly identify a person’s own writing as AI-generated. While this is concerning in itself, more concerning is that the tools disproportionately flag writing from non-native English writers as AI-generated. Take this study from researchers at Stanford University. The researchers collected 91 human-written essays from a Test of English as a Foreign Language (TOEFL) repository. They report:

“[The detectors] incorrectly labeled more than half of the TOEFL essays as ‘AI-generated’ (average false-positive rate: 61.3%). All detectors unanimously identified 19.8% of the human-written TOEFL essays as AI authored, and at least one detector flagged 97.8% of TOEFL essays as AI generated. Upon closer inspection, the unanimously identified TOEFL essays exhibited significantly lower text perplexity.” (Liang et al., 2023, p. 1).
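That last point about perplexity is worth unpacking, because it helps explain why the bias shows up. Perplexity is, roughly, a measure of how predictable a passage of text looks to a language model: the more “expected” each next word is, the lower the score. The flagged TOEFL essays scored low, and the detectors appear to treat low perplexity as a signal of AI authorship. For the curious, here’s a minimal sketch of how perplexity is commonly estimated with an openly available model (GPT-2 through the Hugging Face transformers library). This is entirely my own illustration, not code from the study or from any commercial detector:

```python
# Illustrative only: estimating the perplexity of a passage with GPT-2.
# (My own sketch -- not the method used by Liang et al. or by any detector vendor.)
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how 'predictable' the text is to the model; lower = more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # The model's average loss on its own next-word predictions,
        # exponentiated, is the standard perplexity measure.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Plain, formulaic prose tends to score low; varied, idiosyncratic prose scores higher.
print(perplexity("I think this essay is about a very important topic."))
print(perplexity("Crickets stitched the dusk together while the kettle sulked."))
```

The worry, then, is that a number like this ends up serving as a proxy for “written simply” rather than “written by AI,” and writers who are still developing their English fluency pay the price.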

In response to these findings, I’ve joined other institutions (like Vanderbilt University) in advising my colleagues to refrain from using AI detectors to examine student writing for AI-generated text. I’ve also added disclaimers to any previous posts that suggest AI detectors can be used in this way. It’s a deviation from my past practices and from the core principles of this blog, but I believe it is warranted.

The next logical question that some folks may have is “What do I do if I suspect a student is presenting AI-generated text as their work? If I can’t use an AI detector, what can I do?”

I’ll dig into that topic in next week’s post.

References:
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).

Let’s get phygital!

This weekend, I attended the annual meeting of the American Association of Colleges for Teacher Education (AACTE), held in Denver, CO. During the conference, I attended a bunch of sessions and panel discussions and got to co-present a session with a colleague. I usually come back from conferences with my head swimming with new ideas and concepts, and this weekend was no different. I anticipate that I’ll be spending the next post or two unpacking different things I encountered at the conference, but this week, I decided to get “phygital.”

Even though Grammarly tries to correct me every time I type it, phygital is a word. I heard the word for the first time this weekend and immediately started searching for what it meant and how it applied to education. The best definition I could find described phygital as “a blend of physical and digital environments.” Educationally, this would mean that a phygital space mixes physical, face-to-face learning with some online components. To me, that sounded a lot like other terminology that educators have been using for a while. Blended learning. Hybrid learning. Is phygital just a rebranding of those terms? I didn’t know, so I went to one of my new favorite AI tools (Perplexity) for help. Perplexity acts a lot like ChatGPT, but it cites its sources, which helps to reduce the hallucinations and misinformation that some of the other AI chatbots are prone to offer. So, I asked Perplexity, “What does phygital mean in the context of teaching and learning? How does it differ from blended or hybrid learning?” And here’s how Perplexity responded:

“In the context of teaching and learning, ‘phygital’ refers to the integration of physical and digital elements to create a seamless learning experience. It involves tightly integrating digital activities into face-to-face sequences to enhance the learning process. Phygital learning aims to bridge the gap between traditional classroom settings and digital technology, allowing for a more engaging and effective learning environment.

Phygital learning differs from blended or hybrid learning in that it goes beyond simply combining traditional and digital learning experiences. While blended or hybrid learning involves a mix of in-person and online instruction, phygital learning specifically focuses on tightly integrating digital activities into face-to-face sequences to create a more dynamic and effective learning environment.”

While this description helped to clarify some things, I was still struggling to see how this was different from blended or hybrid learning. So, I asked a follow-up question: “Can you provide an example of a phygital learning activity in a science classroom?” Here’s how Perplexity responded:

“An example of a phygital learning activity in a science classroom is the use of augmented reality (AR) simulations to conduct virtual science experiments. This approach allows students to engage in hands-on learning experiences by interacting with digital elements superimposed on the real world, such as observing and manipulating virtual chemical reactions or exploring complex scientific concepts through interactive AR applications. By integrating AR simulations into the science curriculum, students can gain a deeper understanding of scientific principles and phenomena while fostering sustained interest and engagement in the subject.”

This made it clearer. In most cases, blended (or hybrid) learning involves interacting with either face-to-face instruction OR online instruction; phygital suggests that we can create learning environments where those traditional distinctions no longer exist. Students can interact with the physical and the digital at the same time, with one enhancing and supporting the other. While I’m still wrapping my head around this new terminology, I think the term “phygital” offers new ways to describe some of the learning experiences on the horizon as the use of virtual reality, augmented reality, and artificial intelligence continues to expand.

References (from Perplexity):

Unlocking the Potential of Phygital Learning and Education
Phygital Learning – Ahead of the Curve
Phygital Learning: Bridging Physical and Digital Education