Not an Algorithm

Recently, I’ve been thinking a lot about how we teach writing, problem-solving, and scientific experimentation. In most cases, there’s an attempt to present these skills as step-by-step processes to demystify their complicated natures. For example, many teachers use a five-paragraph essay structure to guide students in their writing. In science, many teachers present the scientific method as a linear process: form a hypothesis, conduct experiments, make observations, and draw conclusions. These are algorithmic approaches to complex tasks. They attempt to formalize messy processes by presenting them in more procedural ways.

But writing and experimentation aren’t formulaic. Sure, writing and science have rules and conventions that guide their practices, but good writing doesn’t come from a formula, and very few experiments follow the step-by-step scientific method. Teaching these skills in a formulaic way doesn’t fairly represent their complex nature.

A better approach would be to teach these processes as heuristics. If you’ve been reading this blog for a long time, you may remember that I wrote about algorithms and heuristics way back in 2012. At the time, I was reflecting on conversations that were happening on our campus about writing instruction. In that original post, I wrote: “Students need to know that writing doesn’t follow a simple formula or equation. Writing is an organic process that is informed by practice and guided by strategies. People become better writers not only by understanding conventions and grammar rules but by writing in different genres and gaining real experience with the art and craft of writing. Heuristics can help guide the developing writer and foster a better sense of what writing as a process is.”

Jump ahead a decade, and now I’m thinking about how we teach prompt writing for genAI tools. I’ve been reading a lot of books on AI in education, and I’ve seen a ton of example prompts shared as a way to instruct others on how to write their own. In almost every book I’ve read on genAI, the pages are filled with prompt examples. Prompt examples aren’t cookbook recipes, though. If I follow a recipe in a cookbook, I can be reasonably certain the finished product will closely resemble the goal. I can’t imagine a world where I’d follow a recipe to bake a chocolate chip cookie and accidentally make a coconut macaroon instead. An example prompt, however, produces a different outcome every time it is used. Using genAI is not like using Google or searching within a document. It is generative, which means it creates something new every time it is used. I can use the exact same prompt multiple times in a row and get very different responses. And sometimes the response resembles a metaphorical coconut macaroon: a genAI response so far from my goal that it’s laughable.
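To make that variability concrete, here’s a minimal sketch using the OpenAI Python SDK. The model name is a placeholder (any model you have access to would work), and an API key is assumed to be configured in the environment; the point is simply that one prompt, sent repeatedly, yields different text each time.

```python
# A minimal sketch of the variability described above, using the OpenAI
# Python SDK (openai>=1.0). The model name is a placeholder, and an
# OPENAI_API_KEY environment variable is assumed.
from openai import OpenAI

client = OpenAI()
prompt = "Write a two-sentence description of a chocolate chip cookie."

# Send the exact same prompt three times. Because the model samples each
# word from a probability distribution, every response comes out different.
for attempt in range(1, 4):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any available model
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # higher values sample more freely
    )
    print(f"--- Attempt {attempt} ---")
    print(response.choices[0].message.content)
```

Even turning the temperature setting down toward 0 only reduces the variation; it doesn’t guarantee identical responses.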

To combat this, we need to teach strategies that people can apply to their prompt writing. For example, check out this resource from the Center for Excellence in Teaching and Learning at the University of Connecticut. While it provides different example prompts, the resource also includes detailed strategies for developing good prompts and considerations to guide prompt writing. If you’re really interested in stepping up your prompt writing (or how you teach it), check out this getting started guide from Harvard University.

To hear a lively conversation about this topic (and last week’s post), give a listen to the Science In Between podcast episode that drops tomorrow. 

The Words We Use

I know I’ve been writing about generative artificial intelligence (genAI) a lot recently, but I think my level of focus reflects the impact the technology is having on teaching and learning. Technology folks have been shouting about different disruptive tools and concepts for years, but I believe genAI is the most disruptive force in my 30+ year teaching career. So, yes. I’m writing another genAI post this week.

I have facilitated a lot of genAI professional development with educators over the last 18 months. During that time, I’ve led sessions on the basics of genAI, how to write better prompts, how genAI tools can help people be better researchers, and how to navigate the ethical land mines that genAI poses. Inevitably, before the end of any workshop, an attendee will ask how they can make their assignments or assessments “AI-proof.” They want solutions to keep students from using AI to take what they perceive as “shortcuts” in their classes. I totally understand this perspective. A lot of educators share these concerns. I did a quick Google search for “AI-proof assignment” and “AI-resistant assignment” and found a lot of content that educators may find valuable. I won’t share those links here, though. This post is actually about something different.

I worry about the language we use as educators when discussing our reactions to generative AI. Using phrases like “AI-proof” or “AI-resistant” communicates that this technology is something to be feared and avoided, as if artificial intelligence were a viral pandemic sweeping through education. This terminology suggests that with the right amount of instructional planning or creativity, we can quarantine our classrooms from a perceived plague of artificial intelligence, creating an unrealistic expectation that we can (or need to) entirely shield our students from AI’s influence.

This fear-based language also overlooks the potential benefits of integrating AI into our teaching practices. By framing AI as an enemy to be resisted, we miss opportunities to explore how it can enhance learning, personalize education, and prepare students for a future where AI will be ubiquitous. Instead of adopting a defensive stance, we should focus on understanding AI’s capabilities and limitations, teaching our students to critically evaluate and effectively use AI tools. A proactive approach will better equip them to navigate a world increasingly influenced by AI, fostering adaptability and innovation rather than resistance and avoidance.

The language we use when discussing AI in education is important. Rather than framing AI as a threat, we should adopt a balanced perspective that recognizes both the challenges and opportunities genAI presents. If we really want to talk about revising our assignments and assessments in the wake of genAI, I offer “AI-conscious” or “AI-reflective” as more neutral terms. These terms recognize the impetus to change, without demonizing the technology in the process.

Exploring the Myth of the AI First Draft

I listened to the Teaching in Higher Ed podcast recently and heard an episode with Leon Furze, an educational consultant from Australia who is also the author of the books Practical AI Strategies, Practical Reading Strategies, and Practical Writing Strategies (2024). Furze studies the implications of generative artificial intelligence (GenAI) for writing instruction and education. In the episode, Furze references a blog post he wrote a few months ago that challenged the use of GenAI tools like ChatGPT or Microsoft Copilot to help people write first drafts. If you’ve heard any GenAI company market its technology, you’ve probably heard that the tools can help with brainstorming, outlining, and tackling the dreadful “blank page.” GenAI tools, the proponents argue, can offer some assistance with the difficult task of getting started with writing. Furze has some concerns with this approach.

In his post, Furze outlines a few reasons to “be cautious of the AI first draft.” Furze’s first reason is pretty esoteric but important. Furze worries about capitalism, oppression, and the larger impacts on literacy and expression. He introduces a term called the “computational unconscious,” which posits that technology and technology companies have created an invisible infrastructure that shapes human thought, communication, and interaction. It’s heady (and scary) stuff.

While I share Furze’s concerns about the “computational unconscious,” I’d prefer to dig into one of his other reasons here. Furze worries that GenAI can undermine the purpose of writing. He writes:

“The purpose of writing isn’t just to demonstrate knowledge in the most expedient way. Writing is to explore knowledge, to connect and synthesize ideas, to create new knowledge, and to share. When students use AI to generate a first draft, they skip 90% of that work, creating something that may well be worth sharing, but which has not in any way helped them form and make concrete their own understanding.”

This rationale resonates with me on a bunch of levels. As a writer, I realize how difficult this process is. But I also realize the benefits. In a 2012 blog post, I shared my reasons for writing this blog. I wrote:

“I need to write to learn. I usually have a bunch of ideas swirling around my head. Blogging forces me to connect my thoughts in a coherent way and make sense of the sometimes disparate concepts. It’s not true for everyone, I’m sure. But writing gives me the opportunity to solidify my thoughts and learn. When I start writing a blog post, I usually have a general idea of the subject matter and some of the points I want to make. After getting started, however, I may end up in an entirely different place because the writing process leads me to consider my thoughts in a new way.”

For me, writing helps me make sense of my ideas and construct my understanding of things. Using GenAI to help me write a first draft would rob me of that sensemaking. Sure, it would be easier and more efficient, but it would take the learning away from me. As Terry Doyle writes in his book Learner-Centered Teaching (2011), “the one who does the work does the learning.” If GenAI is doing all the heavy lifting, then I’m not doing the learning. And that’s one of the main reasons I write.

A GenAI Smorgasbord

This weekend, the United States celebrated Memorial Day, which honors military personnel who died while serving. The holiday is usually filled with parades, services, and other public events to showcase the contributions of veterans. While the day is designed as a recognition of the military, Memorial Day also serves as the unofficial start of summer and ushers in a wave of picnics, barbecues, pool parties, and vacations. In the spirit of these summer activities, I’m offering this “GenAI Smorgasbord” with a buffet of content for your perusal.

Teaching with AI, with José Bowen: This is an episode from the Teaching in Higher Ed podcast where the author/educator José Antonio Bowen discusses how professors can bring tools like ChatGPT into their collegiate classrooms. It’s a great listen. Bowen is the author of Teaching Naked (2012) and Teaching Change: How to Develop Independent Thinkers using Relationship, Resilience and Reflection (2021). After listening to this podcast episode, I purchased Bowen’s new book (Teaching with AI: A Practical Guide to a New Era of Human Learning) which I anticipate will inform a blog post or two down the road. I’ll keep you posted.

Thinking with and About AI, with C. Edward Watson: This is another Teaching in Higher Ed podcast. It actually aired an episode before the José Bowen interview, but I listened to them out of order. Watson is Bowen’s co-author on the Teaching with AI book. Since I’ve been thinking (and writing) about the ethical issues around genAI for the last year or so, I really enjoyed this episode, especially how Watson unpacks his concerns about AI detection as it relates to student cheating.

Exploring New Horizons: Generative Artificial Intelligence and Teacher Education: This is a relatively new e-book from the Association for the Advancement of Computing in Education (AACE). Edited by Mike Searson, Elizabeth Langran, and Jason Trumble, the book focuses on different aspects of genAI as they relate to teachers and teacher preparation. My dean and I led a faculty reading group around the book over the past few weeks, and we’re going to use it as a guide for further conversations with our colleagues in the fall.

Four Singularities for Research: This post, written by Ethan Mollick, examines different ways that genAI will impact academic research in the future. Mollick is a business professor/researcher and offers four “singularities” to demonstrate genAI’s impacts. Mollick defines a singularity as a “future point in human affairs where AI has so altered a field or industry that we cannot fully imagine what the world on the other side of that singularity looks like.” For those of us who work in institutions of higher education, the singularities are eye-opening (and potentially troubling).

Occupational Heterogeneity in Exposure to Generative AI: This is a research article a colleague shared from the Social Science Research Network (SSRN). The researchers sought to “assess which occupations are most exposed to advances in AI language modeling and image generation capabilities.” The results suggest “that highly-educated, highly-paid, white-collar occupations may be most exposed to generative AI.” The research offers an additional glimpse of what the future of genAI-related work may look like.

No Silver Bullet

Last week, I wrote about my reservations with AI detectors. I’ve stopped using them to determine whether text has been human-generated or written by an AI tool like ChatGPT. AI detectors produce lots of false positives and disproportionately flag text written by non-native English writers. Those tendencies have raised enough doubt about the reliability of AI detectors that I recommend educators stop using them.

Some readers may be wondering what we should use instead. If an educator suspects that a student has submitted something produced by a generative artificial intelligence (GenAI) tool like ChatGPT, what should they do? As the title of this post suggests, there isn’t an easy answer. I wish I could say with confidence that someone could use a single tool or site to detect whether some text was human- or AI-generated. Since every response produced by a GenAI tool is unique, the detectors can’t work the way plagiarism checkers do. A plagiarism checker compares text against a database of written documents, determining whether a submission closely matches material that already exists. AI detectors, however, work by looking for patterns in written language. While those patterns can suggest whether some text is AI-generated, they are not conclusive.
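To illustrate the distinction, here’s a toy sketch in Python. The plagiarism-style check can point to a specific matching source; the detector-style check only computes a statistic about the text itself. The “burstiness” measure below is a deliberately simplified stand-in for what real detectors do, not an actual detection method.

```python
# A toy illustration of the difference described above. The plagiarism-style
# check compares a submission against known documents and can identify a
# concrete match; the detector-style check only computes a statistic over
# the text itself. The "burstiness" heuristic is a simplified stand-in,
# not a real AI detector.
from difflib import SequenceMatcher
from statistics import pstdev

known_documents = ["The quick brown fox jumps over the lazy dog."]
submission = "The quick brown fox jumps over the lazy dog. The dog was not amused."

# Plagiarism-style check: direct comparison against a database. A high
# similarity ratio points to a specific source -- concrete evidence.
for doc in known_documents:
    ratio = SequenceMatcher(None, submission, doc).ratio()
    print(f"Similarity to known document: {ratio:.2f}")

# Detector-style check: a statistic over the text itself. Human writing
# tends to vary sentence length ("burstiness"); unusually uniform lengths
# are one signal detectors look for. A statistic is suggestive, never proof.
sentence_lengths = [len(s.split()) for s in submission.split(".") if s.strip()]
print(f"Sentence-length variation: {pstdev(sentence_lengths):.2f}")
```

The first check can say “this matches document X”; the second can only say “this text looks statistically unusual,” which is why detector results should never be treated as evidence on their own.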

While better detection tools don’t currently exist, here are some suggestions to stave off student academic dishonesty with GenAI tools.

1. Revise your syllabi. When ChatGPT was first introduced, I wrote a post that offered resources for writing clearer, more transparent syllabus language in a ChatGPT world. Provide specific guidance for what GenAI use is allowed and prohibited in your course. Provide examples of things that are acceptable (and not) and outline the specific penalties for infractions.

2. Have frank conversations with your students. Including language in your syllabus will set up the policies and expectations of your classroom, but that only outlines the legislative side of things. You need to talk with your students about the educational expectations you have for them. Why are your assignments important? Why is the content you’re teaching critical for their program? For their careers? For their lives? What impact would a GenAI shortcut have on their development as learners? Approaching this from an educational perspective may resonate with some students.

3. Revise your assignments and grading policies. In 2015, I led a faculty learning community around James Lang’s book Cheating Lessons: Learning from Academic Dishonesty (2013). While the book was written long before the explosion of GenAI, it listed four classroom features that pressure students to cheat: an emphasis on performance, high stakes riding on the outcome, extrinsic motivations for success, and low expectations for success. Redesigning assignments and revising grading policies to reduce these aspects can potentially impact students’ choice to use GenAI to cheat.

4. Integrate GenAI intentionally. Rather than ban GenAI tools in my classroom, I’ve integrated them at different points to help students revise their writing, search for research articles, and generate ideas for paper topics. Intentional use of GenAI also means discussing the potential pitfalls with students. GenAI tools often hallucinate, and they can produce biased responses. These factors should inform (and limit) the use of GenAI tools.

5. Collect lots of student writing samples. So far, I’ve only offered advice on what educators can do to reduce GenAI-related academic dishonesty in their classrooms. But how do educators determine whether a student has presented an AI-generated work as their own? While I don’t have a silver bullet for detecting this type of academic dishonesty, I recommend that educators have students submit multiple writing assignments throughout a course. By collecting these writing samples, an educator has data to compare a suspicious submission against. Much like an AI detector that looks for patterns, you can look for patterns in a student’s writing and compare them against things the student has already submitted (see the sketch below). While this practice won’t tell you definitively whether a student has cheated, it can provide evidence to prompt a conversation with a student.
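For those curious what that comparison might look like, here’s a hedged sketch in Python. The features (average sentence length, average word length, vocabulary richness) are illustrative choices, not a validated stylometric method, and drift from a student’s baseline should only ever prompt a conversation, never serve as proof.

```python
# A hedged sketch of the pattern comparison described in point 5: compare a
# suspicious submission against a student's earlier writing using a few
# crude style features. These features are illustrative, not a validated
# method; drift from the baseline is a reason to talk with the student,
# never proof of cheating.
import re

def style_features(text: str) -> dict:
    """Compute a few crude stylometric features for a piece of writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "vocabulary_richness": len(set(words)) / max(len(words), 1),
    }

# Placeholder texts; in practice, these would be the student's earlier
# assignments and the submission in question.
earlier_samples = [
    "My first essay talked about my summer vacation and my dog.",
    "In my second essay I wrote about my favorite book and why I liked it.",
]
suspicious_text = (
    "The multifaceted ramifications of canine companionship warrant "
    "rigorous interdisciplinary scrutiny."
)

baseline = [style_features(s) for s in earlier_samples]
submission = style_features(suspicious_text)

# Report how far each feature drifts from the student's own baseline.
for feature, value in submission.items():
    typical = sum(f[feature] for f in baseline) / len(baseline)
    print(f"{feature}: baseline {typical:.2f}, submission {value:.2f}")
```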