AI’s Remarkably Imperfect Productivity Tricks Us Into Mistakes. Here’s How You Can Avoid Them.
I have observed fellow educational leaders use AI in embarrassing ways, without editing or personalizing the output. It’s a seductive trap: AI can produce what appears to be high-quality work, but when we look closer, there are serious red flags.
Educators need to balance harnessing AI’s potential with preserving the integrity of its use. At the same time, we play a role in modeling for others how AI can enhance our work rather than undermine it.
Challenges of Unedited AI Responses
Recently, I saw a well-intentioned commendation email sent by a supervisor to a teacher worthy of praise. The problem? It screamed, “I used AI and didn’t change it.”
Understand that educational leaders should be embracing AI. I certainly do. Yet its remarkably imperfect productivity can set up uncomfortable call-outs, as in this case: the teacher approached me and said, “That was nice but weird.” That’s a perfect way to sum up AI output!
The integration of AI technology has revolutionized various aspects of education, from personalized learning tools to administrative efficiencies. However, the ethical use of AI-generated content remains a critical concern, particularly in academic settings where integrity and originality are paramount and where educational leaders should be modeling appropriate use.
Consider:
Educational Impact – Students may inadvertently adopt incorrect or incomplete information if AI responses are directly copied. This hinders their critical thinking and learning development.
Legal and Ethical Implications – Educational institutions must navigate the legal and ethical implications of using AI-generated content. Proper attribution and understanding of fair use policies are crucial to avoid legal repercussions.
Best AI Use Practices for Educational Leaders
In May of every school year, I, like many leaders, get barraged by requests to write letters of recommendation. In almost every case, I want to, but it is labor intensive. One of my first experiences tinkering with AI was entering non-identifiable resume content to produce a faster, more personalized draft.
I’m a fast writer, but 10 letters at 20 minutes a pop is too great a cost, pulling me from other important duties and personal time with family. When I use AI and get a suggested letter, I then spend 3-5 minutes correcting several recurring patterns in the AI’s response and personalizing where appropriate.
Ultimately, spending 30-50 minutes on 10 letters instead of 200 is well worth the effort before copying, pasting, and sending.
For every input, especially when it’s directed at or about an individual (such as a letter of commendation or recommendation), you should take the time to edit the AI response.
Here are the guidelines I follow to balance efficiency (the time saved) with the quality of the response:
1. Use what I call “deliberate feedback” in the prompt. One of the first signs that the supervisor’s letter of commendation was exclusively AI generated was that nothing in the output set apart the individual being recognized. It was a generic message acknowledging her achievement; the personalization needed in that response was absent. Let’s explore two techniques for ensuring the content is personalized:
- Old-school revision process. We teach students during the writing process how important editing and revising are. We must follow the same rules with AI: once you get the generated content, go back and manually personalize it around the talking points. AI does a good job of organizing content; we must add the personal touch.
- Access deliberate feedback. When I wrote from resume content, I was feeding the AI deliberate information about the individual. The same approach applies to a method I shared for collecting survey feedback and organizing the results quickly and systematically: I feed the AI the deliberate information and then tell it to use that information in the prompt. (“Based on this content, write me a letter of recommendation,” or “Based on these survey feedback responses, identify patterns and trends and make recommendations.”) You can then be confident that the AI response will be accurate, versus the mistake of saying, “Write me a letter of commendation for a person who presented at a national conference.”
2. Avoid redundancies. AI is designed to please the user. That’s generally great, but not when it repeats an explanation over and over. You will notice this when asking it to respond to a prompt: the AI will say the same thing three slightly different ways. We don’t communicate like this in person, so remove the redundancies and stick to the point. Clarity is key.
3. Remove those weird words you don’t use that AI injects. My favorite AI word is “unwavering.” I don’t use that word. It sounds weird coming from my mouth, and there are plenty of words like it that AI commonly uses. Remember that your voice matters when communicating the message. AI still sounds too mechanical, and even when it can learn your tone, it struggles with strange lexicon. Instead of “unwavering” and “tireless,” both of which I have removed dozens of times, I might simply say “your dedication to” or “your hard work.”
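For readers who want to operationalize these guidelines, the workflow above can be sketched in a few lines of code. This is a minimal illustration, not any particular AI tool’s API: the function names and the small stock-word list are my own hypothetical examples. It shows the two habits side by side: anchoring the prompt in deliberate, concrete source material, and scanning the returned draft for stock AI vocabulary that a human should then revise.

```python
# Words AI drafts commonly inject, mapped to plainer phrasings a human
# might substitute. This list is illustrative; build your own over time.
STOCK_AI_WORDS = {
    "unwavering": "your dedication to",
    "tireless": "your hard work",
}

def build_deliberate_prompt(source_material: str, task: str) -> str:
    """Anchor the request in specific, non-identifiable content
    so the output is personalized rather than generic."""
    return f"Based on this content:\n{source_material}\n\n{task}"

def flag_stock_words(draft: str) -> list[str]:
    """Return the stock AI words found in a draft, so a person can
    go back and replace them in their own voice."""
    lowered = draft.lower()
    return [word for word in STOCK_AI_WORDS if word in lowered]

# Build a "deliberate feedback" prompt from concrete talking points.
prompt = build_deliberate_prompt(
    "Presented a session on reading interventions at a national "
    "conference; mentored two first-year teachers.",
    "Write me a letter of commendation.",
)

# Scan a (hypothetical) returned draft for words to revise by hand.
draft = "Her unwavering commitment and tireless effort impressed everyone."
print(flag_stock_words(draft))  # → ['unwavering', 'tireless']
```

The point of the sketch is the order of operations: specifics go in before the request, and a human pass for voice and redundancy happens after the draft comes back, never the reverse.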
While AI offers tremendous benefits in education, its integration must be approached with caution and responsibility. Educational leaders play a major role in fostering a culture of academic integrity and ethical use of AI, and modeling the above matters.
By checking content responses, using deliberate feedback, personalizing, and removing redundancies and strange AI wording, we can leverage the power of AI while safeguarding educational integrity, not just for us, but for everyone who looks to us for guidance.