The AI in Classes Experiment Pt. 1 

Author(s): Brandy A. B. Perkl, Ph.D. - Feel free to use/adapt with credit! | Originally posted: July 13, 2023

1) My Official AI Use Course Policy (via UArizona)

In this course you are welcome and expected to use generative artificial intelligence/large language model tools (e.g., ChatGPT, DALL-E, Bard, Perplexity). Using these tools aligns with my teaching goal of training leaders for our current and future society.

Citation Example

OpenAI. (2023). ChatGPT (July 2023 version) [Large language model]. https://chat.openai.com/share/e5416379-c4a3-47d2-8cb9-2a603d1569b4

Reflective Comments Example

Reflecting on my creation of this page. 

AI Acknowledgement Section: No site copy for this page was generated using AI; however, ChatGPT was consulted regarding when students should be penalized for using it in college courses. https://chat.openai.com/share/e5416379-c4a3-47d2-8cb9-2a603d1569b4

In my own words vs. the University's words: You can use AI on ANY assignment, but ONLY with active transparency.

Active transparency means: Noting briefly (either via a citation or reflective comments) how and when you used AI in your process. 

My hope: That we will help each other learn when AI is helpful, how to use it ethically, and when it is useless or even harmful. To do that, we have to disclose our usage. As a leadership student, you may also want to include a brief critique* of how well it did at 'helping' you complete your work, so we all learn together what to be wary of and when to use it with minimal concern.

Possible penalties I hope I never need to use: I reserve the right to impose a significant penalty for the unreflective reuse of material generated by AI tools, up to assigning zero points for merely reproducing the output from AI tools without citation, reflection, or critique.

My pledge: However, I am committed to being fair and maintaining transparency in my grading, so exercising the penalty rights noted above would begin with a discussion with the student. (I’m hoping we won’t have a problem in this regard, but I want to make sure that expectations are clear so that we can spend the semester learning things together, not worrying about the origins of your work. If any part of this is confusing or uncertain, please reach out to me for a conversation before submitting!)

2) Why AI at all?? Because it's an emerging skill.

Learning to use AI is an emerging skill, and those who develop these competencies will likely be preferred for future employment over those who do not (particularly applicants with 'prompt engineering' skills; Visé & Klar, 2023).

Learning to use technology appropriately is essential for leaders. So I encourage you to think of the help you can get from these tools as a much less sophisticated version of the assistance you can receive (for free!) from a Think Tank tutor. That tutor might legitimately ask you a question to jump-start your imagination, steer you away from the passive voice, or help identify a poorly organized paragraph, but should never do the writing for you. A major difference here, of course, is that an app is not a person. It’s a machine adept at recognizing patterns and reflecting them back at us. It cannot think for itself. And it cannot think for you. As society grapples with when and how using these tools is or is not acceptable, we want to be ahead of these curves if possible and, as leaders, influence the conversations around us. To do that, though, we have to experiment with the tools ourselves!

To that end, I will provide basic tutorials or optional prompts you can leverage for our work in some cases. However, it's very important to consider the ethics of use and the limits of it as well. I am happy to talk/meet/email re: how to use these tools in 1:1 meetings or after live sessions with you. Please let me know if you have any questions about the use of AI in this class - I'm figuring it out with you as we go. 

Wait, step back - what the heck are AIs and Generative AIs?

Generative AI technologies (GAIs) are any artificial intelligence technology that can produce or summarize digital content (e.g., text, images, video, audio) from a prompt. Current examples include ChatGPT, Bard, Perplexity, the Google Docs “Help me Write” feature, DALL-E, and Midjourney. Other AIs students commonly use include Grammarly, Google Maps, and similar support features provided by software rather than by a human.

If you have no idea how to get started but want to try, I recommend this page: https://libraryhelp.sfcc.edu/Chat-GPT/ (Note: Bing incorporates a GAI that can be used without making an account and works similarly to ChatGPT! So if you do not want an account but still want to try it out, use Bing instead.)

Important Caveat: Be aware that other classes will have different policies, and some may forbid AI use altogether! When prohibited or not in compliance with a course's policy, usage could result in violations of the Honor Code and carry actionable consequences at the class, college, or University level.

Ideal Class Uses

Potential Best Practices: 

Mentally prepare an argument for why AI use was appropriate for the task you used it on; this kind of justification comes up in many real-world leadership scenarios right now as people figure out how these tools will change the world of work. Ideally, you should have a reason (even if the reason is personal, e.g., you struggle to get started when writing).

Don't 'copy' OUT of the AI tool; instead, use your interactions as a learning experience to inform your work. If you do 'copy,' be sure to put the copied text in quotation marks and cite it, because it is a quote of work you did not produce. (This can ensure you don't slide over the ethical line into misconduct.)

This is a good guide: https://docs.google.com/document/d/1p4mXxBG9BMDovp72RMjXpJDaghX_HyJNLD_7sOHZPqg/edit 

Limits

Potential Best Practice: Treat AI-based assistance the same way you treat collaboration with other people: as a tool for understanding that will be influenced by the collaborator's views/biases. Collaboration with a human would be cited and made transparent when used for understanding concepts, but does not generally need to be cited for mechanical improvements (refining grammar, citations, etc.). The same goes for AI collaboration.

Ethics

Due to all of the limits noted, and the fact that we're all figuring out in real time how AI will work for us and what the rules are, transparency and disclosure of usage are important!

Materials produced by a GAI are derived from previously published work, though the tool does not cite its sources for you. Thus, using it without (1) verification, (2) disclosure, and (3) citation can easily be seen as a violation of academic honesty policies and viewed as unethical by others. Typically, leadership students do not want to be seen as unethical. Notes on how to cite ChatGPT, which apply to other GAIs, can be found in the References section of this page.

Potential Best Practice:

Have an AI Acknowledgement Section! In it, disclose your use of any support tools (incl. Grammarly, Google Docs features, GAIs, etc.), noting what you used and how. See the Reflective Comments Example above for a model.

3) Don't Surrender Your Voice

In leadership over the past decade-plus, there has been a continual push for more transparent, authentic leaders. These leaders are often most distinguished by, and prized for, their 'voice'. We value our leaders for helping us see and share in a goal and for illuminating how to create the path to that goal. We want their words to feel REAL and ALIVE to us and to help us see shared visions. That is difficult for an AI to do, at least so far, so I encourage you not to surrender your voice to save time + effort.

Excerpt from Prof. Loewe of St. Edward's University's Policy for Ethical Use of Generative AI Technologies... (Loewe, 2023)

"...Maybe GAIs will give you a useful suggestion, organizational idea, or other help, but their outputs, especially in well-established genres, are often bland pabulum—or worse. 

GAIs such as Chat GPT and Bard are trained on text found online. You already know from using the Internet that much of what appears online is wrong, banal, or generated by copycats; is glib brand-building fluff, clickbait trash, or political hype; or is written in a voice-of-the-committee style. 

As a result of the training data, some GAI output is like an OK-looking but ultimately unsatisfying (or even slightly gross) gumbo made from mystery ingredients. 

GAIs are interesting tools that can help you improve your writing in some ways, but you retain both the privileges and the responsibilities of a human being who can make choices in using words..."

Note that risk rises with the level of AI involvement, particularly in critical thinking tasks where you need to be able to differentiate junk from quality, which requires an existing knowledge base. Asking for proofreading to Standard Edited Academic English carries fairly low risk; the risks of inaccuracies, hallucinations, biases, and loss of stylistic impact (voice!) grow as assistance increases.

Other tools that you might want to try/be aware of: You may want to start with this directory for finding tools to help you do a number of different tasks: https://www.canopydirectory.com/

Acknowledgements

While I did not end up using GAI to create this page, I use Grammarly consistently when preparing course materials (particularly to attempt to be more concise and clear - I am wordy by nature + trade).  

Much of my text for this page was adapted from text provided in the document below by other professors who generously support one another in current efforts to adapt to AI, and I welcome other professors to do the same with my work as well:

Eaton, L. (Ed.). (n.d.). Classroom policies for AI Generative Tools. Crowd-Sourced Classroom Policies for AI Generative Tools. https://docs.google.com/document/d/1RMVwzjc1o0Mi8Blw_-JUTcXv02b2WRH86vw7mi16W3U/edit?fbclid=IwAR1J1sSmlMv6YStCrn25JJINaEHcCHWbz9Trm1Vw-ot-xOf1tNIRjrvI70M

References

Doctorow, Cory [@Doctorow]. (2023, June 15). The internet is increasingly full of garbAIage, much of it written by other confident habitual liar chatbots, which are now extruding plausible sentences at enormous scale [Image attached] [Tweet]. Twitter. https://twitter.com/doctorow/status/1669401889031753731?ref_src=twsrc%5Etfw  

Loewe, D. M. (2023, July 15). Policy for ethical use of Generative AI Technologies. https://docs.google.com/document/d/1onwUP12kIqcU2-s-xjEMY-UJ4cWf-8xApCE3gxTcQB0/edit 

McAdoo, T. (2023, April 7). How to cite ChatGPT. American Psychological Association. https://apastyle.apa.org/blog/how-to-cite-chatgpt <-- HOW TO CITE CHATGPT! Though my preference would be to use their new option to share a link to your prompts vs. a generic link to ChatGPT, as I feel that's more transparent, e.g., https://chat.openai.com/share/ef0a23e0-9384-4319-8b33-99f76a71bb6b

Visé, D. de, & Klar, R. (2023, April 18). Nine in 10 companies want employees with ChatGPT skills. The Hill. https://thehill.com/policy/technology/3955384-ai-employees-companies-chatgpt-skills/

Image sourced from Adobe Stock (licensed via UArizona).