
"From the beginning of our current AI-saturated moment, I leaned into ChatGPT, not away, and was an early adopter of AI in my college composition classes. My early adoption of AI hinged on the need for transparency and openness. Students had to disclose to me when and how they were using AI. I still fervently believe in those values, but I no longer believe that required disclosure statements help us achieve them."
"Fess up. Confess. That's the problem. Mandatory disclosure statements feel an awful lot like a confession or admission of guilt right now. And given the culture of suspicion and shame that dominates so much of the AI discourse in higher ed at the moment, I can't blame students for being reluctant to disclose their usage. Even in a class with a professor who allows and encourages AI use, students can't escape the broader messaging that AI use should be illicit and clandestine."
Some college composition courses initially required students to submit disclosure statements when using generative AI, with faculty encouraging transparency and experimenting with ChatGPT. The requirement was intended to uphold openness but produced inconsistent reporting: many students submitted AI-assisted work without the required disclosures, leaving faculty unsure how the tools were actually being used. Mandatory disclosure increasingly resembles a confession, and amid the broader culture of suspicion and shame around AI in higher education, students are understandably reluctant to comply. Even in classes where AI use is permitted, students absorb the wider messaging that such use is illicit and so avoid disclosing it. Colleagues reported similar noncompliance, prompting a reconsideration of required disclosure statements.
Read at Inside Higher Ed