“As Artificial Intelligence becomes increasingly advanced and integrated into our daily lives, it is crucial that we have clear and comprehensive policies in place to govern its use and ensure ethical practices.”
This is a representative sample of faculty responses to the question “If you have successfully integrated use of ChatGPT into your classes, how have you done so?” in a 2023 Primary Research Group survey of instructors on views and use of the AI writing tools. A few other responses of note were “It’s a little scary,” “Desperately interested!” and “I’m thinking of quitting!”
However exciting the arrival of GPT-4 may be, it also highlights a troubling gap: many institutions and faculty members still lack AI policies.
When ChatGPT appeared to be the most sophisticated AI writing tool in the college-writing landscape—only a couple of weeks ago—faculty were abuzz with conversation about how to design assignments that could evade the software, how to distinguish machine writing from human writing, and how to protect students from AI’s sometimes disturbing replies.
Then came GPT-4.
“The old version from a few months ago could be a solid B student,” said Salman Khan, founder of Khan Academy, an American nonprofit focused on creating online educational content for students. “This one can be an A student in a pretty rigorous program.” Khan’s nonprofit is working on an AI assistant designed to ensure that students still do most of the work.
Without AI policies, there is a risk that AI technology will be used in ways that violate privacy, perpetuate bias, or even harm individuals. It is essential that universities and other institutions work together to create policies that address these issues and ensure that AI is used responsibly and ethically.