Memoir

Artificial Everything

Today I read an article about the assault on higher education being perpetrated by ChatGPT and other AI bots. It was a disturbing article because it seemed far too relatable to my own experience. I was teaching a business ethics course at the University of San Diego in November 2022 when ChatGPT was released. That semester was nearly over, so the first semester that ChatGPT was in full swing with the student population was the Spring 2023 semester. Between November 2022 and the first week of February, a span of only about 70 days, the AI phenomenon had sunk in enough that I declared at the start of the course that students could use AI, but had to declare that they were using it for the various essays I would be assigning. During the course of that semester, only one student declared that she was using AI to help her write her papers. However, I couldn't help but notice that the other student papers bore a remarkable similarity to one another in style, theme and even in the use of what would otherwise be rather unusual word choices. Certain words and phrases rang in my ears as I read the papers. This could not have been a coincidence. I was certain that many or most of the students were using AI for their papers. The article I read today claims that over 90% of student essay submissions are written by AI. The article also highlights the degradation of students' morality when it comes to using and citing AI in their work. The warning signs of the absence of original thought and work, combined with the sense that playing by the rules is unnecessary, seem to have taken hold in higher education now that AI has cycled through several generations in the 2.5 years since its auspicious launch. The article warns of the death knell for higher education, but I wouldn't know because that semester so bothered me that I ended a thirteen-year career of teaching at the graduate level, and I have no regrets about leaving teaching behind…especially the teaching of ethics.

I have seen and feel I understand some of the good things that can come about through the use of AI. I have explained before that I subscribe to Anthropic's Claude AI program and use it daily in many ways. I still feel it is useful and worth paying a monthly subscription fee for, even though both Google and Apple have now embedded AI into their normal internet search functions, and they do that for free (or for whatever passes as "free" in today's tech world). I rarely cite Claude as my source or even as my tool when I write, since I think of it like citing a spoon for helping me eat soup. I could eat soup lots of different ways, but it's just easier to do it with a spoon, and the spoon doesn't change the quality of the soup; it just allows the soup to be eaten with less mess and greater efficiency. I have no doubt that my writing is completely original and that whatever AI adds is supplemental, not so very different from the pre-AI internet search research I used in my writing. The constructs, thought process and ideas are still my own.

But that is all about using AI for writing. What this article from New York Magazine describes goes way beyond writing. It tells the tale of a young person from Atlanta of South Korean heritage who got accepted to Harvard and then had that acceptance rescinded due to his morally questionable actions during his last semester in high school. He went on to get into Columbia as a transfer student from a community college, where he had perfect grades and supposedly had AI do 100% of his assignments. He then got thrown out of Columbia for launching several web services to help students cheat by using AI and then promoting his services on social media with no apologies. Being a computer programmer, he went on to develop other AI tools to effectively cheat his way through almost anything. It seemed very noteworthy that one of his inventions, which he built into a service, was designed to outsmart the big tech company interviewing process. Apparently, these tech behemoths have developed certain questions and protocols to help them find the exact sort of people they feel can best succeed in their unique and high-powered environment. Well, this whiz kid figured out what they were doing and outsmarted the algorithms by delivering subscribers the optimal answers to those interview questions, thereby greatly improving their odds of being accepted. He himself was offered a job at Amazon by using his own program, but then turned down the offer in favor of building out more apps to help students cheat their way to success.

The article goes on to say that university professors are becoming more and more flummoxed by students' AI usage, something they claim to be able to sense and feel in the work being presented to them. For their part, students are becoming more and more adept at disguising their AI use so that they cannot be discovered and declared cheaters. AI is actually helping them cheat their way into being secret AI users by tricking out the way in which their work presents itself. The growing awareness on college campuses is that you're at a great disadvantage if you do not use AI and if you do not lie and cheat about having used it. It does not take a genius to see that this is a slippery slope that is undermining higher education's core principles. Is it intelligent for artificial intelligence to be allowed to completely supplant native intelligence? Is it intelligent to let artificial intelligence teach us bad ethical lessons and then teach us how to disguise that so that no one knows we are intellectual cheaters?

I am always reminded of that old 1980s movie The Last Starfighter. The trailer park kid "wastes" his time playing a video game only to learn that the smart galactic recruiters are using the game to recruit those with digital skill sets that we on earth do not yet recognize as valuable. Perhaps the people using AI to cheat their way through college are the next generation of productive starfighters, and their leaders are the ones who find ways to make AI serve those cheating ways. But then I don't remember that trailer park kid being caught on the horns of a moral dilemma the way college students are. I cannot yet get my head around the long-term value of corruption and the absence of moral fiber from our character and lives. And let's face it, it is all around us, not just in AI, but in social media and an ever-expanding array of elements of our daily lives. Truth and reality have given way to whatever works in the moment, and it's hard to imagine that anything good will come of that.

I do not think this problem is as unique as AI makes it sound. Since the beginnings of man, there has been a tendency for corruption and moral turpitude to overwhelm society. Wasn't that the basis for the story of Noah and the Great Flood? Didn't moral man recognize that the unwashed masses gravitate to artificial everything whenever society allows them to do so? We seem to be heading into such a time, so I guess I need to start carrying a sign that says the great digital flood is nigh. If only I knew what a cubit was….

3 thoughts on “Artificial Everything”

  1. It’s a thought-provoking piece, Rich. It makes me wonder about the purpose and value of term papers that students write. Perhaps the use of AI will change the requirement for papers and students will need to demonstrate mastery and original thought in other ways.

Comments are closed.