Logic Reloaded: Assessing AI’s Capacity for Human Replacement

Cover Photo: An AI-generated image created by HSP student Brady Van Oss, depicting ancient philosophers debating the limitations of artificial intelligence.

My interest in artificial intelligence (AI) began with a thought-provoking question from one of my 12th grade students during our Introduction to Philosophy class. He asked, “If we were to create an AI with human-like capabilities, such as Aristotle might deem requisite for the immortality of the soul, including the ability to grasp abstract, universal concepts, then wouldn’t Aristotle argue that we’ve created a new being with an immortal soul?” The straightforward answer would have to be “No,” given Aristotle’s belief that his conclusion about the immortality of the soul was only probable, rather than certain, based on the reasons he gave. Furthermore, the question presupposes the feasibility of endowing AI with complex human traits such as self-reflection, abstract conceptualization, and moral autonomy, a matter still fiercely debated among scientists and philosophers alike.

Still, the question got me thinking, “What exactly can AI do, or give the impression that it could do, along these lines?” In order to find out, I had to test it for myself.

Image of ChatGPT, a free application that utilizes artificial intelligence to generate dialogue.

Practical Tests of AI’s Logical Capabilities

Here’s what I discovered: After engaging with ChatGPT, which assured me of its logic-handling capabilities, I subjected it to the same types of syllogism exercises I give to my senior philosophy students after one month of study. I quickly determined that it was, in fact, shockingly bad at doing logic. The responses it generated were riddled with errors. For instance, when asked for a specific type of syllogism in Aristotelian logic, the example it provided was not only invalid but also failed to match the form I had requested. Despite multiple attempts to guide it, ChatGPT struggled to identify the rules it had broken. It could not execute exercises in syllogistic logic without continual aid. Essentially, I found myself doing all the intellectual heavy lifting.
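To give a sense of what these exercises involve, here is a simple illustration of my own (mine, not one of ChatGPT’s responses). A valid syllogism in the classic “Barbara” form runs:

All mammals are mortal.
All dogs are mammals.
Therefore, all dogs are mortal.

By contrast, “All dogs are animals; all cats are animals; therefore, all cats are dogs” commits the fallacy of the undistributed middle: neither premise says anything about all animals, so the middle term “animals” cannot link “cats” and “dogs.” It was errors of this kind, which my students learn to spot within a month, that ChatGPT repeatedly produced and then failed to identify.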

ChatGPT’s initial failure prompted me to question its logical aptitude once more, to which it replied:

“Yes, I can assist with various aspects of logic. Whether you have questions about logical structures, reasoning, or specific problems, feel free to ask, and I’ll do my best to provide helpful information and guidance.” (OpenAI, ChatGPT)

Now, imagine a scenario where a student decides to consult ChatGPT for assistance with logic. ChatGPT would confidently assure the user that it can provide that help. Yet what neither the user nor ChatGPT actually knows is that ChatGPT is woefully inept at logic.

The Ethical Quandaries of AI Dependence

When considering the things we might want to delegate to AI, it is essential to be aware of its propensity for error and blatant misinformation. A clear example was a recent case in which two lawyers submitted six fake cases, along with several fabricated quotes and judicial opinions generated by ChatGPT, in support of their arguments in an aviation injury claim. A federal judge subsequently imposed a $5,000 fine on the two lawyers for acting in bad faith. The law firm has since filed an appeal, insisting that the lawyers had acted in good faith and that the blame should be placed on ChatGPT. But isn’t this just scapegoating a mechanism that lacks moral autonomy in order to evade personal accountability? If this is the sort of thing that happens among professionals held to a code of ethics, what should we expect from the amateur use of AI?

My own experiments with ChatGPT further revealed that AI can generate convincingly false information while refusing to acknowledge it. In one instance, it generated a poem based on specific criteria I had assigned. I was initially impressed by how closely it resembled the poem I had provided as a model. Several exchanges later, I asked it to refer back to that poem. It insisted that it had not written the poem, claiming instead that it was a poem by John Sterling it had found on the internet, even though it had specifically told me earlier, “Here’s a poem I wrote…” based on the criteria I had provided.

ChatGPT apologized for the confusion but adamantly refused to admit that it had actually generated the poem. When I asked it for sources proving that the poem was written by Sterling, it could not provide any. Nor could I find any evidence to that effect on the internet. One way or the other, ChatGPT either claimed to have produced something written by another author (plagiarism) or attributed what it wrote to another author (forgery). Both are forms of fraud and lying. But let us recall that AI is not a morally autonomous subject. It can propagate false information and pass it off as true, but it cannot lie.

HSP students learning chess, a game of logic and reasoning.

The Irreplaceable Value of Human Reasoning

For some people, the question still stands as to whether AI ought to have moral autonomy, assuming that some future technological breakthrough could endow it with the power of “knowing good and evil.” When I challenged ChatGPT with complex moral dilemmas, it always defaulted to moral pluralism. This is to be expected, since its generated responses are based on patterns in the data it received during training. Because it draws on anything and everything on the internet, all things being equal, it can have no universally accepted or objective grounds for moral discernment beyond the vast plurality of views at its disposal. When I asked whether that were the case, in other words, whether it were true that “nothing is absolutely morally binding for all peoples of all times, given the plurality of views on the matter,” ChatGPT instantly generated an 874-word response, from which I gleaned this quote: “I think that moral principles or values are neither absolute nor universal.”

That being the case, let us fathom the possibility of a mechanism as powerful as AI assuming full moral autonomy. Presently, GPT-4 gives the impression that it is capable of reasoning much faster and better than the average human. Yet it has also provided us with countless examples in which its reasoning is egregiously flawed. Notwithstanding the fact that its ability to execute logical reasoning will improve with training over time, the question remains whether we want to hand over the very things that make us human to a machine, whether we ought to abdicate our humanity to a mere simulacrum of intelligence.

Taking all of this into account, it’s clear that we can’t leave it to Large Language Models like ChatGPT to do our thinking for us. My engagement with AI reinforced a critical lesson: the irreplaceable value of human reasoning and ethical judgment. Through my experiments, it became evident that the logical tasks AI failed to perform correctly were identical to those I assign to my 12th grade students after just one month of logic studies. Such an insight not only bolsters our students’ appreciation for the skills they acquire through their study of logic, but also highlights the importance of nurturing their moral and ethical discernment in a world rapidly disintegrating into moral pluralism. Whatever the future brings with advancements in technology, the cutting edge will always belong to those who can grasp these concepts better than a machine.


Dr. James “Chip” Stone teaches High School Humanities at Holy Spirit Preparatory School.
