TOTUS TUUS

The Rosary College integrated humanities student blog

AI and the Classical Mind, Part 1: To Use or Not to Use, That is the Question

[Image: Greek vase depicting a blacksmith]

ARTICLE INFO

The articles in this series were written in response to Rosary College’s seminar on April 8, 2025, titled “Artificial Intelligence: Forming Minds and Souls In the Age of Machines.” Students were invited to ask the speakers questions, and these reflections grew out of dialogues with both the main talks and the speakers’ answers.

There are plenty of things that make life easier and are very helpful: electric drills are much easier to use than an ordinary screwdriver; texting is faster than writing letters; the “control+find” shortcut on computers is faster and easier than scanning an online article for a specific sentence. Arguments could be made that these things are not good for people, since their users are no longer working and developing their minds, muscles, patience, and so on, yet they are used all the time and not really considered bad by most people.

Does Artificial Intelligence not, at times, fall into basically the same category as some of these? If Artificial Intelligence, for example, collects data that is then reviewed and checked by people who think for themselves and can better tell what is true and what is false, what is helpful and what is not, then is it not operating much like an advanced “control+find”? There are different opinions on this. This essay’s intent is to examine a few of these opinions, namely those of Dr. Michael Gonzalez and Dr. Michael Shick, and to draw its own answer from them, as well as from Plato’s Gorgias and Aristotle’s Rhetoric.

In Rosary College’s recent seminar on Artificial Intelligence (AI), Dr. Michael Gonzalez, Executive Director of the Institute for Constitutional Thought and Leadership, gave a talk on AI in education. He began by quoting Plato’s Phaedrus: “Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them” (275a). Dr. Gonzalez followed this up a little later by saying that “(t)here’s a general caution among the ancients in the face of technology… They don’t simply assume that technological progress or optimization is also the advance or benefit of human beings” (14:00). He noted that there is a presumption in modern times that technological progress and human progress “tend to go together or can go together” (15:33), and added that “we are beholden, in a sense, to the technologies we use… as Thoreau puts it in Walden, ‘men have become the tools of their tools.’ There’s a way in which technology can use its users” (16:08).

Dr. Gonzalez made the argument that technology is a “crutch.” He compared the use of technology, such as AI, to a crutch that, if one were to use it too much, especially when he did not need it, would “atrophy” his mind, for “if you use crutches when you don’t need them, your legs will atrophy… the muscles in your legs will no longer be strong… You can lose the function by relying on the help, in other words” (17:24). Dr. Gonzalez then laid out his two claims, one broad and one narrow. His broad claim was that “even something that makes your life easier, that makes it more expeditious… can also be bad for you” (18:06). His narrow claim was that “generative AI… the AI that produces text in response to prompts… makes life easy by providing a crutch for your intellect” (18:35). This, Dr. Gonzalez stated, was the core claim of his talk. Generative AI, he said, encourages one to skip the part of writing called “Inventio,” the discovery of arguments and of the proper wording, method, style, and so on, and thus deprives one of the chance to develop that “muscle.”

As to the question raised above, whether AI falls in the same category as other tools, Dr. Gonzalez never answered it explicitly, but one can deduce his likely answer from what he did say. One question he did answer was why AI is different from cars, which one could argue are also bad for humans, since, in general, they give people less opportunity to build the strength to walk long distances. Dr. Gonzalez admitted that, following the same logic, cars are probably not good for humans; however, he said, “I’d be more willing to accept that kind of physical cost and try to offset that than a kind of cost on my rationality” (1:37:35).

If one were to apply this logic to the question from the beginning of this essay, one might deduce that Dr. Gonzalez would say the difference between AI and an electric drill is that AI atrophies one’s intellectual muscles while an electric drill atrophies one’s physical muscles. As for the “control+find” function, since it comes closer to atrophying the intellectual muscle of scanning and sifting through a source, it is harder to tell how Dr. Gonzalez would compare it with AI.

Having examined Dr. Gonzalez’s view on AI, it is now time to look at Dr. Shick’s. Dr. Michael Shick, the president of Rosary College, was the second speaker of the AI seminar. His outlook focused on AI as a tool. Early in his talk he referenced a saying very similar to some of the points Dr. Gonzalez made: “Don’t let the tail wag the dog” (34:32). He summed this up by saying, “ultimately what that means is don’t let the means or the tool force the decision” (34:44). Dr. Shick said that it is the responsibility of “human beings to decide on what… actually makes sense, what is rational, and to use the tool to help you get to where you need to go” (34:54).

AI, he said, is a tool to help people make decisions, not to make the decisions for them. One should not take it at its word without any attempt to verify: “(t)aking them at their recommendations and discerning towards the action forward is more prudent” (37:40). He emphasized that, just as one would not take another person’s word at face value, neither should one take AI’s word at face value. The crux of his suggestion, Dr. Shick said, is to “use it as a tool—understanding that it’s likely fundamentally flawed—validate what it states, and then adjust from there. No one should be using Artificial Intelligence to do the thinking for themselves” (42:19). He then agreed that using AI as a crutch will cause one’s ability to think critically to atrophy and regress.

So, what would Dr. Shick say in answer to this essay’s primary question? Again, he did not address it explicitly, but one can deduce that he would likely apply the same points he made in his talk. To a certain extent, AI is a tool like the examples in that question, and since it is a tool like them, one should use it as a tool. A drill does not tell its user which screw to drive into a piece of wood; likewise, AI should never be the one making the decisions. And since it is far easier to let AI be the guide than to let an electric drill direct the hand of its operator, it is all the more important that the user of AI “trust, but verify,” as Dr. Shick said repeatedly in his talk.

Now, for the final part of this essay, Aristotle and Plato must come forward. Obviously, these two ancient philosophers never spoke of Artificial Intelligence, but that does not mean they cannot help men find an answer to the problem. One can find a simple answer to the problem of AI in Plato’s Gorgias. In the dialogue, Socrates engaged in his typical occupation of dialectic, asking questions to expose weak arguments and to find the truth. He even said to Callicles that if he “happened to have a soul made of gold…, don’t you imagine I’d be well pleased to find one of those stones people rub against gold to test it, the best one, so that if I went ahead and applied my soul to it, and it confirmed to me that it had been nurtured in a beautiful way, I’d know for sure at that point that I was in good enough shape and had no need of any further test?” (486d). Socrates’ point here is that one cannot consider an argument solid and trustworthy until one has tested it. Applied to AI, this falls in line with Dr. Shick’s view that AI is a tool and that its user has a responsibility to verify, not simply trust.

Aristotle, in his Rhetoric, said that the argument against rhetoric, namely that “someone using such a power with speeches might do great harm,” “applies in common to all good things except virtue, and most of all to the most useful things” (1355b3-5). Aristotle’s point is that any tool, good though it may be, can be abused. One can imagine, however, that Aristotle might have more reservations about AI, since it does indeed try to usurp the operations of the intellect, which results in underdeveloped intellects. Though any tool can be subject to abuse, a key difference between rhetoric and AI is that the art of rhetoric builds muscles in the mind, whereas AI often serves as a crutch that leads to the atrophy of the intellect. If one uses AI with the greatest care not to abuse it or lean on it as a crutch instead of thinking for himself, perhaps Aristotle would have condoned it; but, at the very least, he would have taken the same stance as Dr. Shick.

In conclusion, whether or not one believes men can use AI responsibly, verifying everything it provides, it is clear that no one should ever take AI at its word. People should take everything that AI provides with a grain of salt, testing and checking it against other sources or their own rationality. Trust, but verify.
