AI and the Classical Mind, Part 3: Metaphysical Truth in GenAI Models

The articles in this series were written in response to Rosary College’s seminar on April 8, 2025, titled “Artificial Intelligence: Forming Minds and Souls in the Age of Machines.” Students were invited to ask the speakers questions, and these reflections emerged as dialogues with both the main talks and the speakers’ answers.
At the moment, Artificial Intelligence is about the hottest topic there is. Every industry remotely connected to software and data seems to be looking for a way to integrate and capitalize on it, and everybody seems to be taking a stance on it—positive, negative, or anywhere on the enormous spectrum in between. One of the foremost questions is how human or inhuman AI is—or has the capability to become. Does it have a soul? Is it going to steal all our jobs? Is it going to take over the world?
Viral Reddit questions aside, let’s consider generative AI text software specifically—chatbots, text generators, and AI writers. In the traditional, classical understanding, communication, whether written or spoken, has the transmission of truth through ideas as its goal. The original, unperverted purpose of words is to take an idea you have and communicate the truth of that idea to the person you’re talking (or writing) to. This is a practical concept, but also a metaphysical one: the metaphysical telos of human speech and writing is, essentially, truth and its transmission.
Does this also apply to words and writing generated by AI? If you’ve ever interacted with a chatbot, especially a popular one, it probably won’t take you very long to see it make some false statements or claims. In theory, that doesn’t refute its potential for truth: people, too, communicate falsely, but that doesn’t mean their communication doesn’t have truth as its end goal. People, through practice and virtue, can develop and hone this “truth tendency” and make their speech and writing more effective and “truthful”—not just in the sense of honesty, but in a metaphysical sense. Do AI programs share this ability? Is it possible for a generative AI model—say, an LLM—to be programmed and trained to tend inherently towards the formulation and communication of metaphysical truth, as a human intellect does?
Generative AI composes responses and content based on the data it has access to. The model is “trained” on a specific set of data and “taught” to produce content similar to that data, usually in response to a user’s input. Many of the mistakes and false statements that people observe in generated content are due to what’s known as “AI bias.” IBM defines AI bias as “AI systems that produce biased results that reflect and perpetuate human biases . . . Bias can be found in the initial training data, the algorithm, or the predictions the algorithm produces.” Biases occur when a model’s data contains inaccuracies, or when its algorithm is ill-suited to generating the proper content.
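To make the mechanism concrete, here is a minimal sketch in Python: not a real LLM, but the same principle in miniature. The toy model below learns only which words follow which in its training text, so it can only ever reproduce the statistical shape of that text; the corpus and outputs are invented for illustration.

```python
import random
from collections import defaultdict

# A toy next-word predictor: it learns which word tends to follow which,
# purely from the statistics of its training data.
corpus = "the capital of france is paris . the capital of italy is rome ."

# Count, for each word, the words observed to follow it.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    """Generate text by repeatedly sampling an observed next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
# Possible output: "the capital of france is rome ." -- fluent and
# data-shaped, but true only by accident of what the data contains.
```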
For an AI model to have a “truth tendency,” this bias—or “non-truth tendency”—would need to be removed. The design of the algorithm and the selection of the training data it receives (both human actions, but that’s a little beside the point) would need to ensure that the model gives the least biased, most objective, and most “truthful” answers possible. A simplified sketch of what that curation might look like follows.
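Note that the “de-biasing” happens outside the model, before training ever begins. In this hypothetical sketch (the fact_checked table and curate function are inventions for illustration), humans decide which sentences the model is allowed to learn from:

```python
# Hypothetical pre-training curation step: human reviewers decide
# which sentences enter the training set.
fact_checked = {
    "the capital of france is paris .": True,
    "the capital of france is rome .": False,
}

def curate(sentences):
    """Keep only sentences a human reviewer has marked as accurate."""
    return [s for s in sentences if fact_checked.get(s, False)]

training_set = curate(list(fact_checked))
print(training_set)  # ['the capital of france is paris .']
# The model never chooses truth; its curators choose which patterns
# it will be able to reproduce.
```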
So is the answer yes? If the biases of an AI model are removed as far as possible, does the content it generates have that metaphysical telos of truth? On the surface, it seems so: people, too, hone their ability to communicate truth by removing biases, and so come to a greater level of objective communication. True, the process is a little different, but AI appears to have the capability to “un-bias” its content and communicate effectively in the same way people do.
But is this true when we dive beneath the surface? Let’s take a look at how the removal of bias in the human intellect works, and see if it’s as similar as it seems.
This idea of truth in communication derives in large part from the philosophy of Socrates in Plato’s dialogues. Socrates describes himself as having been “attached to this city [Athens] by a god . . . as upon a great and noble horse which was somewhat sluggish because of its size and needed to be stirred up by a kind of gadfly. It is to fulfill some such function that I believe the god has placed me in the city” (Apology 30e). Socrates sees himself as an agent, a “gadfly,” meant to bring the complacent Athenian people to greater virtue through self-knowledge.
In his dialogues with the citizens, this often takes the form of “self-agreement”: Socrates asks a question, his interlocutor makes a claim (often one Socrates doesn’t agree with), and Socrates proceeds to show that not only is the claim false, but the interlocutor doesn’t actually believe it himself. This happens in the Gorgias, when the orator Gorgias makes a claim about justice, but then, under Socrates’ examination, makes another, contradictory claim. Socrates says:
. . . at the time you said that, I took it oratory would never be an unjust thing, since it always makes its speeches about justice. But when a little later you were saying that the orator could also use oratory unjustly, I was surprised and thought your statements weren’t consistent, and so . . . I said that if you, like me, think that being refuted is a profitable thing, it would be worthwhile to continue the discussion . . . (460e-461a)
Socrates engages in these discussions and refutes the claims of his opponents precisely to encourage their self-knowledge through self-agreement. In other words, he wants to remove their biases so that they can actually mean what they say—so that their words can contain “truth.” When they say something without actually believing it, that, in a sense, is a bias, and Socrates wants to remove it so their intellects—and, by extension, their words—will be more “examined.”
Throughout the Gorgias, Socrates applies this more specifically to oratory and rhetoric. Talking to Callicles about the ideal role of the orator, he says:
So this is what the skilled and good orator will look to when he applies to people’s souls whatever speeches he makes as well as all of his actions . . . He will always give his attention to how justice may come to exist in the souls of his fellow citizens and injustice be gotten rid of, how self-control may come to exist there and lack of discipline be gotten rid of, and how the rest of excellence may come into being there and badness may depart. (504d-e)
Truly good oratory—in contrast to the people-pleasing “flattery” that Socrates condemns—tries to enact real improvement in its hearers. Instead of just telling the audience what they want to hear, the “good orator” tells them what they need to hear—his oratory contains metaphysical, self-agreeing truth. And, as Socrates points out earlier, because it’s “necessary for him [the orator] to know what’s just and unjust,” he’s “necessarily just” himself (460a-c).
Speech and writing are always for other people, and good speech and writing are always for other people’s benefit. To benefit others (as well as themselves) through “truthful” communication, speakers and writers need a certain inclination towards, and understanding of, the truth—and that comes through self-knowledge and the removal of bias.
Let’s return to AI. When a generative AI chatbot or writing software creates a piece of written content that contains “truth,” it does so because that “truth” is contained in the data it has access to, not because it has any inclination towards or understanding of truth. In fact, that’s impossible—AI is just a complex, multilayered pattern-recognition program that spits back generated content based on the construction of its training data. Any “truth” it communicates is accidental, not essential. It can’t have the purpose of improving its “readers” or enacting benefit through truth in them. And an AI model can’t have its biases removed the way a person can: there’s no “disagreement” in it that can be removed in order to achieve a greater state of self-agreement and self-knowledge.
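The contrast with Socratic refutation can be seen in miniature in the toy model sketched earlier. Its entire “view” of what follows the word “is” amounts to a frequency table (the counts here are illustrative):

```python
from collections import Counter

# The toy model's entire "opinion" about what can follow "is":
follows_is = Counter({"paris": 1, "rome": 1})

# A person asserting both "the capital of France is Paris" and
# "... is Rome" holds a contradiction that Socratic questioning
# could expose and remove. The model holds both counts side by
# side with no tension at all: there is nothing to refute, only
# probabilities to sample from.
print(follows_is.most_common())  # [('paris', 1), ('rome', 1)]
```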
The purpose of communication—written or spoken—is to communicate truth and inspire improvement in the hearer (or reader). AI-generated content can only do this accidentally, via its training data, not of itself.
Something written by AI writing software—inasmuch as it’s created by an algorithm instead of a person—is metaphysically inferior to an ordinary piece of writing. Even when the software is told what to write, the words and sentences themselves lack a certain interpersonal orientation that comes from being created by an individual for a specific purpose.
That’s a highly abstract argument; what does it mean practically? Imagine you’re writing an article on why your audience should consider reducing their social media usage. If your intentions are right, your purpose is to enact a benefit: inspiring your readers to live a more holistic, balanced life. Now imagine you have ChatGPT write it for you. Even if the words are exactly the same, ChatGPT can’t have that same purpose, and it can’t relate to your audience in the same way. This strips communication of its essential interpersonal element, turns it into something purely utilitarian, and hampers the benefit of both your audience and yourself.
The rise of AI-generated content should be a challenge to us in our own content creation. Are we actually writing and speaking for the benefit of the people we communicate with, or is it just out of necessity, in a utilitarian way? AI can’t write like a person, but a person can write like AI. If we can regain this Socratic vision of self-knowledge, we’ll be one step closer to restoring truth to our communication.