THE PHILOSOPHY OF THE «LEARNED IGNORANCE»: LINGUISTIC AND CONCEPTUAL ASPECTS OF HALLUCINATIONS IN LARGE LANGUAGE MODELS

Authors

DOI:

https://doi.org/10.32782/2410-0927-2025-23-2

Keywords:

Artificial Intelligence (AI), Large Language Models (LLM), linguistic analysis, hallucinations, philosophy, docta ignorantia

Abstract

Large Language Models (LLMs) like GPT-4, Claude, Gemini, and PaLM have demonstrated remarkable linguistic capabilities but suffer from a critical flaw: hallucinations – confident yet unfounded responses. These fabrications arise when models generate plausible-sounding information without a factual basis. Despite technical advances, the root issue remains unresolved: LLMs do not recognize when they lack knowledge. This paper explores this phenomenon through the linguistic and philosophical lens of docta ignorantia («learned ignorance»), a concept introduced by the 15th-century thinker Nicholas of Cusa. Cusa argued that true wisdom begins with recognizing the limits of one’s knowledge. Applying this idea to AI, the paper contends that LLM hallucinations stem from their lack of epistemic humility – they «do not know when they do not know.» Rather than acknowledging uncertainty, they fabricate linguistically correct answers, potentially spreading misinformation and undermining trust in AI systems. The paper outlines several key contributions. First, it examines docta ignorantia and its relevance to epistemology and modern AI. Second, it analyzes the linguistic and technical causes of hallucinations in LLMs, such as probabilistic text generation and the lack of grounded understanding. Third, it illustrates how existing mitigation strategies – like confidence calibration and retrieval augmentation – only simulate awareness of ignorance but do not resolve the underlying epistemological gap. Ultimately, this work calls for AI that mirrors a foundational principle of wisdom: understanding its own limits. By drawing on docta ignorantia, we can reimagine hallucination not just as a technical glitch but as a philosophical failure – one that can be addressed by rethinking how AI engages with the unknown.
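The abstract's claim that confidence calibration only simulates awareness of ignorance can be made concrete with a minimal sketch. Assuming token log-probabilities are available from a model's API (the values and function names below are illustrative, not the paper's method), a common heuristic is to abstain when a confidence proxy falls below a threshold – note that the model still has no representation of what it does not know; the gate acts only on a statistic of the generation process:

```python
import math

def sequence_confidence(token_logprobs):
    """Geometric-mean probability of a generated sequence:
    a common, crude proxy for model confidence."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def answer_or_abstain(answer, token_logprobs, threshold=0.5):
    """Simulated 'learned ignorance': return the answer only when
    the confidence proxy clears the threshold, otherwise abstain.
    The abstention is triggered by a number, not by any awareness
    of a knowledge gap."""
    if sequence_confidence(token_logprobs) < threshold:
        return "I don't know."
    return answer

# Hypothetical per-token log-probabilities for two generations.
confident = [-0.05, -0.10, -0.02]   # high-probability tokens
shaky = [-1.2, -2.3, -0.9]          # low-probability tokens

print(answer_or_abstain("Paris", confident))  # prints "Paris"
print(answer_or_abstain("Quito", shaky))      # prints "I don't know."
```

The epistemological gap the paper identifies is visible here: a well-calibrated threshold changes *when* the system says «I don't know», but the phrase remains a templated output, not a recognition of ignorance.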

References

Bengio, Y. (2023, May 7). AI scientists: Safe and useful AI? Yoshua Bengio. https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/

Cusa, N. (1440/1981). On learned ignorance (J. Hopkins, Trans.; 2nd ed.). Minneapolis, MN: Arthur J. Banning Press. https://dl1.cuni.cz/pluginfile.php/1019097/mod_resource/content/1/On%20Learned%20Ignorance%20by%20Nicholas%20of%20Cusa%2C%20translated%20by%20Jasper%20Hopkins.pdf

Detommaso, G., Bertran, M., Fogliato, R., & Roth, A. (2024). Multicalibration for confidence scoring in LLMs (arXiv:2404.04689). arXiv. https://arxiv.org/abs/2404.04689

Hopkins, J. (2020). Nicholas of Cusa. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). Stanford University. https://plato.stanford.edu/entries/cusanus/

Hopkins, J. (Trans.). (2001). Complete philosophical and theological treatises of Nicholas of Cusa. Banning Press.

Jones, N. (2025, January 21). AI hallucinations can’t be stopped – but these techniques can limit their damage. Nature, 637(8047), 778–780. https://doi.org/10.1038/d41586-025-00068-5

Klapper, S. (2025, March 24). Beyond Turing: The next test for AI. Discourse Magazine. https://www.discoursemagazine.com/p/beyond-turing-the-next-test-for-ai

Kumar, M., Mani, U., Tripathi, P., et al. (2023, August 10). Artificial hallucinations by Google Bard: Think before you leap. Cureus, 15(8), e43313. https://doi.org/10.7759/cureus.43313

Liu, H., Xue, W., Chen, Y., Chen, D., Zhao, X., Wang, K., Hou, L., Li, R., & Peng, W. (2024). A survey on hallucination in large vision-language models. arXiv. https://arxiv.org/abs/2402.00253

Liu, Q., Chen, X., Ding, Y., Xu, S., Wu, S., & Wang, L. (2025). Attention-guided self-reflection for zero-shot hallucination detection in large language models (arXiv:2501.09997). arXiv. https://doi.org/10.48550/arXiv.2501.09997

Ma, S., Wang, X., Lei, Y., Shi, C., Yin, M., & Ma, X. (2024). “Are you really sure?” Understanding the effects of human self-confidence calibration in AI-assisted decision making (arXiv:2403.09552). arXiv. https://doi.org/10.48550/arXiv.2403.09552

Miller, C. L. (2025). Cusanus, Nicolaus [Nicolas of Cusa]. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Summer 2025 ed.). Stanford University. https://plato.stanford.edu/archives/sum2025/entries/cusanus/

Rao, S., & Ramstad, A. (2023, December 21). Legal fictions and ChatGPT hallucinations: ‘Mata v. Avianca’ and generative AI in the courts. New York Law Journal. https://www.law.com/newyorklawjournal/2023/12/21/legal-fictions-and-chatgpt-hallucinations-mata-v-avianca-and-generative-ai-in-the-courts/

Sun, Y., Sheng, D., Zhou, Z., et al. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11, 1278. https://doi.org/10.1057/s41599-024-03811-x

Trotolo, F., Ahmed, A., Hayat, H., & Hayat, D. (2025, April 16). Retrieval-Augmented Generation (RAG): Bridging LLMs with external knowledge. Walturn. https://www.walturn.com/insights/retrieval-augmented-generation-(rag)-bridging-llms-with-external-knowledge

Wikipedia contributors. (2025, May 11). Hallucination (artificial intelligence). Wikipedia. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

Xu, Z., Jain, S., & Kankanhalli, M. (2024, January 22). Hallucination is inevitable: An innate limitation of large language models [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2401.11817

Published

2025-12-30

How to Cite

БІСКУБ, І., & ЛЕВАНДОВСЬКИЙ, В. (2025). THE PHILOSOPHY OF THE «LEARNED IGNORANCE»: LINGUISTIC AND CONCEPTUAL ASPECTS OF HALLUCINATIONS IN LARGE LANGUAGE MODELS. Current Issues of Foreign Philology, (23), 12–21. https://doi.org/10.32782/2410-0927-2025-23-2