A Case Study of Cross-Lingual Zero-Shot Generalization for Classical Languages in LLMs

Abstract

Large Language Models (LLMs) have demonstrated remarkable generalization capabilities across diverse tasks and languages. In this study, we focus on natural language understanding in three classical languages—Sanskrit, Ancient Greek, and Latin—to investigate the factors affecting cross-lingual zero-shot generalization. First, we explore named entity recognition and machine translation into English. While LLMs perform on par with or better than fine-tuned baselines on out-of-domain data, smaller models often struggle, especially with niche or abstract entity types. In addition, we concentrate on Sanskrit by presenting a factoid question-answering (QA) dataset and show that incorporating context via a retrieval-augmented generation approach significantly boosts performance. In contrast, we observe pronounced performance drops for smaller LLMs across these QA tasks. These results suggest that model scale is an important factor influencing cross-lingual generalization. Assuming that the models used, such as GPT-4o and Llama-3.1, are not instruction fine-tuned on classical languages, our findings provide insights into how LLMs may generalize on these languages and their consequent utility in classical studies.

Publication
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, July 2025
Hrishikesh Terdalkar
Researcher

My research lies at the intersection of Computational Linguistics, Natural Language Processing, and Graph Databases, with a particular emphasis on low-resource languages such as Sanskrit and other Indian languages. I am committed to pioneering innovations that have real-world impact. My interests also include Artificial Intelligence, Databases, Human-Computer Interaction, Information Retrieval, and Data Mining.