Description
Recent findings indicate that current large language models (LLMs) struggle to generate clear-cut, well-motivated definitions consistently. This shortcoming stems from their reliance on opaque data sources and from their inherently unstable, non-deterministic outputs. In response, this research aims to develop an LLM-based methodology for producing adjectival microstructures in monolingual dictionaries that is both more consistent and better aligned with lexicographic standards. Building on the hypothesis that prompts enriched with contextual information can enhance definition quality, the study employs a graph-based, interpretable, and unsupervised method that starts from static adjectival embeddings. This approach has previously been shown to formalize traditional lexical semantic relations, to detect adjectival senses from corpus data, and to identify the most salient nominal contexts for each sense. The ultimate goal is to integrate these results into practical lexicographic workflows and to assess how LLMs, when properly guided, can support dictionary compilation.
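
The abstract does not spell out the graph construction, so the following is only a minimal sketch of one common unsupervised recipe consistent with the description: co-occurring nouns become nodes, edges link nouns whose static embeddings are similar, and communities in the resulting graph approximate senses of the adjective. The names `embed` (a static-embedding lookup) and `noun_contexts` (a corpus-derived co-occurrence list) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of graph-based, unsupervised adjectival sense
# detection from static embeddings; not necessarily the study's method.

import numpy as np
import networkx as nx

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def induce_senses(adjective: str,
                  noun_contexts: list[str],
                  embed: dict[str, np.ndarray],
                  threshold: float = 0.4) -> list[list[str]]:
    """Cluster the nouns modified by `adjective` into sense-like groups.

    Nodes are co-occurring nouns; edges link noun pairs whose static
    embeddings exceed a similarity threshold. Communities approximate
    senses, and high-degree nouns within a community serve as its most
    salient nominal contexts.
    """
    nouns = [n for n in noun_contexts if n in embed]
    g = nx.Graph()
    g.add_nodes_from(nouns)
    for i, a in enumerate(nouns):
        for b in nouns[i + 1:]:
            sim = cosine(embed[a], embed[b])
            if sim >= threshold:
                g.add_edge(a, b, weight=sim)
    communities = nx.community.greedy_modularity_communities(g, weight="weight")
    # Rank each community's nouns by weighted degree: the top items are
    # the salient contexts later passed to the LLM prompt.
    return [sorted(c, key=lambda n: g.degree(n, weight="weight"), reverse=True)
            for c in communities]
```

Because every step (graph edges, communities, degree ranking) is inspectable, this kind of pipeline keeps the interpretability the abstract emphasizes, in contrast to opaque end-to-end sense induction.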
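The prompt-enrichment hypothesis can likewise be made concrete. The sketch below shows one way the salient nominal contexts recovered above might be injected into an instruction so the LLM defines a specific sense rather than the adjective in general; the template wording is an assumption, not the study's actual prompt.

```python
# Hypothetical context-enriched prompt for one adjectival sense.
# The template text is illustrative only.

def build_definition_prompt(adjective: str,
                            salient_nouns: list[str],
                            examples: list[str]) -> str:
    """Compose a lexicographically constrained prompt for one sense."""
    contexts = ", ".join(salient_nouns[:5])
    usage = "\n".join(f"- {s}" for s in examples[:3])
    return (
        f"You are a lexicographer writing a monolingual dictionary entry.\n"
        f"Define the adjective '{adjective}' in the sense it has when it "
        f"modifies nouns such as: {contexts}.\n"
        f"Corpus examples:\n{usage}\n"
        f"Write one concise definition following standard lexicographic "
        f"conventions (no circularity; genus-differentia style where possible)."
    )
```

Pinning the prompt to sense-specific evidence addresses the consistency problem directly: the model is shown the same contextual grounding on every run instead of being left to resolve the adjective's polysemy on its own.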