Abstract: The accurate prediction of lexical relations between words is a challenging task in Natural Language Processing (NLP). The most recent advances in this direction come with the use of pre-trained language models (PTLMs). A PTLM typically needs “well-formed” verbalized text to interact with it, either to fine-tune it or to exploit it. However, there are indications that commonly used PTLMs already encode enough linguistic knowledge to allow the use of minimal (or no) textual context for some linguistically motivated tasks, thus notably reducing human effort and the need for data pre-processing, and favoring techniques that are language neutral since they do not rely on syntactic structures. In this work, we explore this idea for the tasks of lexical relation classification (LRC) and graded Lexical Entailment (LE). After fine-tuning PTLMs for LRC with different verbalizations, our evaluation results show that very simple prompts are competitive for LRC and significantly outperform the graded LE state of the art (SoTA). To gain better insight into this phenomenon, we perform a number of quantitative statistical analyses on the results, as well as a qualitative visual exploration based on embedding projections.
Language: English
DOI: 10.18653/v1/2023.acl-long.308
Year: 2023
Published in: Proceedings of the conference - Association for Computational Linguistics. Meeting 1 (2023), 5607--5625
ISSN: 0736-587X
Funding: info:eu-repo/grantAgreement/ES/AEI/PID2020-113903RB-I00
Funding: info:eu-repo/grantAgreement/EC/HORIZON EUROPE/101057332/EU/Design-based Data-Driven Decision-support Tools: Producing Improved Cancer Outcomes Through User-Centred Research/4D PICTURE
Funding: info:eu-repo/grantAgreement/ES/MINECO/RYC2019-028112-I
Type and form: Article (Published version)
Area (Department): Área Lenguajes y Sistemas Inf. (Dpto. Informát.Ingenie.Sistms.)
Exported from SIDERAL (2024-02-07-14:40:24)
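
Illustrative note: as a minimal sketch of the kind of low-context verbalization the abstract describes, the snippet below turns a word pair into a very simple prompt and scores it with a pre-trained sequence classification model for lexical relation classification. The model name, prompt template, and label set are illustrative assumptions and not the paper's exact setup; the classification head is only meaningful after fine-tuning on an LRC dataset.

# Minimal sketch (assumed setup, not the paper's code): verbalize a word pair
# with an almost context-free prompt and classify its lexical relation.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "bert-base-uncased"  # assumed PTLM; the paper may use others
LABELS = ["hypernymy", "meronymy", "synonymy", "antonymy", "random"]  # example label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def verbalize(word_a: str, word_b: str) -> str:
    # "Very simple prompt": just the two words, with minimal textual context.
    return f"{word_a} {word_b}"

def classify(word_a: str, word_b: str) -> str:
    # Encode the verbalized pair and take the highest-scoring relation label.
    inputs = tokenizer(verbalize(word_a, word_b), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("dog", "animal"))  # output is meaningful only after fine-tuning for LRC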