Applying transfer-learning on embeddings of language models for negative thoughts classification

Abstract: Large Language Models (LLMs) have attracted interest due to their language understanding capabilities. However, the use of their d-vectors has hardly been explored in the field of mental health, despite the potential of LLMs. In this paper, we evaluate embeddings extracted from SBERT, RoBERTa, and Llama-2 models to detect negative thoughts on three realistic datasets (PatternReframe, Armaud-Therapist, and Cognitive-Reframing) with different numbers of samples. Experimental results show that training a second classifier on the embeddings extracted from the Llama-2 models improves the weighted (W-F1) and macro (M-F1) F1 scores on the datasets with more samples (PatternReframe and Armaud-Therapist). In contrast, on the smallest dataset, the best performance is achieved by SBERT. These results highlight the capabilities of LLMs and their learned internal representations, and encourage exploration in other domains.
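The abstract describes a two-stage transfer-learning pipeline: extract embeddings from a frozen pretrained language model, then train a separate classifier on top of them. The following is a minimal sketch of that idea using the public sentence-transformers and scikit-learn APIs; the checkpoint name, the logistic-regression classifier, and the toy data are illustrative assumptions, not the setup reported in the paper.

```python
# Minimal transfer-learning sketch: frozen sentence embeddings plus a light
# downstream classifier. Checkpoint, classifier, and data are illustrative
# assumptions; the paper evaluates SBERT, RoBERTa, and Llama-2 embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy stand-in for a negative-thought detection dataset (1 = negative thought).
texts = [
    "I always ruin everything I try.",
    "Nobody would notice if I disappeared.",
    "The meeting went better than I expected.",
    "I finished the report on time today.",
] * 25
labels = [1, 1, 0, 0] * 25

# Stage 1: extract fixed embeddings from a frozen pretrained encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
embeddings = encoder.encode(texts)

# Stage 2: train a second, lightweight classifier on those embeddings.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, stratify=labels, random_state=42
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report the two scores the abstract compares: weighted and macro F1.
preds = clf.predict(X_test)
print("W-F1:", f1_score(y_test, preds, average="weighted"))
print("M-F1:", f1_score(y_test, preds, average="macro"))
```

For a decoder-only model such as Llama-2, the analogous first stage would pool hidden states (e.g., mean-pooling the final layer) rather than call a dedicated sentence-embedding API; the record does not specify the pooling used, so the sketch sticks to the SBERT case.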

Metadata
Author: Cristina Luna-Jiménez, Jonas Jostschulte, Wolfgang Minker, David Griol, Zoraida Callejas
Frontdoor URL: https://opus.bibliothek.uni-augsburg.de/opus4/122517
Parent Title (English): IberSPEECH 2024, Aveiro, Portugal, 11-13 November 2024
Publisher: ISCA
Place of publication: Baixas
Editor: António Teixeira, Carlos Martinez-Hinarejos, Eduardo Lleida, Dayana Ribas
Type: Conference Proceeding
Language: English
Year of first Publication: 2024
Release Date: 2025/06/02
First Page: 46
Last Page: 50
DOI: https://doi.org/10.21437/iberspeech.2024-10
Institutes: Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Menschzentrierte Künstliche Intelligenz