- Large Language Models (LLMs) have attracted considerable interest due to their language understanding capabilities. However, despite this potential, the use of the embeddings they produce has hardly been explored in the field of mental health. In this paper, we evaluate embeddings extracted from SBERT, RoBERTa, and Llama-2
models to detect negative thoughts on three realistic datasets (PatternReframe, Armaud-Therapist, and Cognitive-Reframing) with different numbers of samples. Experimental results show that when a downstream classifier is trained on the embeddings extracted from the Llama-2 models, the weighted F1 (W-F1) and macro F1 (M-F1) scores improve on the datasets with a larger number of samples
(PatternReframe and Armaud-Therapist). In contrast, on the smallest dataset (Cognitive-Reframing), the best performance is achieved by SBERT. These results highlight the capabilities of LLMs and their learned internal representations, and encourage further exploration in other domains.
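The two-stage pipeline the abstract describes — extract fixed-size sentence embeddings from a frozen encoder, then train a lightweight classifier on top — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `embed` function is a deterministic hashed bag-of-words stand-in for a real SBERT/RoBERTa/Llama-2 encoder, the four example texts and their labels are invented, and the classifier is a bare logistic regression trained by gradient descent.

```python
# Minimal sketch of the frozen-encoder + downstream-classifier pipeline.
# The encoder below is a hashed bag-of-words STAND-IN for an LLM embedding
# model (hypothetical); a real run would use SBERT / RoBERTa / Llama-2.
import numpy as np

DIM = 32  # embedding size of the stand-in encoder (illustrative choice)

def embed(text: str) -> np.ndarray:
    """Deterministic stand-in for an LLM sentence embedding:
    hash each token into a fixed-size vector and L2-normalise."""
    v = np.zeros(DIM)
    for tok in text.lower().split():
        v[sum(ord(c) for c in tok) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def train_logreg(X, y, lr=0.5, epochs=300):
    """Minimal logistic regression trained by gradient descent
    on top of the frozen embeddings."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        g = p - y                               # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Toy binary labels: 1 = negative thought, 0 = neutral (invented examples).
texts = ["i always fail at everything", "nobody will ever like me",
         "i enjoyed my walk today", "the meeting went fine"]
labels = np.array([1.0, 1.0, 0.0, 0.0])

X = np.stack([embed(t) for t in texts])        # frozen-encoder features
w, b = train_logreg(X, labels)                 # downstream classifier
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())  # → [1, 1, 0, 0]
```

Because the encoder is frozen, only the small classifier is trained, which is what makes the comparison across embedding sources (and across dataset sizes) cheap to run.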