The Language-Based Assessment Model (L-BAM) Library: Open Model Sharing for Independent Validation and Broader Applications

Abstract

Language-based assessments (LBAs), quantitative estimates of scientific constructs based on language, have advanced methods in the psychological and social sciences for over a decade. LBAs that analyse individuals' prompted descriptions with large language models to score their psychological states and traits have shown strong convergence with corresponding rating scales (r > .80) and have often surpassed rating scales in predicting theoretically relevant behaviours (external criteria). Despite their high validity across numerous psychological outcomes and contexts, the broader adoption of LBA models has been limited. Even when made available alongside research publications, these models often remain inaccessible due to technical complexities, inconsistent documentation, and the absence of a standardized repository. This tutorial introduces a framework that enables social and psychological scientists to share models accessibly with others – the Language-Based Assessment Model (L-BAM) Library – as well as a toolkit for easily applying L-BAMs via the text package in R. L-BAM covers a wide range of models for assessing mental health disorders (e.g., depression, anxiety), well-being (e.g., satisfaction with life, harmony in life), implicit motives (needs for power, affiliation, and achievement), and more. The L-BAM Library aims to increase the availability and resource efficiency of language-based assessments of psychological constructs while encouraging replication, independent validation, and the broad application of pre-existing language-based assessment models.

Publication
Advances in Methods and Practices in Psychological Science