A novel framework designed to use a PLM's knowledge of structure and semantic modelling in multi-party conversations to perform depression detection.


What's New?

July 2024: ProDepDet wins the Honorable Mention Award at the 2024 IEEE CIS Student Grant Competition in Computational Intelligence in Biomedicine and Healthcare for the proposal entitled Vishadha: Early Smart Depression Detector.

ProDepDet

ProDepDet is a framework designed to use the knowledge of a pre-trained language model (PLM) in structure and semantic modelling of multi-party conversations to perform depression detection, an unseen out-of-domain task. To our knowledge, this study is the first attempt to adapt the acquired knowledge of a PLM to out-of-domain task modelling using prompt tuning (PT)-based cross-task transferability.

The Approach

[Figure: Prompt transferability across different tasks]

The main contributions are:

  • A novel method is proposed for enhancing the out-of-domain task transferability of PT.
  • A soft verbalizer is introduced alongside a soft PT template, the first time this combination is used for PT transfer (see the sketch after this list).
  • Multiple downstream tasks, including depressed utterance classification (DUC) and depressed speaker identification (DSI), are used to evaluate the generalization and interpretability of the proposed methods.
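
This README does not include the implementation inline, so below is a minimal sketch of the soft PT template and soft verbalizer idea, assuming a frozen BERT-style PLM from HuggingFace. All class and variable names (`SoftPromptVerbalizer`, `prompt_length`, `label_emb`) are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class SoftPromptVerbalizer(nn.Module):
    """Illustrative sketch, not the authors' implementation:
    a trainable soft prompt is prepended to the input embeddings of a
    frozen PLM, and a soft verbalizer scores the [MASK] hidden state
    against trainable label embeddings instead of fixed label words."""

    def __init__(self, plm, prompt_length=25, hidden_size=768, num_labels=2):
        super().__init__()
        self.plm = plm
        self.plm.requires_grad_(False)  # PLM parameters stay frozen
        # Soft PT template: l trainable prompt vectors (l = 25/50/75 in the paper)
        self.soft_prompt = nn.Parameter(0.02 * torch.randn(prompt_length, hidden_size))
        # Soft verbalizer: one trainable embedding per class label
        self.label_emb = nn.Parameter(0.02 * torch.randn(num_labels, hidden_size))

    def forward(self, input_embeds, mask_positions):
        # input_embeds: (batch, seq_len, hidden); mask_positions: (batch,)
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        hidden = self.plm(
            inputs_embeds=torch.cat([prompt, input_embeds], dim=1)
        ).last_hidden_state
        # Hidden state at the [MASK] token, shifted by the prompt length
        mask_hidden = hidden[torch.arange(batch), mask_positions + prompt.size(1)]
        # Class logits = similarity between mask state and label embeddings
        return mask_hidden @ self.label_emb.T
```

Only `soft_prompt` and `label_emb` receive gradients, which is what allows the learned prompt to be transferred across tasks while the PLM itself stays intact.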

System Design

[Figure: IJCNN 2024 full paper system design]

Settings

Python 3.8 and PyTorch 2.0 were used as the main programming language and machine learning framework, respectively. We separated MPC data into three categories based on session length (Len-5, Len-10, and Len-15) and used three different prompt lengths (l = 25, 50, and 75). The hyper-parameters were GELU activations and the Adam optimizer with a learning rate of 0.0005 and a warmup proportion of 0.1, with the model parameters θ1 and θ2 both kept frozen.
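
As a concrete illustration of these settings, here is a hedged sketch of the optimizer and warmup schedule using standard PyTorch and HuggingFace utilities; the checkpoint name, epoch count, and step count are assumed placeholders, not values from the paper.

```python
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

plm = AutoModel.from_pretrained("bert-base-uncased")  # illustrative frozen PLM
model = SoftPromptVerbalizer(plm)                     # from the sketch above
num_epochs, steps_per_epoch = 10, 100                 # illustrative values

# Only the soft prompt and verbalizer are trainable; the PLM is frozen.
trainable = [p for p in model.parameters() if p.requires_grad]

optimizer = torch.optim.Adam(trainable, lr=5e-4)      # learning rate 0.0005
total_steps = num_epochs * steps_per_epoch
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),          # warmup proportion 0.1
    num_training_steps=total_steps,
)
```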

Baseline Models

We adopted several pre-trained models and large language models as frozen source baselines. To evaluate DUC, we used WSW, BERT, RoBERTa, SA-BERT, MPC-BERT, and DisorBERT as pre-trained models. For the evaluation of DSI, WSW, BERT, RoBERTa, ELECTRA, SA-BERT, MDFN, and MPC-BERT were used. GPT-3, ChatGPT, and GPT-4 were adopted as large language models to evaluate both DUC and DSI.
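
The exact checkpoints are not listed in this README. As a sketch, the publicly available baselines such as BERT and RoBERTa can be loaded and frozen via HuggingFace transformers; the checkpoint names below are the standard public ones, assumed rather than confirmed by the paper.

```python
from transformers import AutoModel

# Standard public checkpoints for two of the baselines; specialised
# models such as MPC-BERT or DisorBERT would be loaded from their
# own released weights.
for name in ["bert-base-uncased", "roberta-base"]:
    plm = AutoModel.from_pretrained(name)
    plm.requires_grad_(False)  # source model stays frozen
    plm.eval()
```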

Datasets

Experimental Results

Evaluation results of DUC in terms of R10@1, which measures whether the correctly classified depressed utterance is ranked first among 10 candidates. Ablation results are shown in the last two rows.

[Table: DUC evaluation results]
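
For clarity, here is a minimal sketch of how R10@1 can be computed; this is illustrative code, not the evaluation script used in the paper.

```python
import numpy as np

def r10_at_1(scores, true_idx):
    """1 if the correct depressed utterance is ranked first
    among its 10 candidates, else 0."""
    return int(np.argmax(scores) == true_idx)

# Illustrative usage: 3 examples, each with 10 candidate scores
batch_scores = np.random.rand(3, 10)
batch_truth = [0, 4, 9]
score = np.mean([r10_at_1(s, t) for s, t in zip(batch_scores, batch_truth)])
```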

Evaluation results of DSI in terms of F1 score. Ablation results are shown in the last two rows.

[Table: DSI evaluation results]
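
Correspondingly, DSI is scored with the standard F1; for example, with scikit-learn (labels below are illustrative, not from the paper):

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0]  # illustrative gold speaker labels
y_pred = [1, 0, 0, 1, 0]  # illustrative predictions
print(f1_score(y_true, y_pred))
```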
