Federated learning for affective computing tasks
2022 10th International Conference on Affective Computing and …, 2022 (ieeexplore.ieee.org)
Federated learning mitigates the need to store user data in a central datastore for machine learning tasks, and is particularly beneficial when working with sensitive user data or tasks. Although it has been used successfully for applications such as improving keyboard query suggestions, it has not been studied systematically for modeling affective computing tasks, which are often laden with subjective labels and high variability across individuals/raters, or even within the same participant. In this paper, we study the federated averaging algorithm FedAvg to model self-reported emotional experience and perception labels on a variety of speech, video, and text datasets. We identify two learning paradigms that commonly arise in affective computing tasks: modeling of self-reports (user-as-client), and modeling of perceptual judgments, such as labeling the sentiment of online comments (rater-as-client). In the user-as-client setting, we show that FedAvg generally performs on par with a non-federated model in classifying self-reports. In the rater-as-client setting, FedAvg consistently performs worse than its non-federated counterpart. We find that the performance of FedAvg degrades for classes where inter-rater agreement is moderate to low. To address this finding, we propose FedRater, an algorithm that learns client-specific label distributions in federated settings. Our experimental results show that FedRater not only improves overall classification performance compared to FedAvg but also provides insights for estimating proxies of inter-rater agreement in distributed settings.
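The aggregation step at the heart of FedAvg can be illustrated with a minimal sketch: the server averages each client's locally trained model parameters, weighted by that client's local dataset size. The client weights and dataset sizes below are hypothetical illustrations, not values from the paper.

```python
def fedavg_aggregate(client_weights, client_sizes):
    """One FedAvg server round: average client parameter vectors,
    weighted by the number of local training examples per client."""
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    return [
        sum(w[p] * n for w, n in zip(client_weights, client_sizes)) / total
        for p in range(num_params)
    ]

# Two hypothetical clients (e.g. two users or two raters), each with a
# 2-parameter model; the second client has 3x as much local data.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]
print(fedavg_aggregate(clients, sizes))  # [2.5, 3.5]
```

In the user-as-client setting each participant's device would play the role of one client here; in the rater-as-client setting each annotator's labeled examples form one client's local dataset.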