XAI-PAC: Towards Explainable and Private Affective Computing

People

Gjoreski M. (Responsible)

Abstract

Wearable devices, combined with Artificial Intelligence (AI) methods, can bring significant and sustainable improvements to our lives – from better patient monitoring and lower healthcare costs to enhanced sports performance and a higher quality of life. Affective computing combines wearables and AI to provide methods for tracking factors related to affective states (e.g., emotions, arousal, valence, and stress). Such methods are the first step toward remote mental-health management tools, which are urgently needed in Switzerland and worldwide, particularly in the wake of COVID-19. However, with AI systems influencing human lives so intimately, end-users must be able to understand those systems. Unfortunately, today’s groundbreaking AI methods are black boxes (i.e., neither the decision model nor the decision process is understandable) and require vast amounts of data – data that are particularly sensitive in the case of affective computing. Explainable and privacy-aware affective computing is therefore vital: what can be more private than our emotions, and how can we be sure that the AI “got it right”? This project has three main objectives that pave the way toward eXplAInable and Private Affective Computing (XAI-PAC):

(Objective 1): To develop novel eXplainable AI (XAI) tools for affective-computing methods based on wearable sensor data. Existing XAI solutions are static (i.e., they do not allow user interaction) and are applicable mainly to image- and text-based models that use a single data modality. Furthermore, the latest AI systems usually rely on several concurrent machine learning (ML) components, yet existing XAI solutions typically explain only one model within such a system. XAI-PAC will therefore develop multimodal and interactive XAI tools that explain individual ML models as well as the interaction between several related ML components.

(Objective 2): To develop privacy-aware ML methods for wearable sensor data. Federated learning is a relatively new field that promises accurate models while avoiding transmitting potentially sensitive data to a central server. However, federated learning still faces many challenges, such as federated model selection, evaluation, and hyperparameter optimization. Wearable computing magnifies these challenges, adding noise, limited data, and domain shift to the mix. To address these challenges, XAI-PAC will explore privacy-aware personalization and domain-adaptation techniques for wearable sensor data.
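
For illustration, a minimal sketch of the federated-averaging idea underlying this objective (the function name, weighting scheme, and toy data are illustrative assumptions, not the project's actual implementation):

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Aggregate per-client model weights without sharing raw sensor data.

        client_weights: list of weight vectors, one per wearable device
        client_sizes: number of local training samples per device
        """
        total = sum(client_sizes)
        # Weighted average: devices with more local data contribute more.
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Example: three devices train locally and send only their model weights.
    local_models = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
    samples_per_device = [120, 80, 200]
    global_model = federated_average(local_models, samples_per_device)

Only the locally trained weights leave each device; the challenges named above (model selection, evaluation, hyperparameter optimization under noise and domain shift) arise around this aggregation step.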

(Objective 3): To fuse privacy awareness and explainability into a single approach. The first two objectives are crucial for achieving private and explainable affective computing. They are also independent, and realizing each advances its own field. The third objective leaps beyond their independent realization by combining them, which is not trivial because most XAI tools do not operate under privacy constraints. For instance, differential privacy and federated learning add noise to the ML models to obscure user-sensitive data, and applying XAI tools to such models could lead to noisy, invalid explanations that carry no explanatory information.
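
To make this tension concrete, a toy sketch (the linear model, noise scale, and weight-times-feature attribution rule are assumptions made purely for illustration) of how privacy noise added to model parameters can distort an explanation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear affect model: prediction = w . x; the weights double as a
    # simple feature-attribution explanation (weight * feature value).
    w = np.array([0.8, -0.1, 0.05])        # e.g., heart rate, EDA, temperature
    x = np.array([1.2, 0.5, 2.0])

    def attributions(weights, features):
        return weights * features

    # Privacy noise (standing in for a differential-privacy perturbation).
    noise_scale = 0.5
    w_private = w + rng.normal(0.0, noise_scale, size=w.shape)

    print("clean explanation:  ", attributions(w, x))
    print("noised explanation: ", attributions(w_private, x))
    # With enough noise, the attribution ranking can flip, which is exactly
    # the conflict between privacy mechanisms and explanation validity.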

We will first develop models for monitoring affect and affect-related factors using data from wearable sensors and questionnaires collected in a longitudinal real-life study. To speed up development, we will utilize ML tools developed by the PI in his award-winning Ph.D. thesis, including processing pipelines for building accurate models based on deep learning, classical ML, and meta-learners. These tools will then be augmented with federated-learning and domain-adaptation techniques, ensuring privacy awareness and compatibility with the latest ML. We will also develop counterfactual explainers that operate under privacy constraints; the explainers will handle multimodal data and dynamic user interaction and will explain the overall sensing system. Finally, we will evaluate the XAI tools in end-user studies to ensure the quality of the produced explanations.
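
As a generic illustration of the counterfactual idea (the greedy search, the score_fn interface, and the toy logistic scorer are assumptions for this sketch; the project's explainers will additionally handle multimodal data, user interaction, and privacy constraints):

    import numpy as np

    def counterfactual(score_fn, x, step=0.05, max_iter=200, threshold=0.5):
        """Greedy search for a small change to x that flips the predicted affect label.

        score_fn: returns the probability of the positive class (e.g., "stressed")
        x: original feature vector derived from wearable sensors
        """
        original_label = score_fn(x) >= threshold
        x_cf = x.copy()
        for _ in range(max_iter):
            if (score_fn(x_cf) >= threshold) != original_label:
                return x_cf  # e.g., "had these features been lower, stress would not be predicted"
            # Nudge one feature up or down, keeping the change that moves the
            # score most toward flipping the label.
            candidates = [x_cf.copy() for _ in range(2 * len(x_cf))]
            for i in range(len(x_cf)):
                candidates[2 * i][i] += step
                candidates[2 * i + 1][i] -= step
            key = (lambda c: score_fn(c)) if original_label else (lambda c: -score_fn(c))
            x_cf = min(candidates, key=key)
        return None

    # Toy usage with a hand-made logistic scorer (illustrative only).
    w, b = np.array([1.5, 0.7]), -1.0
    score = lambda f: 1.0 / (1.0 + np.exp(-(f @ w + b)))
    print(counterfactual(score, np.array([0.9, 0.4])))

The returned vector is the counterfactual input: the smallest greedy change to the original features that changes the model's decision, which is the kind of explanation end-users will evaluate in the studies described above.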

The project foresees collaborations with world-leading experts: Prof. Picard from MIT (author of the first book on Affective Computing); Prof. Lane from the University of Cambridge and Samsung AI (an expert in wearables and federated learning); Prof. Langheinrich from USI, Lugano (an expert in privacy); and Prof. Miller from the University of Melbourne (an expert in XAI).

XAI-PAC will lay the foundation for an XAI lab at USI specializing in privacy-aware and explainable AI. The broader impact of XAI-PAC spans health and safety-critical applications – ensuring their understandability and transparency – and foundational research that provides innovative techniques for building accurate, robust, data-efficient, and privacy-aware personalized models.

Additional information

Start date: 01.09.2023
End date: 31.08.2027
Duration: 48 months
Funding sources: SNSF
Status: Active
Category: Swiss National Science Foundation / Ambizione