April 7, 2022: Enabling Personalized Interventions (EPI)

Consortium Meeting and contributions at ICT.Open.

Consortium meeting:

When: Thursday, April 7th, 2022
Where: ICT.OPEN at the RAI, Amsterdam
Consortium and dissemination meeting
11:30 Start Quarterly Consortium Meeting

Eline van Dulm
Welcome, overview of EPI

11:40 Presentations PhD students

Rosanne Turner
RQ1-2: Real-time evidence collection in data streams


Saba Amiri
RQ4: Private Federated Machine Learning in the EPI Project

Milen Girma Kebede
RQ5: Automating regulatory constraints and data governance in healthcare


Jamila Alsayed Kassem
RQ6: The EPI Framework: A dynamic infrastructure to support healthcare use cases.

Tim Muller
Brane status update


Corinne Allaart
RQ3: Vertically partitioned machine learning for prediction of cerebrovascular accident (CVA) rehabilitation.
12:40 EPI Proof Of Concept

Marc van Meel
EPI PoC – collaboration UMCU & St. Antonius

12:55 Any other business

13:00 Closure

Posters presented at ICT.Open

EPI Framework: Approach for traffic redirection through containerised network functions

Jamila Alsayed Kassem

On the road towards personalised medicine, secure data sharing is an essential prerequisite for healthcare use cases (e.g. training and sharing machine learning models, streaming data from wearables). Today, however, working in silos still dominates health data usage. A significant challenge is to set up a collaborative data-sharing environment that supports the requested application while ensuring uncompromised security across communicating nodes. We need a framework that adapts the underlying infrastructure to norms and policy agreements, the requested application workflow, and network and security policies. The framework should process these requirements and map them onto setup actions. At the packet level, the framework should be able to enforce the configured route by intercepting and redirecting traffic.

extended abstract here

Impact of non-IID data on the performance and fairness of differentially private federated learning.

Saba Amiri

Federated learning enables distributed data holders to train a shared machine learning model on their collective data. It provides some measure of privacy by removing the need for participants to share their private data, but it has nevertheless been shown in the literature to be vulnerable to adversarial attacks. Differential privacy provides rigorous guarantees and sufficient protection against several kinds of adversarial attacks, and has been widely employed in recent years for privacy-preserving machine learning. One common trait of many recent methods for federated and federated differentially private learning is the assumption of IID data, which in real-world scenarios almost certainly does not hold. In this work, we perform a comprehensive empirical investigation of the effect of non-IID data on federated, differentially private deep learning. We show that non-IID data negatively impact both the performance and the fairness of the trained model, and we discuss the trade-off between privacy, utility, and fairness. Our results highlight the limits of common federated learning algorithms in a differentially private setting when it comes to providing robust, reliable results across underrepresented groups.
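The setting studied in this poster can be illustrated with a small sketch. The toy example below is not the authors' code; all names, constants, and the linear model are illustrative. It simulates federated averaging with a DP-style perturbation on deliberately non-IID clients: each client holds data drawn from a different feature range, fits a shared one-parameter linear model locally, and the server averages clipped, noised updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: each client fits a 1-D linear model y = w * x, true w = 3.
# Non-IID: clients sample x from disjoint ranges (heterogeneous features).
TRUE_W = 3.0
clients = []
for shift in (0.0, 5.0, 10.0):
    x = rng.uniform(shift, shift + 1.0, size=50)
    y = TRUE_W * x + rng.normal(0, 0.1, size=50)
    clients.append((x, y))

def local_update(w, x, y, lr=0.001, epochs=5):
    """Plain gradient descent on one client's private data."""
    for _ in range(epochs):
        grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean squared error
        w -= lr * grad
    return w

CLIP = 1.0        # clipping bound for each client's model delta
NOISE_STD = 0.05  # Gaussian noise scale (the privacy/utility knob)

w_global = 0.0
for _ in range(100):
    deltas = []
    for x, y in clients:
        delta = local_update(w_global, x, y) - w_global
        # Clip the update, then add Gaussian noise (DP-style perturbation).
        delta = delta * min(1.0, CLIP / (abs(delta) + 1e-12))
        deltas.append(delta + rng.normal(0, NOISE_STD))
    w_global += np.mean(deltas)  # server-side federated averaging
```

Raising `NOISE_STD` (more privacy) or skewing the client ranges further (more heterogeneity) degrades the final `w_global`, which is the performance/fairness tension the poster investigates at scale with deep models.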