Posters presented at ICT.Open
EPI Framework: Approach for traffic redirection through containerised network functions
Jamila Alsayed Kassem
On the road towards personalised medicine, secure
data-sharing is an essential prerequisite for enabling healthcare
use cases (e.g. training and sharing machine learning models,
data-streaming from wearables, etc.). Today, however, working in
silos still dominates health data usage. A significant challenge
is to set up a collaborative data-sharing environment that supports
the requested application while also ensuring uncompromised
security across communicating nodes. We need a framework that can
adapt the underlying infrastructure, taking into account norms and
policy agreements, the requested application workflow, and network
and security policies. The framework should process and map those
requirements into setup actions. At the packet level, the framework
should be able to enforce the set-up route by intercepting and
redirecting traffic.
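The abstract does not specify an implementation, but as a minimal sketch of packet-level interception and redirection, the hypothetical helper below installs an iptables DNAT rule that steers traffic addressed to a data-sharing peer through a containerised network function first. The function name, addresses, and port are illustrative assumptions, not part of the EPI Framework itself.

    # Minimal sketch, not the EPI Framework's actual code: redirect traffic
    # bound for a data-sharing peer through a containerised network function
    # (e.g. a firewall or inspection container) via an iptables DNAT rule.
    import subprocess

    def redirect_through_nf(peer_ip: str, nf_ip: str, port: int) -> None:
        # Hypothetical helper: packets destined for peer_ip:port are
        # rewritten to first reach the network-function container at nf_ip.
        rule = [
            "iptables", "-t", "nat", "-A", "PREROUTING",
            "-d", peer_ip, "-p", "tcp", "--dport", str(port),
            "-j", "DNAT", "--to-destination", f"{nf_ip}:{port}",
        ]
        subprocess.run(rule, check=True)

The network function would then forward (or drop) packets towards the real peer according to the agreed security policy.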
Impact of non-IID data on the performance and fairness of differentially private federated learning
Saba Amiri
Federated learning enables distributed data holders to train a shared
machine learning model on their collective data. It provides some
measure of privacy by removing the need for participants to share
their private data, but it has nevertheless been shown in the
literature to be vulnerable to adversarial attacks. Differential
privacy has been shown to provide rigorous guarantees and sufficient
protection against different kinds of adversarial attacks, and it has
been widely employed in recent years for privacy-preserving machine
learning. One common trait of many recent methods for federated
learning and federated differentially private learning is the
assumption of IID data, which most certainly does not hold true in
real-world scenarios. In this work, we perform a comprehensive
empirical investigation of the effect of non-IID data on federated,
differentially private deep learning. We show that non-IID data has a
negative impact on both the performance and the fairness of the
trained model, and we discuss the trade-off between privacy, utility,
and fairness. Our results highlight the limits of common federated
learning algorithms in a differentially private setting in providing
robust, reliable results across underrepresented groups.
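As a rough illustration of the kind of mechanism under study (a generic DP-FedAvg-style round, not necessarily the authors' exact setup), one round of differentially private federated averaging clips each client update to bound sensitivity and adds Gaussian noise to the aggregate. All names and parameter values below are illustrative.

    # Sketch of one differentially private federated-averaging round,
    # assuming clipped client updates and Gaussian noise on the aggregate.
    import numpy as np

    def dp_fedavg_round(global_w, client_updates, clip_norm=1.0,
                        noise_multiplier=1.0, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        # Clip each client's update so its L2 norm is at most clip_norm,
        # bounding the sensitivity of the average to any single client.
        clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
                   for u in client_updates]
        avg = np.mean(clipped, axis=0)
        # Gaussian noise calibrated to the clipping bound and cohort size.
        noise = rng.normal(0.0, noise_multiplier * clip_norm / len(client_updates),
                           size=avg.shape)
        return global_w + avg + noise

One plausible intuition, consistent with the abstract's findings: under non-IID partitions, client updates diverge more, so clipping and noise remove proportionally more signal from underrepresented clients.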