Talk

Is your model private?

Friday, May 24

14:45 - 15:15
Room: Tagliatelle
Language: English
Audience level: Advanced
Elevator pitch

Join me in uncovering the privacy vulnerabilities of Machine Learning models and discussing attacks like Membership Inference. Learn how techniques like Differential Privacy and tools such as Opacus can mitigate the privacy risks of your models.

Abstract

As the popularity of Machine Learning models continues to soar, concerns about the risks associated with black-box models have become more prominent. While much attention has been given to unfair models that may discriminate against certain minorities, another concern is often overlooked: the privacy risks posed by ML models.

Research has shown that ML models are susceptible to various attacks. A notable example is the Membership Inference attack, which lets an adversary predict whether a specific sample was part of the training set.
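To make the threat concrete, here is a minimal sketch of the simplest variant of this attack, a loss-threshold membership inference test; the `model`, `loss_fn`, and `threshold` names are illustrative placeholders, not material from the talk.

```python
# A minimal sketch of a loss-threshold membership inference attack.
# `model`, `loss_fn`, and `threshold` are hypothetical placeholders:
# any trained classifier and matching loss function would do.
import torch

def membership_score(model, loss_fn, x, y):
    """Lower loss on a sample suggests the model may have seen it in training."""
    model.eval()
    with torch.no_grad():
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    return -loss.item()  # higher score => more likely a training member

def infer_membership(model, loss_fn, x, y, threshold):
    # Predict "member" when the score exceeds a threshold calibrated on
    # samples known to be inside/outside the training set.
    return membership_score(model, loss_fn, x, y) > threshold
```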

Join me in this talk, where I will explain the privacy risks inherent in Machine Learning models. Beyond exploring potential attacks, I will show how techniques such as Differential Privacy and tools like Opacus (https://github.com/pytorch/opacus) can help you train more robust and privacy-preserving models.
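As a taste of what the talk covers, below is a minimal sketch of differentially private training with Opacus, using its `PrivacyEngine` to wrap a standard PyTorch training setup; the toy model, data, and hyperparameters are assumptions chosen purely for illustration.

```python
# A minimal sketch of DP-SGD training with Opacus.
# The model, data, and hyperparameters are illustrative placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data and model, purely for illustration
X = torch.randn(512, 20)
y = torch.randint(0, 2, (512,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap the model, optimizer, and loader so gradients are clipped
# per sample and Gaussian noise is added before each update (DP-SGD)
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # scale of the added noise
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

# Standard training loop; the privacy machinery is transparent here
for epoch in range(3):
    for batch_x, batch_y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for a chosen delta
print(f"epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```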

Tags: Privacy, Machine-Learning
Participant

Luca Corbucci

I’m Luca, a Computer Science PhD student at the University of Pisa, interested in Federated Learning and Privacy-Preserving ML. My passion for communities led me to co-found PointerPodcast, a podcast about tech and innovation. I’ve also co-founded two communities: Superhero Valley, fostering connections between academia and industry, and Pisa.dev, a local meetup for developers and computer scientists in Pisa.