
IEEE TEMS: “Predictive Privacy, or the Risk of Secondary Use of Trained ML Models” (DigHum Series)

Tuesday, 17. October 2023, 17:00 - 19:00
Category: Lectures & Presentations

https://events.vtools.ieee.org/tego_/event/manage/377260

We are pleased to invite you to our next talk in our Lecture Series:

Tuesday, October 17, 2023 at 5:00 p.m. (17:00) Central European Time (UTC+2)

Topic: “Predictive Privacy, or the Risk of Secondary Use of Trained ML Models”
(scroll down for abstract and CV)

Speaker: Rainer Mühlhoff (University of Osnabrück, Germany)
Moderator: Klaus Staudacher (bidt, Germany)

To participate in the talks via Zoom go to: https://tuwien.zoom.us/j/96389928143?pwd=UU5YRkNuRmdoWHV4MFBwMWRCcUErdz09
(Password: 0dzqxqiy)

The talk will be live streamed and recorded on our YouTube channel: https://www.youtube.com/digitalhumanism

For further announcements and information about the speakers in the Lecture Series, see https://dighum.org/#latest-news. Please note that you can access the slides and recordings of our past events via that link.

In case you missed the last event, you can watch the recording of “Why does AI need to be Trustworthy and the axis of power between Big Tech and society” by Kay Firth-Butterfield.

And an announcement: the Digital Humanism Fellowship Conference "Can Machines Save the World?" will take place November 16–17 in Vienna.

We are looking forward to seeing you!

Hannes Werthner
-----
Sign the Vienna Manifesto: https://dighum.ec.tuwien.ac.at
Follow us on Twitter @DigHumTUWien
-----

ABSTRACT “Predictive Privacy, or the Risk of Secondary Use of Trained ML Models”:

Big data and artificial intelligence (AI) pose a new challenge for data protection when these techniques are used to make predictions about individuals. This can affect both individuals who are not in the training data and individuals targeted through the secondary use of trained models. In this talk I will use the ethical notion of “predictive privacy” to argue that trained models are the biggest blind spot in current data protection regimes and other regulatory projects concerning AI. I argue that the mere possession of a trained model constitutes an enormous aggregation of informational power that should be the target of regulation even before the model is applied to concrete cases. This is because the model has the potential to be used and reused in different contexts with few legal or technical barriers, even as a result of theft or covert business activities. The current focus of data protection on the input stage distracts from the arguably much more serious data protection issue posed by trained models and, in practice, leads to a bureaucratic overload that harms the reputation of data protection by opening the door to its denigrating portrayal as an inhibitor of innovation.

Short Bio of Rainer Mühlhoff:

Rainer Mühlhoff, philosopher and mathematician, is Professor of Ethics of Artificial Intelligence at the University of Osnabrück. He researches ethics, data protection and critical social theory in the digital society. In his interdisciplinary work, he brings together philosophy, media studies and computer science to investigate the interplay of technology, power and social change. Contact and further information: https://RainerMuehlhoff.de/en/

Short Bio of Klaus Staudacher:

Klaus Staudacher is a Researcher at the bidt. He holds a Master’s degree in Philosophy (Philosophy, French and International Law) and has worked as a graduate teaching assistant at the chair of Philosophy and Political Theory (Prof. J. Nida-Rümelin) at LMU Munich, along with other chairs of practical philosophy. His main research interests comprise issues of moral responsibility in AI systems and, more generally, ethical questions related to digital transformation.

Location online: see above