The technology behind Deep Fake

When fake looks all too real

In this program, experts will explain the technology known as ‘Deep Learning’. They will show examples of benign and malign uses such as Deep Fake applications and discuss ways in which Artificial Intelligence researchers can help avoid misuse. With: Dieuwke Hupkes, Jelle Zuidema and Paul Boersma.

It seems impossible that Barack Obama would use swear words to describe Donald Trump on camera. And it is – yet we can see him do it in a completely realistic video. Technology known as ‘Deep Learning’ allows us to mimic people’s voices and to generate images and videos that are increasingly difficult to distinguish from the real thing. It allows us to produce endless amounts of text that, at least on a shallow reading, looks as if it were written by humans. In a world increasingly worried about fake news being taken for real, as well as real news being labelled as fake, such “Deep Fake” applications add much fuel to the fire. ‘You thought fake news was bad? Deep fakes are where truth goes to die’, The Guardian headlined ominously in 2018.

At the same time, Deep Learning is booming, and the same technology also brings many benefits, ranging from speech synthesis for patients with speech impairments to video entertainment. In this program, we bring together a number of experts on deep learning in language and speech processing and computer vision to explain some of the key ideas powering the technology, show some of its benign and malign uses, and discuss ways in which Artificial Intelligence researchers can help avoid misuse. What kind of techniques make Deep Fakes possible? What features may help us distinguish real speech, images, body movements and text from fake ones? Can the technology that creates the problem also rescue us from it?
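The program leaves these questions to the speakers, but one key idea behind many Deep Fake systems is the generative adversarial network (GAN): a generator learns to produce fakes while a discriminator learns to tell them from real data, and each improves by competing with the other. Below is a minimal, purely illustrative sketch of this adversarial loop on one-dimensional toy data; all names and parameters are our own, and real deepfake models use deep convolutional networks rather than the linear maps here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a 1-D Gaussian the generator must imitate.
def real_batch(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: affine map from noise to data space (parameters w_g, b_g).
# Discriminator: logistic regression (parameters w_d, b_d).
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(size=(32, 1))          # noise input
    fake = w_g * z + b_g                  # generated samples
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w_d * x + b_d)
        grad = p - label                  # d(cross-entropy)/d(logit)
        w_d -= lr * np.mean(grad * x)
        b_d -= lr * np.mean(grad)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=(32, 1))
    fake = w_g * z + b_g
    p = sigmoid(w_d * fake + b_d)
    grad_logit = (p - 1.0) * w_d          # backprop through D into G's output
    w_g -= lr * np.mean(grad_logit * z)
    b_g -= lr * np.mean(grad_logit)

# After training, the generator's offset b_g should have drifted
# toward the mean of the real data (4.0).
```

The same adversarial pressure that makes generated faces and voices hard to detect is also what makes GANs useful for benign synthesis: the generator is trained directly against an ever-improving critic of its own realism.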

About the speakers

Dieuwke Hupkes is finishing a PhD on deep learning models in natural language processing. She has published a series of articles on deep learning, NLP and language evolution, and is a co-organizer of the BlackboxNLP workshop in Florence (2019) and the Lorentz workshop on Compositionality in Humans and Machines in Leiden (2019).

Jelle Zuidema is Associate Professor of Computational Linguistics at the Institute for Logic, Language and Computation (ILLC). He studied artificial intelligence at Utrecht University, obtained a PhD in Linguistics at the University of Edinburgh, and worked at the Sony Computer Science Laboratory in Paris, the Free University in Brussels, and the University of Leiden before joining the ILLC. For the last seven years, his research has focused on improving and interpreting deep neural network models of language processing, and on building bridges with the biology and neuroscience of language.

Paul Boersma is Professor of Phonetics and Phonology at the ACLC, UvA.

Related programs
Should we be afraid of AI?

Facial recognition, self-driving cars, algorithmic fake news, Tinder matches, deepfakes and job interviews with a computer program. Whether we like it or not, we are surrounded by artificial intelligence. But what does that mean? Prompted by Stefan Buijsman's new book, we examine the dangers of AI. With: Stefan Buijsman, Ewoud Kieft, Maaike Harbers and Miriam Rasch (moderator).

Date
Wednesday 17 June 2020, 17:00
Location
Online