When fake looks all too real: the technology behind Deep Fake
In cooperation with the Faculty of Science
It seems impossible that Barack Obama would use swear words to describe Donald Trump on camera. And it is – yet we can see him do it in a completely realistic video. Technology known as 'Deep Learning' allows us to mimic people’s voices and to generate images and videos that are increasingly difficult to distinguish from the real thing. It also lets us produce endless amounts of text that, at least on a shallow reading, looks as if it were written by humans. In a world increasingly worried about fake news being taken for real, as well as real news being labelled as fake, such "Deep Fake" applications add much fuel to the fire. ‘You thought fake news was bad? Deep fakes are where truth goes to die’, The Guardian headlined ominously in 2018.
At the same time, Deep Learning is booming, and the same technology brings many benefits, ranging from speech synthesis for patients with speech impairments to video entertainment. In this program, we bring together a number of experts on deep learning in language and speech processing and in computer vision to explain some of the key ideas powering the technology, show some of its benign and malign uses, and discuss ways in which Artificial Intelligence researchers can help prevent misuse. What kinds of techniques make Deep Fakes possible? What features may help us distinguish real speech, images, body movements and text from fake ones? Can the technology that creates the problem also rescue us from it?
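One technique commonly cited as powering deepfake imagery is the generative adversarial network (GAN): a generator learns to produce fakes while a discriminator learns to tell them from real data, each improving against the other. The sketch below illustrates that adversarial loop on one-dimensional toy data; the single-parameter generator, logistic-regression discriminator, and learning rates are illustrative assumptions for this example, not the systems discussed in the program.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Real" data: scalars drawn from a Gaussian centred on 4.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: adds a learned offset mu to noise; starts far from the data.
mu = 0.0
# Discriminator: logistic regression D(x) = sigmoid(w * x + b).
w, b = 0.0, 0.0

lr_d, lr_g = 0.05, 0.05
for step in range(5000):
    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    xr = real_sample()
    xf = mu + random.gauss(0.0, 0.5)
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr_d * ((1 - dr) * xr - df * xf)
    b += lr_d * ((1 - dr) - df)

    # Generator step: move mu so the discriminator labels fakes as real
    # (gradient ascent on log D(fake), the "non-saturating" GAN loss).
    xf = mu + random.gauss(0.0, 0.5)
    df = sigmoid(w * xf + b)
    mu += lr_g * (1 - df) * w

print(round(mu, 2))  # mu should have drifted towards the real mean of 4
```

Real deepfake systems replace the two scalar models with deep convolutional networks over images or audio, but the adversarial game is the same: once the discriminator can no longer separate generated samples from real ones, neither, often, can we.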
About the speakers
Dieuwke Hupkes is finishing a PhD on deep learning models for natural language processing. She has published a series of articles on deep learning, NLP and language evolution, and is a co-organizer of the BlackboxNLP workshop in Florence (2019) and the Lorentz workshop on Compositionality in Humans and Machines in Leiden (2019).
Jelle Zuidema is Associate Professor of Computational Linguistics at the Institute for Logic, Language and Computation (ILLC). He studied artificial intelligence at Utrecht University, obtained a PhD in Linguistics at the University of Edinburgh, and worked at the Sony Computer Science Laboratory in Paris, the Free University in Brussels, and the University of Leiden before joining the ILLC. For the last seven years, his research has focused on improving and interpreting deep neural network models of language processing, and on building bridges to the biology and neuroscience of language.
Paul Boersma is Professor of Phonetics and Phonology at the ACLC, UvA.
You can sign up for this program for free. If you sign up, we count on your presence. If you are unable to attend, please let us know via firstname.lastname@example.org | T: +31 (0)20 525 8142.
Spui 25-27 | 1012 WX Amsterdam