Deepfake is a ‘neutral’ technology!

Update 11-10-2024: As an ex-colleague of mine mentioned: technology is never neutral, because of the impact it has on the way we perceive and interact with the world. That's why I put a more subjective element in the title.

So let that sink in! Deepfake is a ‘neutral’ technology! Wow, I hadn't thought about it until just recently, but it actually is. After attending a lecture by prof. dr. Theo Gevers about the positive effects of deepfakes, it occurred to me that deepfake technology just has a big image problem, but on its own, it's neutral. This realization has been lingering in my mind for a couple of weeks now and I want to get it out of my system and onto paper 🙂

Van der Sloot et al. (2021) from Tilburg University define it as follows: “A deepfake is content (video, audio or otherwise) that is wholly or partially fabricated or existing content (video, audio or otherwise) that has been manipulated.” If we look at our own bias from a societal perspective, these words already leave a bad taste. Just look at the words ‘fabricated’ and ‘manipulated’. Combined with how the technology has been covered in the media for years on end, you mostly hear and read about negative applications of this technique. Think of (online) scams where somebody uses deepfake technology to pose as someone they are not, ripping audio files to replicate someone's voice and using that to ask for money via WhatsApp, or all kinds of filters on Instagram and TikTok, like the Bold Glamour filter, a live deepfake tool that makes you more ‘beautiful’ (if that's your taste 😀 ). And how about all the deepfake images that Trump uses in his political campaign? So yeah, based on how the media has depicted deepfake technology for years now, it has a certain image. But… it's just a neutral technology. The way it's implemented is the ‘problem’, and that creates the bias surrounding this technique. So let me give you some positive deepfake examples, to counter this bias and to reinforce my statement that in the end, deepfake is just a neutral technology.

Some positive examples that use deepfake technology

One of the best examples of deepfake technology being used for good is the ongoing experimentation in trauma processing. In one experiment, traumatized victims of various forms of violence can have a live ‘conversation’ with the perpetrator, who is shown as a real-time avatar on a screen. Behind the scenes, a therapist controls this avatar, using their expertise to turn the session into a basis for mental healing. The conversation is staged, of course, but this kind of therapy session is already showing very positive results.

From my own experience, deepfake technology can also help close the gap between languages. Tools like Heygen give users the option to live-translate video into a different language, with full deepfake lip sync. The results are already rather astounding. In a couple of minutes, I had a short video in Dutch translated into Czech to welcome exchange students from that country! Another application of Heygen that worked wonders was during an AI hackathon we organized at Fontys Hogeschool Tilburg. In Midjourney, students created a digital persona for their ad campaign and brought that persona to ‘life’ with Heygen. The result was a static image of a persona that suddenly moved and talked with fully integrated lip sync!

I’m a Dutch dude, but right now I’m speaking fluent Czech!

In the entertainment industry, a lot of deepfake technology is used. Think of older actors like Robert De Niro and Harrison Ford who are ‘de-aged’ with the help of deepfake algorithms, or the way MyHeritage uses deepfake technology to transform static pictures of people into short videos in which they smile and talk.

So what now?

I felt that deepfake technology was having a hard time in the image department 🙂 Maybe it doesn't surprise you as much as it did me, but the realization that deepfake technology is a ‘neutral’ technology can give some interesting perspectives on its use cases, now and in the future. The word ‘fake’ carries a negative bias, but it's what we actually do with the technology that makes it ‘good’ or ‘bad’. With generative AI, creating something with deepfake technology gets easier by the day, and that can invite bad intentions. But that's not the only story there is for deepfake technology, so let's invest more effort into using it for good!

One thought on “Deepfake is a ‘neutral’ technology!”

  1. Interesting perspective. I guess there are a lot of people with very bad reputations who can be perfectly nice at times, and that would apply to tech too. And you're right: this tech being helpful and neutral most of the time makes it hard or impossible to ‘stop’ or regulate it, unlike the way we kind of control the proliferation of nuclear arms or suicide pills.

    As an educator, though, I feel a strong need to stop anyone who will listen from using this technology on anyone other than themselves. If teasing people by editing their pictures or voice becomes normalized, it will lead to many painful situations.
