Deepfakes: a Communications Director's friend or foe?
Imagine you are a Communications Director and you receive an audio message from your country manager asking you to take a certain action, or perform a certain transaction. You listen with half an ear and don’t give it a second thought. You do as you’re asked, only to find out later that the audio was a deepfake, using samples from an investor roadshow.
This real-life scenario was shared by an audience member during our recent event for the Asia-Pacific Association of Communication Directors (APACD) titled "Deepfakes: A Comms Person's Friend or Foe?", held at the HBM studio on May 8 (watch the video).
“For those of us who work closely with him, we realized it was very close, it's 90%,” she said. “If you were doing your email, or you were just partially distracted, you would have believed it.”
The anecdote prompted participants to evaluate their own strategies – or lack thereof – and consider bolstering their defences against potential deepfake threats.
Together with our special guest, ESSEC Asia Pacific Professor of Statistics and Machine Learning, Pierre Alquier, participants came to four conclusions:
1. Deepfake technology will only continue to improve
We kickstarted our conversation by showing four videos and asking our guests to spot the real one amongst the deepfakes.
Slightly inaccurate mouth movements by the speaker gave it away, but as Prof Alquier pointed out, the software will continue to develop to a point where viewers will not be able to tell a genuine video from a deepfake.
“It is feasible to produce a video where human beings will not see the difference.”
2. We need protocols
Deepfakes are already being used by malicious actors, but are we prepared for them?
Our guests concluded that, to defend against this threat, comms departments must create and implement processes and guidelines for dealing with deepfakes. They exchanged insights on possible protocols comms directors could adopt.
Below you will find a sample protocol for when such deepfake scams arise.
3. We need to educate audiences
Our guests emphasized the importance of educating audiences to return to the old adage, “believe half of what you see and none of what you hear”. This starts with parents teaching their children to be sceptical of videos, and also how to avoid being ‘deepfaked’ themselves.
“Part of the education would be also: don’t post online all your holiday videos if it's not necessary, because if there is too much photos of you online, it would be easier to make a deepfake,” one guest said.
We also covered the use of hotlines to verify the authenticity of video and audio.
4. Deepfakes also have a positive role to play
Despite the concerns surrounding deepfakes, our guests also highlighted positive examples of harnessing AI.
One of our guests revealed they are already using deepfake technology to create training courses in multiple languages.
Prof Alquier shared that some Japanese dementia clinics use AI “nurses” to regularly talk to patients.
Deepfakes could provide live language translation with cultural nuances, and even sign language.
The conversation underscored the imperative for communication professionals to get ahead of the curve by raising awareness in their organisations of both the threat and benefits of AI.
Ultimately, the answer to the question our event asked is that deepfakes are not friend or foe, but friend and foe.