Special issue information:
Deepfake media (image, audio and video) have recently begun to appear as evidence in isolated court cases, e.g., a 2020 family custody dispute in the UK in which one party submitted a deepfake audio recording as evidence to suggest violent behaviour by the other party. Deepfakes have also increasingly been used for criminal purposes, e.g., a 2019 fraud case in which the CEO of a UK energy company was induced, via a deepfake-powered telephone call, to transfer a large sum of money. The quality and performance of deepfake-based manipulation of media (i.e., face swapping, attribute editing and face morphing) and of fully synthetically generated deepfake media have reached a point where deepfake livestreaming is possible.
The growing accessibility and realism of deepfake media pose a number of practical questions for digital investigations, e.g., questions about the authenticity, integrity and provenance of media evidence, how well multimedia forensic methods perform against deepfakes, and how to prepare for the likelihood of a “deepfake defence”. Deepfakes can also affect court proceedings involving digital evidence, including admissibility rules and tests, and may have a negative effect on the ability of justice stakeholders to make unbiased decisions when they are exposed to deepfake evidence.
This special issue aims to improve understanding and foster discussion of the practical implications of, possible interventions for, and feasible solutions to the foreseen challenges of deepfake media evidence across civil and criminal investigations and justice systems in different jurisdictions. We welcome submissions from a range of disciplines, as long as they are relevant in the context of deepfake media evidence.