China implemented comprehensive regulations restricting the creation of 'deepfakes', effective January 10, 2023.
The move is widely read as an effort to head off anti-regime sentiment linked to the recent blank paper protests. It is also an interesting case of a substantive prohibition, in contrast to the European Union and the United States, where concerns about press freedom have so far limited such regulation to recommendations. Of course, China's ability to enforce these measures rests on the Great Firewall, the internet censorship system it has operated for more than 20 years. Even so, the heavy burden of complying with the transparency and disclosure requirements is likely to keep raising questions, both inside and outside China, about how effective the measure can be.
The need to label synthetic content has been discussed since the earliest days of deepfake technology, but a reliable way to enforce such labeling has remained elusive. Moreover, as the adage 'the internet is forever' suggests, content that spreads is exceptionally difficult to erase completely, and even after content is removed, the collective memory of those who saw it does not simply disappear. This was evident with the 'Voice of April' video criticizing the Shanghai lockdown, which went on to influence the blank paper protests.
Historically, the integration of technology into society has proved unstoppable. Statistics indicate that 95% of deepfake videos worldwide are pornographic. Cases such as the fabricated surrender statement attributed to the Ukrainian president in the early days of Russia's invasion, or the use of deepfake technology to let Bruce Willis, who suffers from aphasia, appear in advertisements, highlight both the peril of the technology and its potential to affect society as a whole. Yet what we should truly be wary of may not be the question of how to regulate the technology itself. Novel ways to manipulate facts have always existed, and chasing the latest technical advance is a losing game. Instead, we should ask why such content is created and how it spreads, focusing on the social factors that underpin the dissemination of false narratives.
“Deepfake technology is morally questionable, but not inherently wrong.”
In her research, the ethicist and political philosopher Adrienne de Ruiter argued that what renders the outcomes of this technology immoral are 'expressing views on behalf of individuals who have not consented', 'intentionally misleading viewers', and 'malicious intent'. In other words, the people who create and view deepfakes, not the technology itself, are what we should be wary of. In particular, micro-targeted deepfake content aimed at individuals with significant social influence, such as celebrities and politicians, will make it extremely difficult to regulate the expression of a creator's intent.
So, how should we approach this issue going forward? Two directions suggest themselves.
First, we must acknowledge that we live in a world of cameras and recognition systems. The author of this article, and you, the reader, spend most of the day in front of phones and laptops equipped with cameras. From the system's perspective, human behavior is simply raw material for algorithms, and it is this fact that defines our place within the system.
Cameras that monitor whether a child is being well cared for exist to promote an ideal relationship between parents and caregivers, yet they are also non-human entities that learn and execute the intention of constraining humans. Recognizing that we coexist with these new entities can help us manage and respond to the unethical intentions behind deepfakes.
Second, community-based education on this issue must be developed and spread. In the digital realm we tend to seek a sense of belonging through weak ties. This reflects the loss of belonging to social groups, amplified by the pandemic, and the resulting desire to believe we are connected to unseen others through shared interests and tastes. Checking TikTok repeatedly at 2 a.m., browsing the not-always-reliable NamuWiki, dutifully watching Instagram stories, and neglecting less engaging group chats are all symptoms of this.
Deepfakes often exploit this craving for belonging through weak ties; but because viewers hold no deep-seated interest in the targeted individual, the impact of such content is relatively easy to blunt. While it may be hard for an individual to verify the authenticity of a deepfake intended to damage a politician's credibility, one project successfully distinguished truth from falsehood at the level of a political party. This suggests that educational programs grounded in a community's perspective, values, and practices can be an effective countermeasure. It also suggests that the platform providers on whose services deepfake content circulates can create strategic opportunities by building and offering their own community-based solutions to users.
There are positive uses of deepfake technology, such as the 'Fast & Furious' films, which brought back the late Paul Walker by overlaying his face onto a body double. The reality, however, is that cases like a fabricated sexual video that destroyed the life of a female journalist are also occurring.
It is worth remembering that film actors are currently the people best protected from deepfake technology, while modern society has yet to find a response for when it is turned against ordinary people. Before expecting legal regulation, perhaps the first step is to become more self-aware: to recognize our own participation in consuming deepfake content as entertainment on platforms such as TikTok.
*This article was originally published on February 14, 2023, as a named column in the Electronic Times.