Legal Issues

Deep Fake

Advances in technology bring convenience to people, but they also bring a dark side. The most prominent social issue to emerge since the advent of deep learning is the "Deep Fake," a portmanteau of "deep learning" and "fake." A Deep Fake is synthetic media in which the face of a person in an image or video is replaced with someone else's likeness. Deep Fakes use artificial intelligence (AI) and machine learning techniques to manipulate or fabricate visual and audio content convincingly. The main machine learning methods used in Deep Fake production are based on deep learning, and the social problems and crimes they enable include revenge pornography and forged evidence created through face synthesis. As deep learning technology grows more advanced, distinguishing real from fake becomes increasingly difficult; legal and technical countermeasures, however, are failing to keep pace with the Deep Fake's rate of development. According to Hoque (2021), "Someone can easily use face-swapping methods to construct fake porn images and videos, and they can further use them as revenge porn. With face editing methods, such attackers can remove clothes on different body parts in the images and videos" (p. 4). In addition, Hoque (2021) states, "Even people can be made invisible in the videos, which can have severe consequences when video recordings are essential for evidence in a legal court" (p. 4). Deep Fakes also undermine trust on the Internet, which is already flooded with information. The spread of fake news and false information through Deep Fakes erodes people's faith in the media. Raymond (2019) mentions, "You need only think of the damaging and divisive role played by social media in the US and other elections, and Brexit, to realize the potential damage well-crafted deep fakes could cause.
In fragile democracies divided by strongman politics and cultural and tribal divides, the potential for using them to stir up hate and violence is a very real possibility" (para. 8).

Deep Locker

Another issue raised by deep learning is security: malicious code built with deep learning techniques can make systems vulnerable in new ways. A case in point is DeepLocker, proof-of-concept malware that embeds its malicious code in a neural network, allowing it to gain webcam access and monitor an application's users while evading detection. According to Ben (2018), "But DeepLocker, the proof-of-concept malware developed by IBM, showed that such attacks might soon become the normal modus operandi of malicious hackers. DeepLocker had embedded its malicious behavior and payload into a neural network to hide it from endpoint security tools, which usually look for signatures and predefined patterns in the binary files of applications" (para. 34).