Synthetic media threats such as deepfakes pose a growing challenge to all users of modern technologies and communications. As with many technologies, synthetic media techniques can be used for both positive and malicious purposes, as shown in the paper "Contextualizing Deepfake Threats to Organizations" by the NSA and FBI and described in more detail here. Although there is limited evidence of significant use of synthetic media techniques by malicious state-sponsored actors, the increasing availability and effectiveness of these techniques for less capable malicious cyber actors suggest that their use is likely to increase in frequency and sophistication.

Synthetic media threats largely involve manipulated text, video, audio, and images used online and in conjunction with all types of communications. Deepfakes are a particularly worrisome type of synthetic media that uses artificial intelligence/machine learning (AI/ML) to create believable, highly realistic content. The most serious threats from synthetic media misuse include techniques that compromise an organization's brand, impersonate executives and financial officers, and use fraudulent communications to gain access to an organization's networks, communications, and sensitive information.
Organizations can take several steps to identify, defend against, and respond to deepfake threats. They should consider using a range of technologies to detect deepfakes and determine the origin of media, including real-time inspection capabilities and passive detection techniques. Organizations can also take steps to minimize the impact of malicious deepfake techniques, including information sharing, planning and rehearsing responses to exploitation attempts, and employee training. Deepfake phishing in particular is likely to become an even greater challenge than it is today, and organizations should proactively prepare to identify and combat it.
There are several terms used to describe media that has been synthetically created and/or manipulated. Among the most common are deepfakes, shallow or cheap fakes, generative AI, and computer-generated imagery (CGI), each described below.
Multimedia content that has been manipulated using non-machine/deep learning techniques, which in many cases can be just as effective as more sophisticated techniques, is often referred to as shallow fakes or cheap fakes. These fakes are typically created by altering the context or message of authentic media. Examples include selectively edited or photoshopped images and video that has been slowed down or sped up to change its apparent meaning.
Multimedia content that has been either created (fully synthetic) or edited (partially synthetic) using machine/deep learning (artificial intelligence) is referred to as deepfakes. Examples include face swaps in video, cloned voices, and fully synthetic images of people who do not exist.
As of 2023, generative AI is gaining popularity for the many capabilities it offers for producing synthetic media. Generative AI techniques such as generative adversarial networks (GANs), diffusion models, and large language models (or combinations thereof) are the machine learning methods that enable the production of highly realistic synthetic multimedia content, trained on very large data sets.
Computer-generated imagery (CGI) is the use of computer graphics to create or enhance visual media (images and video). Traditionally, these methods have been the standard for visual effects in most major motion pictures, but as generative AI techniques become better and cheaper, the two technologies are increasingly being combined to produce even more convincing fakes.
For several years, public and private organizations have been raising concerns about tampered multimedia content and developing countermeasures to detect and identify it. Many partnerships have now emerged between public and private stakeholders focused on cooperative efforts to detect these manipulations and to verify and authenticate multimedia content. Detection and authentication efforts differ considerably because they have different goals. The biggest difference is that detection methods are typically passive forensic techniques, while authentication methods are active forensic techniques embedded at the time the media in question is captured or edited. Detection efforts typically focus on developing methods that look for indications of tampering and present those indications as a numerical output or visualization to alert an analyst that the media needs further analysis. These methods are developed under the assumption that modifications to original data, as well as fully synthetic media, leave statistically significant traces that can be found. This form of detection is a cat-and-mouse game: as detection methods are developed and made publicly available, countermeasures to defeat them often follow quickly. However, until authentication standards are universally adopted, these methods remain necessary to support forensic analysis.
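To make the passive, detection-oriented approach more concrete, the sketch below shows one classic forensic heuristic, Error Level Analysis (ELA), applied to JPEG images. It is only a minimal illustration of the general idea of surfacing tampering indications as a visualization for an analyst, not one of the specific methods referenced above; the function name, file names, and quality/scale parameters are illustrative choices, and the Pillow library is assumed to be available.

```python
# Minimal Error Level Analysis (ELA) sketch: one classic passive forensic
# heuristic. Regions that were recompressed differently than the rest of a
# JPEG tend to show different error levels, which can hint at local edits.
# Illustrative only; real detection pipelines are far more involved.
import io
from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90, scale: int = 15) -> Image.Image:
    """Return an amplified difference image between the original and a re-saved copy."""
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality and reload it.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference; amplify so faint artifacts become visible.
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda value: min(255, value * scale))


if __name__ == "__main__":
    # A bright, localized region in the output is a cue for closer inspection,
    # not proof of manipulation.
    error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```

As the comments note, output like this is a triage aid: it flags media for an analyst's attention rather than rendering a verdict on authenticity.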
Authentication methods are designed to be embedded at the time of capture/creation or editing to make the origin of the media transparent. Some examples include digital watermarks that can be used in synthetically generated media, active signals in real-time recordings to verify liveness, and cryptographic asset hashing on a device.
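As a rough illustration of the cryptographic hashing idea mentioned above, the sketch below hashes a media file at capture or creation time and signs the digest so that later tampering or substitution can be detected. This is a minimal example assuming Python with the standard hashlib module and the third-party cryptography package; it does not implement any particular provenance or authentication standard, and the file name and helper names are hypothetical.

```python
# Minimal provenance sketch: hash a media file and sign the digest so that
# later edits or substitutions can be detected. This illustrates the idea
# behind cryptographic asset hashing, not any specific standard.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def hash_file(path: str) -> bytes:
    """Compute a SHA-256 digest of the file contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.digest()


def sign_asset(path: str, key: Ed25519PrivateKey) -> bytes:
    """Sign the digest; the signature would be stored alongside the asset."""
    return key.sign(hash_file(path))


def verify_asset(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Return True if the asset still matches the signed digest."""
    try:
        public_key.verify(signature, hash_file(path))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()      # in practice, a device-bound key
    sig = sign_asset("recording.wav", key)  # done at capture/creation time
    print(verify_asset("recording.wav", sig, key.public_key()))
```

In a real deployment the private key would be protected on the capture device and the signature distributed with the media, so that any downstream consumer can check whether the asset has changed since it was created.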
Public concern about synthetic media also relates to its use in disinformation operations aimed at influencing the public and spreading false information about political, social, military, or economic issues to cause confusion, unrest, and uncertainty. However, the synthetic media threats that organizations most often face involve activities that can jeopardize the brand, financial condition, security, or integrity of the organization itself. The most significant synthetic media threats to the Department of Defense, National Security Systems, the defense industrial base, and critical infrastructure organizations involve risks and potential impacts that include, but are not limited to, the following:
Malicious actors can use deepfakes, in which audio and video are manipulated, to impersonate executives and other high-level personnel in an organization. They can use convincing audio and video impersonations of key executives to damage an organization's reputation and brand value by rapidly disseminating a convincing deepfake via social media before it can be stopped or refuted. Manipulated media operations targeting high-profile political figures such as Ukrainian President Volodymyr Zelenskyy have been observed spreading disinformation and confusion. This technique can have a major impact, especially on international brands whose stock prices and overall reputations are vulnerable to disinformation campaigns. Given the high impact, this type of deepfake is a significant concern for many CEOs and government leaders.
Malicious actors, many of whom are likely cybercriminals, often use various types of manipulated media in social engineering campaigns for financial gain. This includes impersonating key executives or financial officers and using manipulated audio, video, or text to authorize the fraudulent transfer of funds into accounts controlled by the malicious actor. Business email compromise (BEC) scams fall into this category of social engineering and have cost organizations hundreds of millions of dollars in losses. Similar techniques can also be used to manipulate the trading or sale of cryptocurrencies. In practice, these scams are widespread, and several partners have reported being targeted by such operations.
Malicious actors can use the same types of manipulated media techniques to gain access to an organization's employees, operations, and information. This can include techniques such as using manipulated media during job interviews, especially for remote jobs. In 2022, malicious actors reportedly used synthesized audio and video during online interviews, although the content was often not coherent or synchronized, indicating the fraudulent nature of the calls. These attempts were enabled by stolen personal information.