AI in Cyber Security | Jan Kahmen | 10 min read

Classification of Threats From Deepfakes

Threats from synthetic media such as deepfakes represent a growing challenge for all users of modern technologies and communications.


Synthetic media threats such as deepfakes pose a growing challenge to all users of modern technologies and communications. As with many technologies, synthetic media techniques can be used for both positive and malicious purposes, as shown in the paper "Contextualizing Deepfake Threats to Organizations" by the NSA and FBI, which this article summarizes. Although there is limited evidence of significant use of synthetic media techniques by state-sponsored malicious actors, the increasing availability of these techniques to less capable malicious cyber actors, and their growing effectiveness, suggest that their use will likely increase in frequency and sophistication.

Synthetic media threats span text, video, audio, and images used online and in conjunction with all types of communications. Deepfakes are a particularly worrisome type of synthetic media that uses artificial intelligence/machine learning (AI/ML) to create believable and highly realistic media. The most serious threats from the misuse of synthetic media include techniques that compromise an organization's brand, impersonate executives and financial officers, and use fraudulent communications to gain access to an organization's networks, communications, and sensitive information.

Organizations can take several steps to identify, defend against, and respond to deepfake threats. They should consider deploying a range of technologies to detect deepfakes and determine the provenance of media, including real-time inspection capabilities and passive detection techniques. Organizations can also take steps to minimize the impact of malicious deepfake techniques, including information sharing, planning and rehearsing responses to exploitation attempts, and employee training. Deepfake phishing in particular will become an even greater challenge than it is today, and organizations should proactively prepare to identify and combat it.

Types of Deepfake Threats

There are several terms used to describe media that has been synthetically created and/or manipulated. The most common are superficial (cheap) fakes, deepfakes, generative AI, and computer-generated imagery (CGI); each is described below.

Superficial Fakes

Multimedia content that has been manipulated with techniques other than machine/deep learning, which in many cases can be just as effective as more sophisticated methods, is often referred to as a superficial or cheap fake. These fakes are typically created by manipulating the original message in a genuine piece of media. Some explicit examples include:

  • Selectively copying and pasting content within an original scene, for example removing an object from an image to change the story.
  • Slowing down a video by inserting repeated frames so that the speaker sounds slurred or impaired (a minimal sketch of how simple this is follows this list).
  • Splicing in audio clips from another source to replace a video's audio track and change the story.
  • Using fabricated text to advance a narrative, causing financial losses and other repercussions.
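
To illustrate how low the technical bar for such cheap fakes is, the following is a minimal sketch of the frame-duplication slowdown described above. It assumes Python with OpenCV installed; input.mp4 is a hypothetical input file.

```python
# Minimal sketch: a "cheap fake" slowdown via frame duplication.
# Assumes OpenCV (pip install opencv-python); input.mp4 is hypothetical.
import cv2

reader = cv2.VideoCapture("input.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("slowed.mp4", fourcc, fps, (width, height))

ok, frame = reader.read()
while ok:
    writer.write(frame)  # original frame
    writer.write(frame)  # duplicated frame: halves the apparent speed
    ok, frame = reader.read()

reader.release()
writer.release()
```

Note that this crude approach also leaves a simple forensic trace: every frame appears exactly twice, so an analyst can detect the manipulation by searching for consecutive identical frames.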

Deepfakes

Multimedia content that has been either fully synthesized (created) or partially synthesized (edited) using machine/deep learning (artificial intelligence) is referred to as a deepfake. Some explicit examples include:

  • LinkedIn saw a sharp increase in deepfake images used as profile pictures in 2022.
  • An AI-generated image, containing invented content that seemed plausible but was not real, showed an explosion near the Pentagon; it was shared on the Internet in May 2023, causing general confusion and a brief reaction on the stock market.
  • A deepfake video showed Ukrainian President Volodymyr Zelenskyy calling on his country to surrender to Russia.
  • More recently, several Russian television and radio stations were hacked and an alleged deepfake video of President Vladimir Putin was broadcast, claiming that he had declared a state of emergency in Russia because of an alleged invasion by Ukraine.
  • Text-to-video diffusion models, which generate fully synthetic videos from text prompts, are another example of how the technology is evolving.
  • In 2019, deepfake audio recordings were used to steal $243,000 from a company in the UK. More recently, there has been a massive increase in personalized fraud using sophisticated, intensively trained AI voice-cloning models.
  • Openly available Large Language Models (LLMs) are now being used to generate the text of phishing emails.

Generative AI

As of 2023, Generative AI is gaining popularity for the many capabilities it offers for producing synthetic media. Generative AI techniques such as Generative Adversarial Networks (GANs), Diffusion Models, and Large Language Models, or combinations thereof, enable the production of highly realistic synthetic multimedia content by training on much larger data sets.
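
As a rough intuition for how one of these model families works, the following is a deliberately tiny sketch of the adversarial training loop behind GANs, assuming PyTorch and using toy 2-D data rather than real media. Production deepfake models are vastly larger, but the generator-versus-discriminator dynamic is the same.

```python
# Toy sketch of GAN training: a generator learns to produce samples that
# a discriminator cannot distinguish from "real" data. Assumes PyTorch.
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0      # stand-in "real" data
    fake = G(torch.randn(64, latent_dim))      # synthetic samples

    # Discriminator update: label real as 1, fake as 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The same adversarial pressure that drives quality upward is what makes GAN output progressively harder to distinguish from genuine media.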

Computer Generated Imagery (CGI)

CGI is the use of computer graphics to create or enhance visual media (images and video). Traditionally, these methods have been the standard for visual effects in most major motion pictures, but now that Generative AI techniques are getting better and cheaper, these two technologies are being merged to produce even more convincing fakes.

Detection vs. Authentication

For several years, public and private organizations have been raising concerns about manipulated multimedia content and developing means to detect it and counter it. Many partnerships have now emerged between public and private stakeholders, focused on cooperative efforts to detect these manipulations and to verify or authenticate multimedia content. Detection and authentication efforts differ in important ways because they pursue different goals. The biggest difference is that detection methods are usually passive forensic techniques, while authentication methods are active forensic techniques, deliberately embedded at the moment the media in question is captured or edited.

Detection efforts typically focus on developing methods that look for indications of tampering and present those indications as a numerical score or a visualization, alerting an analyst that the media needs further analysis. These methods rest on the assumption that modifications to original data, as well as fully synthetic media, leave statistically significant traces that can be found. This form of detection is a cat-and-mouse game: as detection methods are developed and made publicly available, countermeasures often follow quickly. However, until authentication standards are universally adopted, these methods remain necessary to support forensic analysis.
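
As a concrete, deliberately simplified illustration of such a passive technique, the following sketch performs a basic error level analysis (ELA), a classic forensic heuristic: re-save a JPEG at a known quality and inspect the residual, since spliced regions often recompress differently from the rest of the image. It assumes Python with Pillow installed; suspect.jpg is a hypothetical input, and real detectors are far more robust than this.

```python
# Toy passive forensic check: error level analysis (ELA).
# Assumes Pillow (pip install Pillow); "suspect.jpg" is hypothetical.
from PIL import Image, ImageChops
import io

img = Image.open("suspect.jpg").convert("RGB")

# Re-save at a fixed JPEG quality and reload from memory.
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=90)
buf.seek(0)
resaved = Image.open(buf)

# Residual between original and re-saved version; bright, localized
# regions in the residual can flag areas worth closer analysis.
residual = ImageChops.difference(img, resaved)
print("Residual extrema per channel:", residual.getextrema())
residual.save("suspect_ela.png")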

Authentication methods are designed to be embedded at the time of capture/creation or editing in order to make the origin of the media transparent. Some examples include digital watermarks embedded in synthetically generated media, active signals in real-time recordings to verify liveness, and cryptographic hashing of assets on the capture device.
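
As a minimal sketch of the cryptographic hashing idea, using only Python's standard library: the capture device hashes the media bytes at creation time and authenticates the hash with a device-held key, so any later edit invalidates the tag. The key handling here is purely illustrative; real provenance schemes such as C2PA use signed manifests and a proper PKI.

```python
# Illustrative sketch of "hash at capture, verify later" authentication.
# Standard library only; the device key would live in secure hardware.
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-device-secret"  # illustrative, not production

def sign_media(media_bytes: bytes) -> str:
    """Return an HMAC tag binding the media to the device's key."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag; any edit to the media invalidates it."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"...raw image bytes..."
tag = sign_media(original)
print(verify_media(original, tag))         # True: untouched media
print(verify_media(original + b"x", tag))  # False: media was altered
```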

How Deepfakes Can Threaten Organizations

Public concern about synthetic media also relates to its use in disinformation operations designed to influence the public and spread false information about political, social, military, or economic issues in order to cause confusion, unrest, and uncertainty. However, the synthetic media threats organizations most often face involve activities that can jeopardize the organization's own brand, financial standing, security, or integrity. The most significant synthetic media threats to the Department of Defense, national security systems, the defense industry, and critical infrastructure organizations include, but are not limited to, the following:

Impersonation of Executives for Brand Manipulation

Malicious actors can use deepfakes, in which audio and video are manipulated, to impersonate executives and other high-level personnel in an organization. Convincing audio and video impersonations of key executives can damage an organization's reputation and brand value when a deepfake spreads rapidly via social media before it can be stopped or refuted. Manipulated media operations targeting high-profile political figures, such as Ukrainian President Volodymyr Zelenskyy, have already been observed spreading disinformation and confusion. This technique can have a major impact, especially on international brands whose stock prices and reputations are vulnerable to disinformation campaigns. Given that potential impact, this type of deepfake is a significant concern for many CEOs and government leaders.

Manipulation for Financial Gain

Malicious actors, many of whom are likely cybercriminals, often use various types of manipulated media in social engineering campaigns for financial gain. This includes impersonating key executives or financial officers and using manipulated audio, video, or text to authorize the fraudulent transfer of funds into accounts controlled by the malicious actor. Business email compromise (BEC) scams belong to this type of social engineering and have cost organizations hundreds of millions of dollars. Similar techniques can also be used to manipulate the trading or sale of cryptocurrencies. In practice, such scams are widespread, and several partner organizations have reported being targeted by such operations.

Manipulation to Gain Access

Malicious actors can use the same types of manipulated media techniques to gain access to an organization's employees, operations, and information. This can include using manipulated media during job interviews, especially for remote positions. In 2022, malicious actors reportedly used synthesized audio and video during online interviews, although the content was often not coherent or synchronized, which betrayed the fraudulent nature of the calls. These attempts were enabled by stolen personal information.
