Voice Spoofing Attacks: how AI can be used to fool an AI-based voice authentication system

Presentation Transcript


  1. Voice Spoofing Attacks: how AI can be used to fool an AI-based ASV system. Dr. Bhusan Chettri, who earned his PhD in AI and Speech Technology from Queen Mary University of London, explains how automatic speaker verification (ASV) systems can be fooled using AI.

Today's ASV systems, trained on big data with complex deep learning algorithms, show superior performance on many benchmark datasets and can recognise a person from even a small fragment of a speech utterance. However, recent research has shown that they are not 100% secure: they are vulnerable to fraudulent access launched through so-called voice spoofing attacks. An attacker with malicious intent may attempt a spoofing attack using a pre-recorded voice of the target speaker (a replay attack), or synthetic voices generated with technologies such as text-to-speech (TTS) synthesis and voice conversion (VC). Spoofing attacks can also be launched through impersonation or mimicry, although for this to succeed the attacker must be a skilled mimic.

Figure 1 summarises the different points from which attacks can be launched against a biometric system. Among these, the first two points are of greatest interest, as they are the most susceptible to a spoofing attack. Attacks at these points are generally categorised into two groups, depending on the method employed:

1. Physical access (PA): the manipulated voice is presented to the system's sensor (the microphone, in the case of voice biometrics), so the manipulated signal passes through the hardware sensor. Common examples include playback (replay) attacks and mimicry or impersonation.

2. Logical access (LA): the attack bypasses the biometric sensor and injects manipulated biometric data into the system directly. The two methods commonly used to launch LA attacks against voice biometric systems are text-to-speech synthesis and voice conversion.

In this article, Bhusan Chettri, PhD, Queen Mary University of London (QMUL), discusses PA attacks with a focus on the replay attack.

PA attack: replay (playback) spoofing attack. A replay spoofing attack involves playing back recorded speech samples of a target (enrolled) speaker to bypass an ASV system. This type of attack requires physical transmission of the spoofed speech through the system microphone, shown as point 1 in Fig. 1. It is one of the simplest forms of spoofing attack and requires no expertise in signal processing or machine learning.

  2. It can be implemented with nothing more than a smartphone used to record the target speaker's voice and play it back to the system.

Bonafide (genuine) speech is speech spoken by the target speaker during enrolment (or verification) and acquired directly by the ASV system's microphone. Replayed speech, in contrast, is obtained by playing back a pre-recorded bonafide utterance, which is then acquired by the system's microphone. Figure 2 illustrates bonafide and replayed speech signals.

The acoustic environment in which the bonafide speech and the replayed speech are acquired can be the same, in situations where an attacker manages to launch the attack from the same physical space. In practice, however, the acoustic space is usually different (e.g. a different closed room or office with no background noise), because an attacker would not want to risk being caught while launching the attack. The factors of interest in detecting replay attacks are therefore the changes and noise introduced into the bonafide speech by the loudspeaker of the playback device, by the recording device, and by the acoustic environment in which the replay is carried out; a simulation sketch of this signal chain is given below.

Figure 1: Possible locations [ISO/IEC, 2016] to attack an ASV system. 1: microphone point, 2: transmission point, 3: override feature extractor, 4: modify features, 5: override classifier, 6: modify speaker database, 7: modify biometric reference, 8: modify score, 9: override decision.
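To make that signal chain concrete, here is a minimal illustrative sketch (not taken from the slides) of how a replayed utterance could be simulated from a bonafide recording: the clean signal is passed through a hypothetical loudspeaker response, a room response, and a recording-microphone response, and device noise is added. The impulse responses, the SNR, and the helper name simulate_replay are made-up placeholders, not measured characteristics of any real device.

```python
# Illustrative sketch only: simulate a replayed utterance from a bonafide one.
# The impulse responses and noise level below are hypothetical placeholders.
import numpy as np
from scipy.signal import fftconvolve

def simulate_replay(bonafide, loudspeaker_ir, room_ir, mic_ir, snr_db=30.0):
    """Pass a bonafide waveform through a playback loudspeaker, a room, and a
    recording microphone, then add device noise at a chosen SNR."""
    x = bonafide
    # Playback device (loudspeaker) colouration.
    x = fftconvolve(x, loudspeaker_ir)[: len(bonafide)]
    # Acoustic environment where the replay is staged (room reverberation).
    x = fftconvolve(x, room_ir)[: len(bonafide)]
    # Recording device (microphone) response at the ASV sensor.
    x = fftconvolve(x, mic_ir)[: len(bonafide)]
    # Additive recording-device noise at the requested signal-to-noise ratio.
    signal_power = np.mean(x ** 2) + 1e-12
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    x = x + np.sqrt(noise_power) * np.random.randn(len(x))
    # Normalise to avoid clipping.
    return x / (np.max(np.abs(x)) + 1e-12)

# Toy usage with made-up impulse responses (a real study would measure these).
sr = 16000
bonafide = np.random.randn(sr)               # stand-in for a 1 s bonafide utterance
loudspeaker_ir = np.array([1.0, 0.3, 0.1])   # crude colouration placeholder
room_ir = np.zeros(800); room_ir[0] = 1.0; room_ir[400] = 0.4   # single echo
mic_ir = np.array([1.0, 0.2])
replayed = simulate_replay(bonafide, loudspeaker_ir, room_ir, mic_ir)
```

A replay countermeasure would then try to detect exactly these kinds of channel and environment artefacts in the received signal.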

  3. Figure 2: Difference between a genuine (bonafide) and a replayed speech signal.

To protect biometric systems from spoofing attacks, spoofing countermeasure solutions are often integrated within the verification pipeline (a minimal sketch of such an integration is given after the references). Voice spoofing countermeasures are currently an active research topic within the speech research community, and the community-driven Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof), a biennial competition, is held to promote anti-spoofing research for securing ASV systems. In the next article, Dr Bhusan Chettri will discuss how AI and big data can be used to design anti-spoofing solutions that protect voice authentication systems from spoofing attacks.

References
[1] https://scholar.google.co.uk/citations?user=Ht6H2WgAAAAJ&hl=en
[2] https://dblp.org/pid/194/1306.html
[3] M. Sahidullah et al. Introduction to Voice Presentation Attack Detection and Recent Advances, 2019.
[4] Bhusan Chettri. Voice biometric system security: Design and analysis of countermeasures for replay attacks. PhD thesis, Queen Mary University of London, August 2020. https://theses.eurasip.org/theses/866/voice-biometric-system-security-design-and/
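To illustrate what "integrated within the verification pipeline" can mean in practice, below is a minimal, hypothetical sketch of a cascaded countermeasure-plus-ASV decision. The function names, thresholds, and dummy scorers are placeholders invented for illustration; real deployments may instead fuse the countermeasure and ASV scores rather than cascade them.

```python
# Hypothetical cascaded integration of a spoofing countermeasure (CM) with an
# ASV system. Names, thresholds, and scorers are illustrative placeholders.
from typing import Callable
import numpy as np

def cascaded_decision(
    utterance: np.ndarray,
    claimed_speaker: str,
    cm_score: Callable[[np.ndarray], float],        # higher => more likely bonafide
    asv_score: Callable[[np.ndarray, str], float],  # higher => more likely the claimed speaker
    cm_threshold: float = 0.0,
    asv_threshold: float = 0.0,
) -> bool:
    """Accept the identity claim only if the utterance passes both the
    countermeasure stage and the speaker verification stage."""
    # Stage 1: the countermeasure flags spoofed (e.g. replayed or synthetic) speech.
    if cm_score(utterance) < cm_threshold:
        return False  # rejected as a suspected spoofing attack
    # Stage 2: conventional ASV checks whether the voice matches the claimed speaker.
    return asv_score(utterance, claimed_speaker) >= asv_threshold

# Toy usage with dummy scorers standing in for trained CM and ASV models.
accepted = cascaded_decision(
    utterance=np.random.randn(16000),     # stand-in for a 1 s, 16 kHz utterance
    claimed_speaker="speaker_0042",
    cm_score=lambda x: 1.0,               # pretend the CM says "bonafide"
    asv_score=lambda x, spk: 0.5,         # pretend the ASV says "target speaker"
)
```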
