S-DCCRN: Super Wide Band DCCRN with learnable complex feature for speech enhancement

Shubo Lv, Yihui Fu, Mengtao Xing, Jiayao Sun, Lei Xie, Jun Huang, Yannan Wang, Tao Yu
Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University, Xi'an, China
Tencent Ethereal Audio Lab, Tencent Corporation, Shenzhen, China

0. Contents

  1. Abstract
  2. Demos -- DNS-2021 blind test (musical)
  3. Demos -- DNS-2021 blind test (emotional)
  4. Demos -- DNS-2021 blind test (real recordings)


1. Abstract

In speech enhancement, complex neural networks have shown promising performance due to their effectiveness in processing complex-valued spectra. Most recent speech enhancement approaches mainly focus on wide-band signals with a sampling rate of 16 kHz. However, research on super wide band (e.g., 32 kHz) or even full-band (48 kHz) denoising is still lacking due to the difficulty of modeling more frequency bands, particularly the high-frequency components. In this paper, we substantially extend our previous deep complex convolution recurrent neural network (DCCRN) to a super wide band version -- S-DCCRN, to perform speech denoising on speech with a 32 kHz sampling rate. We first employ a cascaded sub-band and full-band processing module, which consists of two small-footprint DCCRNs -- one operating on the sub-band signal and the other on the full-band signal, aiming to benefit from both local and global frequency information. Moreover, instead of simply adopting the STFT feature as input, we use a complex feature encoder trained in an end-to-end manner to refine the information of different frequency bands. We also use a complex feature decoder to revert the feature to the time-frequency domain. Finally, a learnable spectrum compression method is adopted to adjust the energy of different frequency bands, which is beneficial for neural network learning. The proposed model, S-DCCRN, surpasses PercepNet as well as several competitive models and achieves state-of-the-art performance in terms of speech quality and intelligibility. Ablation studies also demonstrate the effectiveness of the individual contributions.
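
Below is a minimal PyTorch sketch of the learnable spectrum compression idea mentioned above: the magnitude of each frequency band is raised to a learnable exponent while the phase is left untouched. The module name, per-bin parameterization, and initial exponent of 0.5 are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class LearnableSpectrumCompression(nn.Module):
    """Hypothetical sketch: per-band learnable magnitude compression.

    Compresses the magnitude of a complex STFT with a learnable exponent
    per frequency bin while keeping the phase unchanged.
    """

    def __init__(self, num_freq_bins: int, init_power: float = 0.5):
        super().__init__()
        # One learnable compression exponent per frequency bin.
        # (Assumption: the paper may instead share exponents across
        # coarser bands.)
        self.power = nn.Parameter(torch.full((num_freq_bins,), init_power))

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: complex STFT of shape (batch, freq, time)
        mag = spec.abs().clamp(min=1e-8)
        phase = spec / mag  # unit-magnitude complex phase term
        # Compress each band's energy with its own exponent.
        compressed = mag ** self.power.unsqueeze(-1)
        return compressed * phase


# Usage sketch: compress a 32 kHz signal's STFT before feeding the network.
spec = torch.stft(torch.randn(1, 32000), n_fft=1024, return_complex=True)
out = LearnableSpectrumCompression(num_freq_bins=spec.shape[1])(spec)
```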




2. Demos -- DNS-2021 blind test (musical)

[Audio demos, Samples 1-4 for each model: Noisy, RNNoise, DCCRN, SAF, S-DCCRN]


3. Demos -- DNS-2021 blind test (emotional)

[Audio demos, Samples 1-4 for each model: Noisy, RNNoise, DCCRN, SAF, S-DCCRN]


4. Demos -- DNS-2021 blind test (real recordings)

[Audio demos, Samples 1-4 for each model: Noisy, RNNoise, DCCRN, SAF, S-DCCRN]