
Multi-microphone Beamforming

Introduction

Using a microphone array can be very handy to improve the signal quality (e.g., reduce reverberation and noise) prior to performing speech recognition tasks. Microphone arrays can also estimate the direction of arrival of a sound source, and this information can then be used to "listen" in the direction of the source of interest.

Propagation Model

We assume the following propagation model for sound:

\(x_m[n] = h_m[n] \star s[n] + b_m[n]\),

where \(m\) stands for the microphone index, \(n\) for the sample index, and \(h_m\) for the room impulse response. The expression \(s[n]\) stands for the speech source signal, \(b_m[n]\) for the additive noise, and \(x_m[n]\) for the signal captured at microphone \(m\). These signals can also be represented in the frequency domain:

\(X_m(t,j\omega) = H_m(j\omega)S(t,j\omega) + B_m(t,j\omega)\),

or in vector form:

\(\mathbf{X}(t,j\omega) = \mathbf{H}(j\omega)S(t,j\omega) + \mathbf{B}(t,j\omega)\).

Note that \(\mathbf{X}(t,j\omega) \in \mathbb{C}^{M \times 1}\).

In the anechoic case, we can substitute \(h_m[n] = a_m[n] = \delta(n-\tau_m)\), and we write \(H_m(j\omega) = A_m(j\omega) = e^{-j\omega\tau_m}\), where \(\tau_m\) is the time delay of the direct path in samples, or in vector form \(\mathbf{A}(j\omega) \in \mathbb{C}^{M \times 1}\).
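To make this concrete, here is a minimal NumPy sketch (all values are illustrative; the array geometry and DOA happen to match the ones used later in this tutorial) that builds the direct-path delays \(\tau_m\) and the anechoic steering vector \(\mathbf{A}(j\omega)\) under a free-field assumption:

import numpy as np

fs = 16000                                    # sampling rate (Hz)
c = 343.0                                     # speed of sound (m/s)
mics = np.array([[-0.05, -0.05, 0.0],         # microphone positions (m)
                 [-0.05, +0.05, 0.0],
                 [+0.05, +0.05, 0.0],
                 [+0.05, -0.05, 0.0]])
u = np.array([-0.82918, 0.55279, -0.082918])  # unit vector pointing at the source

taus = -(mics @ u) / c * fs                   # direct-path delays tau_m (in samples)

n_fft = 512
omegas = 2 * np.pi * np.arange(n_fft // 2 + 1) / n_fft  # normalized rad/sample
A = np.exp(-1j * np.outer(omegas, taus))      # steering vectors, shape (n_bins, M)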

Covariance Matrices

We also use the following covariance matrices with some of the beamformers:

\(\displaystyle\mathbf{R}_{XX}(j\omega) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{X}(t,j\omega)\mathbf{X}^H(t,j\omega)\)

\(\displaystyle\mathbf{R}_{SS}(j\omega) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{H}(j\omega)\mathbf{H}^H(j\omega)|S(t,j\omega)|^2\)

\(\displaystyle\mathbf{R}_{NN}(j\omega) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{B}(t,j\omega)\mathbf{B}^H(t,j\omega)\)

In practice, time-frequency masks are often used to estimate the speech and noise covariance matrices:

\(\displaystyle\mathbf{R}_{SS}(j\omega) \approx \frac{1}{T}\sum_{t=1}^{T}M_S(t,j\omega)\mathbf{X}(t,j\omega)\mathbf{X}^H(t,j\omega)\)

\(\displaystyle\mathbf{R}_{NN}(j\omega) \approx \frac{1}{T}\sum_{t=1}^{T}M_N(t,j\omega)\mathbf{X}(t,j\omega)\mathbf{X}^H(t,j\omega)\)
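For illustration, here is what these mask-based estimates look like in NumPy (a sketch with made-up shapes and random placeholders; in practice M_S and M_N would come from a mask estimator, e.g. a neural network):

import numpy as np

T, n_bins, M = 100, 257, 4                      # frames, frequency bins, microphones
Xs = np.random.randn(T, n_bins, M) + 1j * np.random.randn(T, n_bins, M)
M_S = np.random.rand(T, n_bins)                 # speech mask in [0, 1] (placeholder)
M_N = 1.0 - M_S                                 # noise mask

# One (M x M) covariance matrix per frequency bin:
R_SS = np.einsum('tf,tfm,tfn->fmn', M_S, Xs, Xs.conj()) / T
R_NN = np.einsum('tf,tfm,tfn->fmn', M_N, Xs, Xs.conj()) / T
print(R_SS.shape)                               # (257, 4, 4)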

Time Difference of Arrival

The time difference of arrival between microphones \(1\) and \(m\) can be estimated with the Generalized Cross-Correlation with Phase Transform (GCC-PHAT) using the following expression:

\(\displaystyle\tau_m = \operatorname{argmax}_{\tau} \int_{-\pi}^{+\pi}{\frac{X_1(j\omega) X_m(j\omega)^*}{|X_1(j\omega)||X_m(j\omega)|}e^{j\omega\tau}}d\omega\)
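A compact NumPy version of this estimator could look like the following (a basic sketch, not the internals of SpeechBrain's GccPhat module):

import numpy as np

def gcc_phat(x1, xm, fs, max_tau=None):
    """Estimate the TDOA (seconds) between x1 and xm with GCC-PHAT (sketch)."""
    n = len(x1) + len(xm)
    X1 = np.fft.rfft(x1, n=n)
    Xm = np.fft.rfft(xm, n=n)
    cross = X1 * np.conj(Xm)                    # cross-spectrum X_1 X_m^*
    cross /= np.abs(cross) + 1e-12              # phase transform (whitening)
    cc = np.fft.irfft(cross, n=n)               # generalized cross-correlation
    max_shift = n // 2 if max_tau is None else int(max_tau * fs)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs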

Direction of Arrival

Steered-Response Power with Phase Transform

SRP-PHAT scans each potential direction of arrival on a virtual unit sphere around the array and computes the corresponding power. For each DOA (denoted by the unit vector \(\mathbf{u}\)), there is a steering vector \(\mathbf{A}(j\omega,\mathbf{u}) \in \mathbb{C}^{M \times 1}\) in the direction of \(\mathbf{u}\). The power corresponds to:

\(\displaystyle E(\mathbf{u}) = \sum_{p=1}^{M}{\sum_{q=p+1}^{M}{\int_{-\pi}^{+\pi}{\frac{X_p(j\omega)X_q(j\omega)^*}{|X_p(j\omega)||X_q(j\omega)|}}}A_p(j\omega,\mathbf{u})A_q(j\omega,\mathbf{u})^* d\omega}\)

The DOA with the maximum power is selected as the DOA of the sound source:

\(\mathbf{u}_{max} = \operatorname{argmax}_{\mathbf{u}}{E(\mathbf{u})}\)
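The scan itself can be sketched as follows (illustrative NumPy; it reuses the shape and delay conventions of the earlier sketches, and replaces the explicit pairwise sum with the squared sum over microphones, which differs only by a constant offset under PHAT weighting and therefore leaves the argmax unchanged):

import numpy as np

def srp_phat_scan(Xs, mics, grid, fs, c=343.0):
    """Return the grid direction with the highest SRP-PHAT power (sketch).

    Xs: (T, n_bins, M) STFT frames; grid: (D, 3) candidate unit vectors.
    """
    T, n_bins, M = Xs.shape
    omegas = np.pi * np.arange(n_bins) / (n_bins - 1)  # normalized rad/sample
    Xw = Xs / (np.abs(Xs) + 1e-12)                     # PHAT weighting
    E = np.zeros(len(grid))
    for d, u in enumerate(grid):
        taus = -(mics @ u) / c * fs                    # delays in samples
        A = np.exp(-1j * np.outer(omegas, taus))       # steering vectors (n_bins, M)
        Y = np.einsum('tfm,fm->tf', Xw, A.conj())      # steer, then sum over mics
        E[d] = np.sum(np.abs(Y) ** 2)                  # steered-response power
    return grid[np.argmax(E)]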

Multiple Signal Classification

MUSIC scans each potential direction of arrival on a virtual unit sphere around the array and computes the corresponding power. For each DOA (denoted by the unit vector \(\mathbf{u}\)), there is a steering vector \(\mathbf{A}(j\omega,\mathbf{u}) \in \mathbb{C}^{M \times 1}\) in the direction of \(\mathbf{u}\). The matrix \(\mathbf{U}(j\omega) \in \mathbb{C}^{M \times S}\) holds the \(S\) eigenvectors associated with the \(S\) smallest eigenvalues obtained from the eigendecomposition of \(\mathbf{R}_{XX}(j\omega)\). The power corresponds to:

\(\displaystyle E(\mathbf{u}) = \frac{\mathbf{A}(j\omega,\mathbf{u})^H \mathbf{A}(j\omega,\mathbf{u})}{\sqrt{\mathbf{A}(j\omega,\mathbf{u})^H \mathbf{U}(j\omega)\mathbf{U}(j\omega)^H\mathbf{A}(j\omega,\mathbf{u})}}\)

The DOA with the maximum power is selected as the DOA of the sound source:

\(\mathbf{u}_{max} = \operatorname{argmax}_{\mathbf{u}}{E(\mathbf{u})}\)
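A per-bin sketch of this pseudo-spectrum (illustrative NumPy, not the internals of the Music module; with one source, the noise subspace keeps the \(S = M - 1\) smallest eigenvalues):

import numpy as np

def music_spectrum(R_xx, A, n_sources=1):
    """MUSIC pseudo-spectrum at one frequency bin (sketch).

    R_xx: (M, M) covariance matrix; A: (D, M) steering vectors for D DOAs.
    """
    eigvals, eigvecs = np.linalg.eigh(R_xx)        # eigenvalues in ascending order
    U = eigvecs[:, :R_xx.shape[0] - n_sources]     # S smallest -> noise subspace
    num = np.einsum('dm,dm->d', A.conj(), A).real  # A^H A for each direction
    proj = A.conj() @ U                            # A^H U, shape (D, S)
    den = np.sqrt(np.sum(np.abs(proj) ** 2, axis=1))
    return num / (den + 1e-12)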

Beamforming

We apply beamforming in the frequency domain: \(Y(j\omega) = \mathbf{W}^H(j\omega)\mathbf{X}(j\omega)\).

Delay-and-Sum

The delay-and-sum beamformer aims to align the speech signals so that they add up constructively. The coefficients are chosen such that:

\(\mathbf{W}(j\omega) = \frac{1}{M} \mathbf{A}(j\omega)\).
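In the anechoic model this simply phase-aligns each channel and averages; a tiny sketch with random placeholders (shapes follow the earlier sketches):

import numpy as np

T, n_bins, M = 10, 257, 4
A = np.exp(-2j * np.pi * np.random.rand(n_bins, M))  # steering vectors (placeholder)
Xs = np.random.randn(T, n_bins, M) + 1j * np.random.randn(T, n_bins, M)

W = A / M                                            # delay-and-sum weights
Ys = np.einsum('fm,tfm->tf', W.conj(), Xs)           # Y(t,jw) = W^H X(t,jw)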

Minimum Variance Distortionless Response

The MVDR beamformer has the following coefficients:

\(\displaystyle\mathbf{W}(j\omega) = \frac{\mathbf{R}_{XX}^{-1}(j\omega)\mathbf{A}(j\omega)}{\mathbf{A}^H(j\omega)\mathbf{R}_{XX}^{-1}(j\omega)\mathbf{A}(j\omega)}\).
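Per frequency bin this is a linear solve rather than an explicit inverse; a sketch (the diagonal loading eps is a common numerical stabilizer, not part of the formula above):

import numpy as np

def mvdr_weights(R_xx, A, eps=1e-6):
    """MVDR weights for one frequency bin (sketch). R_xx: (M, M); A: (M,)."""
    M = R_xx.shape[0]
    R_inv_A = np.linalg.solve(R_xx + eps * np.eye(M), A)  # R_XX^{-1} A
    return R_inv_A / (A.conj() @ R_inv_A)                 # enforces W^H A = 1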

Generalized Eigenvalue

The GEV beamformer coefficients correspond to the principal component obtained from the generalized eigenvalue decomposition, such that:

\(\mathbf{R}_{SS}(j\omega)\mathbf{W}(j\omega) = \lambda\mathbf{R}_{NN}(j\omega)\mathbf{W}(j\omega)\)
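With SciPy, this principal component can be obtained directly (a sketch; scipy.linalg.eigh solves the generalized problem and returns eigenvalues in ascending order, so the last column is the principal eigenvector):

import numpy as np
from scipy.linalg import eigh

def gev_weights(R_ss, R_nn, eps=1e-6):
    """GEV weights for one frequency bin: R_SS w = lambda R_NN w (sketch)."""
    M = R_ss.shape[0]
    eigvals, eigvecs = eigh(R_ss, R_nn + eps * np.eye(M))  # generalized eigendecomposition
    return eigvecs[:, -1]                                  # eigenvector of the largest lambda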

Installing SpeechBrain

Let's first install SpeechBrain:

%%capture
# Installing SpeechBrain via pip
BRANCH = 'develop'
!python -m pip install git+https://github.com/speechbrain/speechbrain.git@$BRANCH

Preparing the Audio

We will then load the speech signal of a 4-microphone array, obtained by simulating propagation in air. We will also load diffuse noise (coming from all directions) and directive noise (which can be modeled as a point source in space). The goal here is to mix the reverberated speech with noise to generate noisy mixtures, and to test beamforming methods to enhance the speech.

We first download the audio samples to be used:

%%capture
!wget https://www.dropbox.com/s/0h414xocvu9vw96/speech_-0.82918_0.55279_-0.082918.flac
!wget https://www.dropbox.com/s/xlehxo26mnlkvln/noise_diffuse.flac
!wget https://www.dropbox.com/s/4l6iy5zc9bgr7qj/noise_0.70225_-0.70225_0.11704.flac

We now load the audio files:

import matplotlib.pyplot as plt
from speechbrain.dataio.dataio import read_audio

xs_speech = read_audio('speech_-0.82918_0.55279_-0.082918.flac') # [time, channels]
xs_speech = xs_speech.unsqueeze(0) # [batch, time, channels]
xs_noise_diff = read_audio('noise_diffuse.flac') # [time, channels]
xs_noise_diff = xs_noise_diff.unsqueeze(0) # [batch, time, channels]
xs_noise_loc = read_audio('noise_0.70225_-0.70225_0.11704.flac') # [time, channels]
xs_noise_loc =  xs_noise_loc.unsqueeze(0) # [batch, time, channels]
fs = 16000 # sampling rate

plt.figure(1)
plt.title('Clean signal at microphone 1')
plt.plot(xs_speech.squeeze()[:,0])
plt.figure(2)
plt.title('Diffuse noise at microphone 1')
plt.plot(xs_noise_diff.squeeze()[:,0])
plt.figure(3)
plt.title('Directive noise at microphone 1')
plt.plot(xs_noise_loc.squeeze()[:,0])
plt.show()

We can listen to the reverberated speech:

from IPython.display import Audio
Audio(xs_speech.squeeze()[:,0],rate=fs)

We now mix the reverberated speech with the noise to create the noisy multi-channel mixtures:

ss = xs_speech
nn_diff = 0.05 * xs_noise_diff
nn_loc = 0.05 * xs_noise_loc
xs_diffused_noise = ss + nn_diff
xs_localized_noise = ss + nn_loc

We can have a look at the noisy mixtures:

plt.figure(1)
plt.title('Microphone 1 (speech + diffused noise)')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(2)
plt.title('Microphone 1 (speech + directive noise)')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.show()

We can listen to the noisy mixtures:

from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)

Processing

Steered-Response Power with Phase Transform

The STFT converts the signals to the frequency domain, and Covariance then computes the covariance matrices for each frequency bin. The SrpPhat module returns the direction of arrival. We need to provide the geometry of the microphone array, which in this example is a circular array with four microphones equally spaced, 0.1 m in diameter. The system estimates one DOA per STFT frame. In this example, we use a sound source coming from the direction \(x=-0.82918\), \(y=0.55279\), \(z=-0.082918\). We see from the results that the estimated direction is fairly accurate (small differences occur due to the discretization of the sphere). Also note that, since all microphones lie in the \(xy\)-plane, the system cannot distinguish between the positive and negative \(z\)-axes.

from speechbrain.dataio.dataio import read_audio
from speechbrain.processing.features import STFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import SrpPhat

import torch

mics = torch.zeros((4,3), dtype=torch.float)
mics[0,:] = torch.FloatTensor([-0.05, -0.05, +0.00])
mics[1,:] = torch.FloatTensor([-0.05, +0.05, +0.00])
mics[2,:] = torch.FloatTensor([+0.05, +0.05, +0.00])
mics[3,:] = torch.FloatTensor([+0.05, -0.05, +0.00])

stft = STFT(sample_rate=fs)
cov = Covariance()
srpphat = SrpPhat(mics=mics)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
doas = srpphat(XXs)

print(doas)

Multiple Signal Classification

The STFT converts the signals to the frequency domain, and Covariance then computes the covariance matrices for each frequency bin. The Music module returns the direction of arrival. We need to provide the geometry of the microphone array, which in this example is a circular array with four microphones equally spaced, 0.1 m in diameter. The system estimates one DOA per STFT frame. In this example, we use a sound source coming from the direction \(x=-0.82918\), \(y=0.55279\), \(z=-0.082918\). We see from the results that the estimated direction is fairly accurate (small differences occur due to the discretization of the sphere). Also note that, since all microphones lie in the \(xy\)-plane, the system cannot distinguish between the positive and negative \(z\)-axes.

from speechbrain.dataio.dataio import read_audio
from speechbrain.processing.features import STFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import Music

import torch

mics = torch.zeros((4,3), dtype=torch.float)
mics[0,:] = torch.FloatTensor([-0.05, -0.05, +0.00])
mics[1,:] = torch.FloatTensor([-0.05, +0.05, +0.00])
mics[2,:] = torch.FloatTensor([+0.05, +0.05, +0.00])
mics[3,:] = torch.FloatTensor([+0.05, -0.05, +0.00])

stft = STFT(sample_rate=fs)
cov = Covariance()
music = Music(mics=mics)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
doas = music(XXs)

print(doas)

Delay-and-Sum Beamforming

The STFT converts the signals to the frequency domain, and Covariance then computes the covariance matrices for each frequency bin. The GCC-PHAT module estimates the time difference of arrival (TDOA) between microphones, and DelaySum uses these TDOAs to perform delay-and-sum beamforming.

Speech Corrupted by Diffuse Noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import DelaySum

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
delaysum = DelaySum()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
tdoas = gccphat(XXs)
Ys_ds = delaysum(Xs, tdoas)
ys_ds = istft(Ys_ds)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_ds[0,:,:,0,0]**2 + Ys_ds[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_ds.squeeze())
plt.show()

We can also listen to the beamformed signal and compare it with the noisy one.

from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_ds.squeeze(),rate=fs)

Speech Corrupted by Directive Noise

This is trickier when we have directive noise, as GCC-PHAT can capture the TDOAs of the noise source instead. For now, we will simply assume that we know the TDOAs, but an ideal binary mask could be applied to discriminate between speech TDOAs and noise TDOAs.

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import DelaySum

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
delaysum = DelaySum()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
tdoas = gccphat(XXs)

Xs = stft(xs_localized_noise)
XXs = cov(Xs)
Ys_ds = delaysum(Xs, tdoas)
ys_ds = istft(Ys_ds)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_ds[0,:,:,0,0]**2 + Ys_ds[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_ds.squeeze())
plt.show()

We can also listen to the beamformed signal and compare it with the noisy one.

from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_ds.squeeze(),rate=fs)

Minimum Variance Distortionless Response

The STFT converts the signals to the frequency domain, and Covariance then computes the covariance matrices for each frequency bin. The GCC-PHAT module estimates the time difference of arrival (TDOA) between microphones, and Mvdr uses these TDOAs to perform MVDR beamforming.

Speech Corrupted by Diffuse Noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import Mvdr

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
mvdr = Mvdr()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
Nn = stft(nn_diff)
NNs = cov(Nn)
XXs = cov(Xs)
tdoas = gccphat(XXs)
Ys_mvdr = mvdr(Xs, NNs, tdoas)
ys_mvdr = istft(Ys_mvdr)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_mvdr[0,:,:,0,0]**2 + Ys_mvdr[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_mvdr.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_mvdr.squeeze(),rate=fs)

Speech Corrupted by Directive Noise

Once again, this is trickier when we have directive noise, as GCC-PHAT can capture the TDOAs of the noise source instead. For now, we simply assume that we know the TDOAs, but an ideal binary mask could be applied to discriminate between speech TDOAs and noise TDOAs.

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import Mvdr

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
mvdr = Mvdr()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
Nn = stft(nn_loc)
XXs = cov(Xs)
NNs = cov(Nn)
tdoas = gccphat(XXs)

Xs = stft(xs_localized_noise)
Ys_mvdr = mvdr(Xs, NNs, tdoas)
ys_mvdr = istft(Ys_mvdr)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_mvdr[0,:,:,0,0]**2 + Ys_mvdr[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_mvdr.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_mvdr.squeeze(),rate=fs)

Generalized Eigenvalue Beamforming

The STFT converts the signals to the frequency domain, and Covariance then computes the covariance matrices for each frequency bin. We assume that the covariance matrices for speech and noise can be computed separately and used for beamforming. These covariance matrices could be estimated using an ideal binary mask.

Speech Corrupted by Diffuse Noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import Gev

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gev = Gev()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
Ss = stft(ss)
Nn = stft(nn_diff)
SSs = cov(Ss)
NNs = cov(Nn)
Ys_gev = gev(Xs, SSs, NNs)
ys_gev = istft(Ys_gev)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_gev[0,:,:,0,0]**2 + Ys_gev[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_gev.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_gev.squeeze(),rate=fs)

Speech Corrupted by Directive Noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import Gev

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gev = Gev()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_localized_noise)
Ss = stft(ss)
Nn = stft(nn_loc)
SSs = cov(Ss)
NNs = cov(Nn)
Ys_gev = gev(Xs, SSs, NNs)
ys_gev = istft(Ys_gev)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_gev[0,:,:,0,0]**2 + Ys_gev[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_gev.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_gev.squeeze(),rate=fs)

Citing SpeechBrain

If you use SpeechBrain in your research or business, please cite it with the following BibTeX entries:

@misc{speechbrainV1,
  title={Open-Source Conversational AI with {SpeechBrain} 1.0},
  author={Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Gaelle Laperriere and Mickael Rouvier and Renato De Mori and Yannick Esteve},
  year={2024},
  eprint={2407.00463},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.00463},
}
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}