Deep Tensor Factorizations

Autoencoders that, after fitting, correspond to a tensor factorization.

Deep Tensor Factorization for Spatially-Aware Scene Decomposition

We propose a completely unsupervised method for understanding audio scenes observed with random microphone arrangements by decomposing each scene into its constituent sources and their relative presence in each microphone. To this end, we formulate a neural network architecture that can be interpreted as a nonnegative tensor factorization of a multi-channel audio recording. By clustering the learned network parameters corresponding to channel content, we can learn each source's individual spectral dictionary and its activation pattern over time. Our method lets us leverage deep learning advances like end-to-end training, and it also supports stochastic minibatch training, so we can feasibly decompose realistic audio scenes that are intractable for standard methods. This neural network architecture is easily extensible to other kinds of tensor factorizations.
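As a rough illustration of this interpretation (not the paper's actual architecture), one can fit a three-way nonnegative factorization V[c,f,t] ≈ Σ_k Q[c,k] W[f,k] H[k,t] by plain gradient descent, with a softplus reparameterization keeping every factor nonnegative. The factor names Q (per-channel gains), W (spectral dictionary), and H (temporal activations), the softplus trick, the per-factor gradient scaling, and the toy data are all our assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
C, F, T, K = 4, 16, 60, 3   # channels, frequency bins, time frames, components

# Toy nonnegative data tensor built from known ground-truth factors
Qt, Wt, Ht = rng.random((C, K)), rng.random((F, K)), rng.random((K, T))
V = np.einsum('ck,fk,kt->cft', Qt, Wt, Ht)

softplus = lambda x: np.log1p(np.exp(x))      # keeps factors nonnegative
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))  # derivative of softplus

# Unconstrained parameters; the "decoder" reconstructs V from softplus(p*)
pQ = rng.standard_normal((C, K)) * 0.1
pW = rng.standard_normal((F, K)) * 0.1
pH = rng.standard_normal((K, T)) * 0.1

lr = 0.2
for _ in range(3000):
    Q, W, H = softplus(pQ), softplus(pW), softplus(pH)
    R = np.einsum('ck,fk,kt->cft', Q, W, H) - V   # reconstruction residual
    # Squared-error gradients, averaged over the summed-out axes so that
    # all three factors see comparable step sizes
    pQ -= lr * np.einsum('cft,fk,kt->ck', R, W, H) / (F * T) * sigmoid(pQ)
    pW -= lr * np.einsum('cft,ck,kt->fk', R, Q, H) / (C * T) * sigmoid(pW)
    pH -= lr * np.einsum('cft,ck,fk->kt', R, Q, W) / (C * F) * sigmoid(pH)

Q, W, H = softplus(pQ), softplus(pW), softplus(pH)
rel_err = np.linalg.norm(np.einsum('ck,fk,kt->cft', Q, W, H) - V) / np.linalg.norm(V)
```

Because the loss is just a reconstruction error of a (de)composition network, the same setup drops directly into minibatch training over time frames, which is what makes large scenes tractable.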

Work by Jonah Casebeer, Michael Colomb, and Paris Smaragdis. Full paper PDF here.

Below are samples from our three source-separation experiments. All simulations take place in a reverberant room. For each scene, we first decompose it using our deep tensor factorization and then cluster the resulting components with k-means. The results below are entirely unsupervised separations, obtained using either the k-means cluster assignments or the cluster centers.
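The assignment-based step can be sketched in a few lines: components are clustered by their per-channel gain vectors (each component's "spatial signature"), and each source estimate sums its cluster's rank-one spectrogram terms. This is a hedged sketch with assumed factor names (Q, W, H) and a tiny from-scratch k-means standing in for any standard implementation; the center-based variant (not shown) instead uses the cluster centers themselves.

```python
import numpy as np

rng = np.random.default_rng(1)
C, F, T, K, S = 4, 16, 60, 6, 3   # channels, freq bins, frames, components, sources

# Stand-ins for factors learned by the deep tensor factorization
Q = rng.random((C, K))   # per-channel component gains (spatial signatures)
W = rng.random((F, K))   # spectral dictionary
H = rng.random((K, T))   # temporal activations

# Cluster components by their normalized spatial signatures (one row per component)
X = (Q / np.linalg.norm(Q, axis=0, keepdims=True)).T

def kmeans(X, n_clusters, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)
        for j in range(n_clusters):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

labels, centers = kmeans(X, S)

# Assignment-based separation: sum each cluster's rank-one components
sources = [W[:, labels == j] @ H[labels == j] for j in range(S)]
```

Since the cluster assignments partition the components, the per-source spectrograms sum back to the full reconstruction W @ H.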

Experiment One: Three Point Sources

Method               Source 1         Source 2         Source 3
Assignment Based     (audio sample)   (audio sample)   (audio sample)
Center Based         (audio sample)   (audio sample)   (audio sample)
Individual Sources   (audio sample)   (audio sample)   (audio sample)

Experiment Two: Two Point Sources, One Ambient

Method               Source 1         Source 2         Source 3
Assignment Based     (audio sample)   (audio sample)   (audio sample)
Center Based         (audio sample)   (audio sample)   (audio sample)
Individual Sources   (audio sample)   (audio sample)   (audio sample)

Experiment Three: Three Point Sources with a Duplicate

Method               Source 1         Source 2
Assignment Based     (audio sample)   (audio sample)
Center Based         (audio sample)   (audio sample)
Individual Sources   (audio sample)   (audio sample)