Introduction
In the early ’80s, the widespread use of the sampler revolutionized the way music is produced: in addition to hiring professional musicians, music producers have since been able to compose with sampled sounds. This has brought much flexibility to both drum and melody production, thanks to the various offline editing possibilities offered by such systems, like pitch shifting, time stretching, looping and others.
Nowadays, many producers still rely on samplers for drum production, mainly due to the ever-increasing number of sample libraries available for download. This has helped music production become increasingly accessible, even to newcomers with little or no knowledge of sound design. However, relying on samples also has some drawbacks. Indeed, producers now have to browse their vast collection of samples in order to find the "right sound". This process is often inefficient and time-consuming. Kick drum datasets are usually unorganized with, for instance, samples gathered in a single folder, regardless of whether they sound "bright" or "dark". As a result, many producers rely on only a limited selection of their favourite sounds, which can hamper creativity.
Hence, a method allowing a comfortable and rich exploration of sounds becomes an essential requirement in music production, especially for non-expert users. Numerous research efforts have been made in the domain of user experience in order to provide interfaces that enhance the fluidity of human-machine interactions. As an example, synthesizer interfaces now often feature "macro" controls that allow users to quickly tune a sound to their will.
Another approach to tackle this problem is the use of Music Information Retrieval (MIR) to deal more efficiently with vast libraries of audio samples. MIR is an approach based on feature extraction: by computing a large set of audio features [Peeters2004] over a dataset, one can define a perceptual similarity measure between sounds. Indeed, audio features are related to perceptual characteristics, and a distance between combinations of features is more relevant than a squared error between two waveforms. Combining MIR with machine learning techniques appears natural in order to organize such audio libraries by allowing, for example, clustering or classification based on audio content. We can cite software such as AudioHelper’s Samplism, Sononym and Algonaut’s Atlas.
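To make this concrete, the following sketch computes a toy feature-space distance from two illustrative descriptors (spectral centroid for "brightness", RMS for "loudness"); these stand in for the much larger feature set of [Peeters2004], and all parameter values are assumptions:

```python
import numpy as np

def features(x, sr=22050, n_fft=1024):
    # Two illustrative descriptors: spectral centroid ("brightness") and RMS ("loudness")
    hop = n_fft // 2
    frames = np.array([x[i:i + n_fft] for i in range(0, len(x) - n_fft + 1, hop)])
    mags = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    spec = mags.mean(axis=0)                       # average magnitude spectrum
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([centroid, rms])

def distance(a, b):
    # Euclidean distance in feature space, a stand-in for a perceptual similarity
    return np.linalg.norm(features(a) - features(b))
```

Under such a measure, two low-frequency ("dark") tones land closer to each other than to a high-frequency ("bright") one, which is the behaviour a sample browser would exploit.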
While such software only allows one to organize an existing database, we propose to use artificial intelligence to generate sounds intuitively, thus also tackling the problem of sound exploration. Only very recently have machine learning models been developed specifically for the problem of audio generation. These generative models perform what we could define as synthesis by learning: they rely on generative modelling, which allows performing audio synthesis by learning while tackling the question of intuitive parameter control [Esling, Bitton, and others2018, Engel et al.2017]. Generative models are a flourishing class of machine learning approaches whose purpose is to generate novel data based on the observation of existing examples [Bishop and Mitchell2014]. The learning process consists of modelling the underlying (and unknown) probability distribution of the data based on samples.
Once the model is trained, a user can generate new samples at will. However, for the user to be active during the synthesis process rather than passively browsing the outputs of the system, we consider it crucial that the system provide intuitive controls. To this end, we need a model that extracts a compact high-level representation of the data. Then, by providing these simple high-level controls to a user, the synthesis process can be guided by perceptual characteristics. A user would simply explore a continuous and well-organized parameter space to synthesize an infinite variety of sounds.
Our proposal
In this work, we describe a system that creates a controllable audio synthesis space, which we use to synthesize novel sounds in an intuitive manner. This system can be split into three components (Fig. 1):

A Conditional Wasserstein AutoEncoder (CWAE), which generates Mel-scaled spectrograms.

An extension of the Multi-Head Convolutional Neural Network (MCNN), which reconstructs signals from Mel-scaled spectrograms.

A Max4Live plugin allowing users to interact with the model in a music production environment.
In the remainder of this document, we first provide a state of the art on Wasserstein autoencoders and MCNN. Then we describe our model and the data we used to train it. We discuss reconstruction and generation results. Finally, we showcase the associated plugin and explain how it could change the way drum tracks are produced.
Related work
Generative models on audio waveforms
A few systems based on generative models have recently been proposed to address the learning of latent spaces for audio data. The Wavenet autoencoder [Engel et al.2017] combines Wavenet [Oord et al.2016] with autoencoders and uses dilated convolutions to learn waveforms of musical instruments. By conditioning the generation on pitch, such a system is capable of synthesizing musical notes with various timbres. WaveGAN [Donahue, McAuley, and Puckette2018] uses Generative Adversarial Networks (GANs) to generate drum sounds or bird vocalizations by learning directly on waveforms. However, the GAN approach provides little control over the generation because it remains difficult to structure their latent spaces.
Generative models on spectral representations
Other works have focused on generating sound as spectrograms, a complex time-frequency representation of sound. This visual representation of sound intensity through time allows us to treat sounds like images, but it has to be reverted back to the signal domain to produce sound. [Esling, Bitton, and others2018] use VAEs to learn a generative space where instrumental sounds are organized with respect to their timbre. However, because the model is trained on spectral frames, it lacks temporal modeling. This hampers the capacity of the model to let users easily generate evolving structured temporal sequences such as drum sounds. The approach introduced in [Donahue, McAuley, and Puckette2018] takes these temporal dependencies into account by proposing SpecGAN, a generative model that uses GANs to generate spectrograms as if they were images.
Spectrogram inversion
Working with neural networks often forces us to discard the phase information of a spectrogram. Therefore, one cannot use the inverse Fourier transform to retrieve the signal it originates from. With a classic STFT, a common workaround is to use the Griffin-Lim Algorithm (GLA) [Griffin and Lim1984], which estimates the missing phase information. The Multi-head Convolutional Neural Network (MCNN) [Arık, Jun, and Diamos2019] also inverts STFTs, using neural networks. However, the STFT is not the best transform for our purpose. Indeed, Mel-scaled spectrograms are known to be more suitable for training convolutional neural networks [Huzaifah2017]. Mel-scaled spectrograms are computed with filters based on the Mel scale, a perceptual frequency scale that tries to mimic the human perception of pitches.
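For reference, GLA can be sketched in a few lines: it alternates between the signal and spectrogram domains, keeping the target magnitude and updating only the phase. The STFT parameters below are illustrative, not those used later in this paper:

```python
import numpy as np

N_FFT, HOP = 512, 128  # illustrative analysis parameters

def stft(x):
    win = np.hanning(N_FFT)
    frames = np.array([x[i:i + N_FFT] * win
                       for i in range(0, len(x) - N_FFT + 1, HOP)])
    return np.fft.rfft(frames, axis=1)

def istft(S):
    win = np.hanning(N_FFT)
    out = np.zeros((len(S) - 1) * HOP + N_FFT)
    norm = np.zeros_like(out)
    for i, frame in enumerate(np.fft.irfft(S, n=N_FFT, axis=1)):
        out[i * HOP:i * HOP + N_FFT] += frame * win   # weighted overlap-add
        norm[i * HOP:i * HOP + N_FFT] += win ** 2
    return out / np.maximum(norm, 1e-9)

def griffin_lim(mag, n_iter=60, seed=0):
    # Start from random phase, then alternate projections between both domains
    rng = np.random.default_rng(seed)
    S = mag * np.exp(1j * rng.uniform(0, 2 * np.pi, mag.shape))
    for _ in range(n_iter):
        x = istft(S)
        S = mag * np.exp(1j * np.angle(stft(x)))  # keep magnitude, update phase
    return istft(S)
```

After a few dozen iterations on a simple tone, the magnitude spectrogram of the reconstruction closely matches the target, even though the phase was discarded.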
Despite being better suited to training, using Mel-scaled spectrograms introduces a problem: they are not invertible, so GLA cannot be used. Therefore, some deep-learning-based models have been developed in order to estimate signals from non-invertible spectrograms. In [Prenger, Valle, and Catanzaro2018], the authors present WaveGlow, a flow-based network capable of generating high-quality speech from Mel spectrograms. In [Huang et al.2018], the authors use a conditioned Wavenet to estimate signals from Constant-Q Transforms, another non-invertible transform.
Proposed model
Our model is composed of two components: a generative model on spectrograms, whose role is to learn a latent space from our dataset and to generate meaningful spectrograms from this space, and a spectrogram inversion model, whose role is to reconstruct waveforms from our generated spectrograms.
Preliminaries on variational autoencoders
To formalize our problem, we consider a set of data $x \in \mathcal{X}$ lying in a high-dimensional space $\mathcal{X}$. We assume that these examples follow an underlying probability distribution $p(x)$ that is unknown. Our goal is to train a generative model able to sample from this distribution.
We consider a parametrized latent variable model $p_\theta(x|z)$ by introducing latent variables $z$ lying in a space $\mathcal{Z}$ of smaller dimensionality than $\mathcal{X}$ and distributed according to the prior $p(z)$. We are interested in finding the parameter $\theta$ that maximizes the likelihood $p_\theta(x)$ of the dataset. However, for usual choices of the conditional probability distributions $p_\theta(x|z)$ (typically a deep neural network), this quantity is intractable. The variational autoencoder (VAE) [Kingma and Welling2013] is a model that introduces a variational approximation $q_\phi(z|x)$ to the intractable posterior (the approximate posterior $q_\phi(z|x)$ is often chosen as a parametrized family of diagonal Gaussian distributions). The network $q_\phi(z|x)$ is called the encoder, whose aim is to produce latent codes $z$ given $x$, while the network $p_\theta(x|z)$ is called the decoder, which tries to reconstruct $x$ given a latent code $z$. The introduction of the variational approximation of the posterior allows us to obtain the following lower bound (called ELBO, for Evidence Lower BOund) on the intractable likelihood:
$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - D_{KL}\left(q_\phi(z|x) \,\|\, p(z)\right)$ (1)
where $D_{KL}$ denotes the Kullback-Leibler divergence [Cover and Thomas2012].
The first term is the likelihood of the data generated from latent variables coming from the approximate posterior. Maximizing this quantity can be seen as minimizing a reconstruction error.

The second term is the distance between $q_\phi(z|x)$ and the prior $p(z)$, and can be interpreted as a regularization term.
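Both terms have simple closed forms for the usual choices (diagonal Gaussian posterior, standard normal prior, Gaussian likelihood); the following generic sketch makes no assumption about our specific architecture:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ): the regularization term
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def elbo(x, x_recon, mu, logvar):
    # Gaussian log-likelihood up to a constant: a negated squared reconstruction error
    rec = -np.sum((x - x_recon) ** 2, axis=-1)
    # ELBO = reconstruction term minus the KL regularizer
    return rec - kl_diag_gaussian(mu, logvar)
```

The KL term vanishes exactly when the posterior equals the standard normal prior, and the ELBO decreases as the reconstruction degrades.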
In [Sohn, Lee, and Yan2015], the authors add a conditioning mechanism to the original VAE, which consists in conditioning the encoder, the decoder and the prior on some metadata $c$ (in most cases, the prior does not depend on $c$).
However, a known problem of VAEs is that they tend to generate blurry samples and reconstructions [Chen et al.2016]. This becomes a major hindrance in the context of spectrogram reconstruction. Fortunately, this problem can be overcome by using Wasserstein AutoEncoders (WAEs) instead of VAEs. The main difference consists in replacing the $D_{KL}$ term in (1) by another divergence between the prior $p(z)$ and the aggregated posterior $q_\phi(z)$. In particular, the MMD-WAE considers a Maximum Mean Discrepancy (MMD) [Berlinet and ThomasAgnan2011] distance defined as follows:
$\text{MMD}_k(p, q) = \left\| \int k(z, \cdot)\, dp(z) - \int k(z, \cdot)\, dq(z) \right\|_{\mathcal{H}_k}$ (2)
where $k$ is a positive-definite reproducing kernel and $\mathcal{H}_k$ the associated Reproducing Kernel Hilbert Space (RKHS) [Berlinet and ThomasAgnan2011]. MMD is known to perform well when matching high-dimensional standard normal distributions [Tolstikhin et al.2017, Gretton et al.2012]. Since the MMD distance is not available in closed form, we use the following unbiased U-statistic estimator [Gretton et al.2012] for a batch size $n$ and a kernel $k$:
$\widehat{\text{MMD}}_k = \frac{1}{n(n-1)} \sum_{\ell \neq j} k(z_\ell, z_j) + \frac{1}{n(n-1)} \sum_{\ell \neq j} k(\tilde{z}_\ell, \tilde{z}_j) - \frac{2}{n^2} \sum_{\ell, j} k(z_\ell, \tilde{z}_j)$ (3)
where $z_\ell \sim q_\phi(z|x_\ell)$ are latent codes produced by the encoder and $\tilde{z}_\ell \sim p(z)$ are samples from the prior.
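A direct NumPy transcription of this estimator might look as follows; the inverse multiquadratics kernel and its bandwidth C are illustrative choices:

```python
import numpy as np

def imq_kernel(a, b, C=2.0):
    # Inverse multiquadratics kernel k(x, y) = C / (C + ||x - y||^2)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return C / (C + d2)

def mmd_unbiased(z, z_prior, kernel=imq_kernel):
    # Unbiased U-statistic estimate of MMD^2 between encoder and prior samples
    n = len(z)
    Kzz, Kpp, Kzp = kernel(z, z), kernel(z_prior, z_prior), kernel(z, z_prior)
    term_q = (Kzz.sum() - np.trace(Kzz)) / (n * (n - 1))  # off-diagonal mean
    term_p = (Kpp.sum() - np.trace(Kpp)) / (n * (n - 1))
    return term_q + term_p - 2.0 * Kzp.mean()
```

The estimate stays near zero when both batches come from the same distribution and grows when they differ, which is what makes it usable as a training penalty.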
The Conditional WAE
We now introduce a Conditional WAE (CWAE) architecture so that we can generate spectrograms depending on additional metadata such as the category of the original sound (e.g. kick drum, snare, clap, etc.).
Our encoder is defined as a Convolutional Neural Network (CNN) with several layers of processing. Each layer is a 2-dimensional convolution followed by conditional batch normalization [Perez et al.2017, Perez et al.2018] and a ReLU activation. This CNN block is followed by Fully-Connected (FC) layers, in order to map the convolution layers' activations to a vector whose size is that of the latent space. The decoder network is defined as a mirror of the encoder, so that they have similar capacity. Therefore, we move the FC block before the convolutional one and change the convolutions to convolution-transpose operations. We also slightly adjust the convolution parameters so that the output size matches that of the input. Our convolutional blocks are made of 3 layers each, with a kernel size of (11,5), a stride of (3,2) and a padding of (5,2). Our FC blocks are made of 3 layers with sizes 1024, 512 and 64. Therefore, our latent space is of size 64.
In the case of WAEs, the MMD is computed between the prior $p(z)$ and the aggregated posterior $q_\phi(z)$. As a result, the latent spaces obtained with WAEs are often close to Gaussian, which makes them easy to sample. Here, the conditioning mechanism implies that we use separate Gaussian priors for each class, in order to be able to sample each class as a Gaussian. Indeed, computing an MMD loss over all classes would force the global aggregated posterior to match the Gaussian prior, and thus restrict the freedom of latent positions. Therefore, we compute the per-class MMD to backpropagate on.
We formalize this by decomposing our dataset into subsets $\mathcal{D}_c$, $c \in \{1, \dots, C\}$, each containing all elements from a single class. We denote by $q_c$ the aggregated posterior over $\mathcal{D}_c$ and by $p_c$ the Gaussian prior of class $c$. Thus, our regularizer is computed as follows:
$\mathcal{L}_{reg} = \sum_{c=1}^{C} \widehat{\text{MMD}}_k(q_c, p_c)$ (4)
Finally, our loss function is computed as:
$\mathcal{L} = \mathcal{L}_{rec} + \lambda\, \mathcal{L}_{reg}$ (5)
where $\lambda$ is a weighting constant and $k$ is the inverse multiquadratics kernel, as used for CelebA in [Tolstikhin et al.2017].
MCNN inversion
To invert our Melspectrograms back to the signal domain, we use a modified version of the original MCNN. In this section, we first review the original MCNN before detailing how we adapted it to handle Melspectrograms of drum samples.
MCNN is composed of multiple heads that process STFTs (Fig. 2). These heads are composed of processing layers combining 1D transposed convolutions and Exponential Linear Units (ELUs). The convolution layers are defined by a set of parameters: the filter width, the stride and the number of output channels. We multiply the output of every head by a trainable scalar to weight these outputs, and we compute the final waveform as their sum. Lastly, we scale the waveform with a non-linearity (scaled softsign). The model is trained to estimate a waveform whose spectrogram matches the original one. For more implementation details, we refer the interested reader to the original article.
We chose this model for three main reasons. First, it performs a fast (300x real-time) and precise estimation of a signal given a spectrogram. Second, it can deal with non-invertible transforms derived from the STFT, such as the Mel-STFT. Finally, its feed-forward architecture allows it to take advantage of GPUs, unlike iterative or autoregressive models.
In our implementation, we kept most of the parameters suggested in [Arık, Jun, and Diamos2019]. We use an MCNN with 8 heads of several layers each. However, because we have padded our signals with zeros to standardize their length, two problems appear. First, we observed that the part of the spectrogram corresponding to the padding (made of zeros) was not well reconstructed if the convolution layers feature biases. Without biases, zeros stay zeros throughout the kernel multiplications. Therefore, we removed all biases. Second, we observed a leakage phenomenon: because the convolution filters are quite large (length 13), the reconstructed waveform had more non-zero values than the original one, so the loss is lower-bounded by this effect. To tackle this problem, we apply a mask to the final output of our model, aiming at correcting this leakage.
The output of each head $h$ is a couple of two vectors $(y_h, m_h)$: a waveform estimate and a mask estimate. We estimate the mask as follows:
$\hat{m} = \sigma\left(\sum_{h} m_h\right)$ (6), where $\sigma$ denotes the sigmoid function.
The final output waveform is computed as:
$\hat{y} = g\left(\sum_{h} w_h\, y_h\right)$ (7)
$\hat{x} = \hat{m} \odot \hat{y}$ (8)
where $g$ is the scaled softsign non-linearity, $w_h$ are the trainable head weights and $\odot$ is the element-wise product.
To train the mask, we use supervised training and introduce a loss term between the original mask $m$ and the estimated one $\hat{m}$, which we name the mask loss:
$\mathcal{L}_{mask} = \left\| m - \hat{m} \right\|_2^2$ (9)
At generation time, the mask is binarized. This solution has worked very well to cut the tail artifacts introduced by the convolutions.
A second change is that we now train the MCNN on Mel-scaled spectrograms rather than STFTs, whereas the original losses were computed on STFTs. To turn an STFT into a Mel-scaled spectrogram, we compute a filterbank matrix $M$ that combines the 2048 FFT bins into 512 Mel-frequency bins. We then multiply this matrix with the STFT to retrieve a Mel-scaled spectrogram:
$\text{Mel}(x) = M \cdot |\text{STFT}(x)|$ (10)
Therefore, we can simply convert all STFTs to Melscaled spectrograms before the loss computation. This does not affect the training procedure: backpropagation remains possible since this conversion operation is differentiable.
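The filterbank matrix itself can be sketched as follows (a simplified triangular Mel filterbank without area normalization; production implementations such as librosa's differ in details):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=22050, n_fft=2048, n_mels=512):
    # Triangular filters spaced evenly on the Mel scale (simplified sketch)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:  # rising slope
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:  # falling slope
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    return fb

# Mel spectrogram = filterbank matrix times magnitude STFT (Eq. 10)
mel_spec = mel_filterbank() @ np.abs(np.random.randn(1025, 86))
```

The matrix multiplication is differentiable, which is what makes it safe to insert before the loss computation.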
In addition, we modified the loss function. When training the original model on our data, we noticed artifacts that we identified as 'checkerboard artifacts'. These are known to appear when using transposed convolutions [Odena, Dumoulin, and Olah2016]. We tried known workarounds such as NN-Resize Convolutions [Aitken et al.2017], but they did not yield better results. We empirically found that, in our particular case, removing the phase-related loss terms helped reduce these artifacts. Therefore, we removed from [Arık, Jun, and Diamos2019] the instantaneous frequency loss and the weighted phase loss terms while keeping the Spectral Convergence (SC) term:
$\mathcal{L}_{SC}(x, \hat{x}) = \dfrac{\left\| \text{Mel}(x) - \text{Mel}(\hat{x}) \right\|_F}{\left\| \text{Mel}(x) \right\|_F}$ (11)
where $\|\cdot\|_F$ is the Frobenius norm over time and frequency, and the log-scale Mel-magnitude loss ($\mathcal{L}_{logMel}$):
$\mathcal{L}_{logMel}(x, \hat{x}) = \left\| \log(\text{Mel}(x) + \epsilon) - \log(\text{Mel}(\hat{x}) + \epsilon) \right\|_1$ (12)
where $\|\cdot\|_1$ is the $L_1$ norm and $\epsilon$ is a small constant.
Finally, our global loss term is:
$\mathcal{L} = \alpha\, \mathcal{L}_{SC} + \beta\, \mathcal{L}_{logMel} + \mathcal{L}_{mask}$ (13)
where $\alpha$ and $\beta$ are constants used for weighting the loss terms. In our experiments, we found a setting of these constants that works well in practice.
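The two magnitude losses can be transcribed directly; the mask loss shown here is a plain mean-squared error and the default weights are placeholders, not the values used in our experiments:

```python
import numpy as np

def spectral_convergence(mel, mel_hat):
    # Frobenius-norm relative error between magnitude spectrograms (Eq. 11)
    return np.linalg.norm(mel - mel_hat) / (np.linalg.norm(mel) + 1e-9)

def log_mel_magnitude_loss(mel, mel_hat, eps=1e-5):
    # L1 distance between log-magnitudes (Eq. 12)
    return np.abs(np.log(mel + eps) - np.log(mel_hat + eps)).sum()

def total_loss(mel, mel_hat, mask, mask_hat, alpha=1.0, beta=1.0):
    # alpha and beta are placeholder weights, not the paper's values
    mask_loss = np.mean((mask - mask_hat) ** 2)
    return (alpha * spectral_convergence(mel, mel_hat)
            + beta * log_mel_magnitude_loss(mel, mel_hat)
            + mask_loss)
```

Both magnitude terms vanish for a perfect reconstruction, so the loss is zero exactly when the estimated spectrogram and mask match the targets.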
Experiments
Dataset
We built a dataset of drum samples coming from various sample packs that we bought (Vengeance sample packs, among others). Overall, we collected more than 40,000 samples across 11 drum categories. All sounds are WAV audio files, PCM-coded in 16 bits and sampled at 22050 Hz. Sounds longer than 1 second were removed in order to obtain a homogeneous set of audio samples.
After this preprocessing, the final dataset contains 11 balanced categories (kicks, claps, snares, open and closed hi-hats, tambourines, congas, bongos, shakers, snaps and toms) with 3000 sounds each, for a total of 33000 sounds. All sounds in the dataset last between 0.1 and 1 second (mean of 0.46 second). In order to validate our models, we perform a class-balanced split between 80% training and 20% validation sets. All the results we present are computed on this validation set to ensure generalization.
As mentioned in previous sections, we compute the Mel-scaled spectrograms of these sounds. To do so, we first pad all waveforms with zeros to ensure a constant size across the whole dataset: all audio files are 22015 samples long. We also normalize them so that the maximum absolute sample value is 1. Then, we compute STFTs for all sounds with a Hann window of length 1024, a hop size of 256 and an FFT size of 2048. To turn the STFTs into Mel-scaled spectrograms, we multiply them with the filterbank matrix mentioned earlier (Eq. 10).
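The preprocessing pipeline described above can be summarized as follows (the resulting frame count of 82 follows from the chosen lengths; windowing details are a minimal sketch, not our exact implementation):

```python
import numpy as np

def preprocess(waveform, target_len=22015, n_fft=2048, win_len=1024, hop=256):
    # Zero-pad to a fixed length and peak-normalize
    x = np.zeros(target_len)
    x[:min(len(waveform), target_len)] = waveform[:target_len]
    x = x / (np.max(np.abs(x)) + 1e-9)
    # Hann-windowed STFT; the 1024-sample frame is zero-padded to the 2048 FFT size
    w = np.hanning(win_len)
    frames = np.array([x[i:i + win_len] * w
                       for i in range(0, target_len - win_len + 1, hop)])
    stft = np.fft.rfft(frames, n=n_fft, axis=1)
    return np.abs(stft).T  # (1025 frequency bins, 82 frames)
```

The magnitude output would then be multiplied by the Mel filterbank of Eq. 10 before entering the model.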
Experimental setup
Before assembling the two parts of our model to create an end-to-end system, we pre-train each network separately.
We train our CWAE with the ADAM optimizer [Kingma and Ba2014]. The learning rate is annealed whenever the validation loss has not decreased for a fixed number of epochs; the annealing factor is set to 0.5 and we wait for 10 epochs. The WAE is trained for 110k iterations. To obtain a good estimation of the MMD between each per-class aggregated posterior and its Gaussian prior, we have to compute it over enough samples. Indeed, [Reddi et al.2015] indicates that the batch size $n$ in Equation 3 should be of the same order of magnitude as the dimensionality of the distributions, here 64. Therefore, at each iteration, we have to ensure that this criterion is satisfied for each class. We thus implemented a balanced sampler so that our data loader yields balanced batches containing 64 samples of each class; this ensures more stability than a standard random batch sampler. In the end, our final batch size equals $11 \times 64 = 704$. When training the CWAE, we perform some data processing steps that allow greater stability and performance. First, we compute the log of our spectrograms to reduce the contrast between high and low amplitudes. Then, we compute the per-element means and variances to scale the log-Mel spectrograms so that each element is distributed as a zero-mean unit-variance Gaussian. Indeed, we noticed that this improves the WAE reconstruction quality.
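The balanced sampler might be sketched as follows; class labels are arbitrary hashables and the exact batch layout is an assumption consistent with the description above:

```python
import random
from collections import defaultdict

def balanced_batches(labels, per_class=64, seed=0):
    # Yield index batches containing exactly `per_class` samples of every class
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)
    for pool in by_class.values():
        rng.shuffle(pool)
    n_batches = min(len(p) for p in by_class.values()) // per_class
    for b in range(n_batches):
        batch = []
        for pool in by_class.values():
            batch += pool[b * per_class:(b + 1) * per_class]
        rng.shuffle(batch)  # mix classes within the batch
        yield batch
```

With 11 classes and 64 samples per class, each yielded batch has the 704 elements mentioned above.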
When training the MCNN, we use the Mel spectrograms without scaling. The learning rate is annealed by a scheduler at a rate of 0.2 with a patience of 50 epochs. The MCNN is trained for around 50k iterations with a batch size of 128.
Reconstruction
We first evaluate the reconstruction abilities of each part of our system, and of the system as a whole. In Figure 3, we compare the original spectrogram with both our CWAE's reconstruction and the spectrogram computed on the final output. In both cases, the reconstruction performed by the CWAE is good, yet a bit blurry. After passing through the MCNN, we can see some stripes corresponding to a checkerboard artifact, which periodically affects the waveform and thus appears as a harmonic artifact on the spectrogram. While it appears prominent on these spectrograms because of the log scale, the sound is often clean, as shown by the kick reconstruction in Figure 4.
More examples, along with audio, are available on the companion website: https://anonymous9123.github.io/icccndm.
Sampling the latent space
In Figure 6, we show generated sounds. We generate them by first sampling a multivariate Gaussian in the latent space. Then, we decode this latent code, conditioned on a given class label, to obtain a spectrogram. Finally, this spectrogram is passed to the MCNN, which estimates the corresponding waveform. Here, both sounds are quite realistic and artifact-free. However, sampling the latent space in this fashion does not always yield good-sounding results, because our latent distributions do not exactly match Gaussian distributions. Also, conditioning on a category does not guarantee generating sounds from this category only. Indeed, some regions of the space will sound close to a hi-hat even if the class label for claps is provided to the CWAE. While this can be seen as a drawback, we think that it does not lower the interest of the system, because it allows synthesizing hybrid sounds. Additional audio examples are available on the companion website.
Creative Applications
Interface
For our model to be usable in a studio production context, we developed a user interface. This interface is a Max4Live patch, which allows direct integration into Ableton Live. In this section, we describe how it works and show some screenshots.
To recall, we pass a (latent code, category) couple to the decoder of our CWAE to produce a spectrogram. Then the MCNN generates a .wav file from this spectrogram. However, the latent code is high-dimensional (64 dimensions), so choosing a value for each parameter would be a long and complex process. To facilitate interactivity, we use a Principal Component Analysis (PCA), whose aim is to find the 3 most influential dimensions, thus reducing the complexity of the fine-tuning process while ensuring a good diversity of sounds. From now on, we denote the PCA dimensions $d_1$, $d_2$ and $d_3$. To generate sound through the interface, we provide several controllers. First, we provide control over the values of $(d_1, d_2, d_3)$: an XY pad controls $d_1$ and $d_2$, and the 'Fine' knob provides control over $d_3$. A selector allows the user to define the range of both the pad and the knob. Then, a menu allows the user to set the category, which comes down to selecting the type of sounds one wants to generate. Finally, the user can use the waveform visualizer to crop out remaining artifacts, for example.
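The PCA-based control mapping might be sketched as follows, using a plain SVD; the function names are ours, not those of the actual plugin:

```python
import numpy as np

def pca_controls(latent_codes, n_components=3):
    # Fit a PCA on a set of latent codes; return the mean and top components
    mean = latent_codes.mean(axis=0)
    _, _, vt = np.linalg.svd(latent_codes - mean, full_matrices=False)
    return mean, vt[:n_components]

def to_latent(mean, components, d):
    # Inverse PCA: map the 3 control values back to a full 64-d latent code
    return mean + np.asarray(d) @ components
```

Setting all three controls to zero returns the mean latent code, i.e. a "typical" sound of the dataset, which makes the origin of the pad a sensible default.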
Generation Process
Every time a parameter value changes, a new sound is generated as follows. A Python server listening on a UDP port contains the model and is in charge of all the computation. When the user modifies the value of a dimension, the Max client sends a message via UDP. This message contains the values of $d_1$, $d_2$, $d_3$ and the category of the sound. When the server receives the message, it creates the associated latent code by computing the inverse PCA of $(d_1, d_2, d_3)$ and concatenates it with the conditioning vector. Then the server passes this code to the CWAE decoder, which feeds a spectrogram to the MCNN. The obtained waveform is then exported to a WAV file, and its location is returned to the Max plugin. Finally, our plugin loads its buffer with the content of this file and displays it on the visualizer.
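A minimal loopback version of this client/server exchange is sketched below; the JSON message format and field names are hypothetical, and the decoder is replaced by a stub:

```python
import json
import socket

def handle(message, decode_fn):
    # Parse the client's message (hypothetical JSON format) and call the decoder
    params = json.loads(message)
    d = [params["d1"], params["d2"], params["d3"]]
    return decode_fn(d, params["category"])

# Minimal loopback round trip standing in for the Max client / Python server pair
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # bind to any free port
server.settimeout(5.0)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
msg = {"d1": 0.1, "d2": -0.3, "d3": 0.0, "category": "kick"}
client.sendto(json.dumps(msg).encode(), ("127.0.0.1", port))

data, _ = server.recvfrom(4096)
result = handle(data.decode(), lambda d, c: (d, c))  # stub for CWAE + MCNN
client.close()
server.close()
```

In the real plugin, the `decode_fn` stub would run the inverse PCA, the decoder and the MCNN, then reply with the path of the exported WAV file.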
Our system can generate sounds with very low latency on CPU (50 ms delay between a parameter change and the sound, on a 2.6 GHz Intel Core i7). Once the sound is in the buffer, it can be played without any latency. A demonstration video is available on the companion website.
Impact on creativity and music production
We think that this system is a first approach towards a new way to design and compose drums. Indeed, it is a straightforward and efficient tool for anyone to organize and browse their sample library and design their drum sounds. Despite the parameters being learnt autonomously by the neural network, navigating the latent space is quite intuitive.
Such a tool can also be used to humanize programmed drums. It is often claimed that programmed electronic drums lack a human feel: when a real drummer plays, subtle variations give the rhythm a natural groove, whereas programmed MIDI drum sequences can sound robotic and repetitive, leaving listeners bored. There are common techniques to humanize MIDI drums, such as varying velocities. By allowing the synthesis parameters to vary within a small given range, our system can be used to slightly modify the sound of a drum element throughout a loop. This could, for example, mimic a drummer who hits a snare at slightly different positions.
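This humanization strategy amounts to adding a small perturbation to a fixed latent code at every hit; the following sketch uses an illustrative perturbation scale:

```python
import numpy as np

def humanize(z, rng, amount=0.05):
    # Perturb a latent code slightly so each hit of the same drum varies subtly
    return z + amount * rng.standard_normal(z.shape)

rng = np.random.default_rng(0)
base = np.zeros(64)  # stands in for a chosen snare's latent code (illustrative)
hits = [humanize(base, rng) for _ in range(8)]  # eight subtly different hits
```

Each perturbed code stays close to the chosen sound while never repeating exactly, which is precisely the "same drum, different hit" effect described above.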
Conclusion and Future Work
We proposed a first end-to-end system that allows intuitive drum sound synthesis. The latent space learnt on the data provides intuitive controls over the sound. Our system is capable of real-time sound generation on CPU while ensuring satisfying audio quality. Moreover, the interface we developed is studio-ready and integrates into one of the most used DAWs for electronic music. We identify two axes for improvement. The first concerns the conditioning mechanism, which should be more precise and powerful so that each category can clearly be distinguished from the others. The second concerns developing novel ways to interact with a large latent space in order to explore its full diversity. Similarly to what has been achieved on symbolic music [Engel, Hoffman, and Roberts2017, Hadjeres2019], we will also investigate approaches that let users specify the controls they want to shape the sounds. This would be an effortless way for novice sound designers to tune their drum sounds and create custom drum kits, rather than relying on existing ones. Finally, merging the computation server into the plugin is required to make the model even more accessible.
References
 [Aitken et al.2017] Aitken, A.; Ledig, C.; Theis, L.; Caballero, J.; Wang, Z.; and Shi, W. 2017. Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize. arXiv preprint arXiv:1707.02937.
 [Arık, Jun, and Diamos2019] Arık, S. Ö.; Jun, H.; and Diamos, G. 2019. Fast spectrogram inversion using multi-head convolutional neural networks. IEEE Signal Processing Letters 26(1):94–98.
 [Berlinet and ThomasAgnan2011] Berlinet, A., and Thomas-Agnan, C. 2011. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media.
 [Bishop and Mitchell2014] Bishop, C. M., and Mitchell, T. M. 2014. Pattern recognition and machine learning.
 [Chen et al.2016] Chen, X.; Kingma, D. P.; Salimans, T.; Duan, Y.; Dhariwal, P.; Schulman, J.; Sutskever, I.; and Abbeel, P. 2016. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731.
 [Cover and Thomas2012] Cover, T. M., and Thomas, J. A. 2012. Elements of information theory. John Wiley & Sons.
 [Donahue, McAuley, and Puckette2018] Donahue, C.; McAuley, J.; and Puckette, M. 2018. Adversarial audio synthesis. arXiv preprint arXiv:1802.04208.
 [Engel et al.2017] Engel, J.; Resnick, C.; Roberts, A.; Dieleman, S.; Eck, D.; Simonyan, K.; and Norouzi, M. 2017. Neural audio synthesis of musical notes with wavenet autoencoders. arXiv preprint arXiv:1704.01279.
 [Engel, Hoffman, and Roberts2017] Engel, J.; Hoffman, M.; and Roberts, A. 2017. Latent constraints: Learning to generate conditionally from unconditional generative models. CoRR abs/1711.05772.
 [Esling, Bitton, and others2018] Esling, P.; Bitton, A.; et al. 2018. Generative timbre spaces with variational audio synthesis. arXiv preprint arXiv:1805.08501.
 [Gretton et al.2012] Gretton, A.; Borgwardt, K. M.; Rasch, M. J.; Schölkopf, B.; and Smola, A. 2012. A kernel two-sample test. Journal of Machine Learning Research 13(Mar):723–773.
 [Griffin and Lim1984] Griffin, D., and Lim, J. 1984. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing 32(2):236–243.
 [Hadjeres2019] Hadjeres, G. 2019. Variation network: Learning high-level attributes for controlled input manipulation. arXiv preprint arXiv:1901.03634.
 [Huang et al.2018] Huang, S.; Li, Q.; Anil, C.; Bao, X.; Oore, S.; and Grosse, R. B. 2018. Timbretron: A wavenet (cyclegan (cqt (audio))) pipeline for musical timbre transfer. arXiv preprint arXiv:1811.09620.
 [Huzaifah2017] Huzaifah, M. 2017. Comparison of time-frequency representations for environmental sound classification using convolutional neural networks. arXiv preprint arXiv:1706.07156.
 [Kingma and Ba2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
 [Kingma and Welling2013] Kingma, D. P., and Welling, M. 2013. AutoEncoding Variational Bayes.
 [Odena, Dumoulin, and Olah2016] Odena, A.; Dumoulin, V.; and Olah, C. 2016. Deconvolution and checkerboard artifacts. Distill 1(10):e3.
 [Oord et al.2016] Oord, A. v. d.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; and Kavukcuoglu, K. 2016. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.
 [Peeters2004] Peeters, G. 2004. A large set of audio features for sound description (similarity and classification) in the cuidado project.
 [Perez et al.2017] Perez, E.; De Vries, H.; Strub, F.; Dumoulin, V.; and Courville, A. 2017. Learning visual reasoning without strong priors. arXiv preprint arXiv:1707.03017.
 [Perez et al.2018] Perez, E.; Strub, F.; De Vries, H.; Dumoulin, V.; and Courville, A. 2018. FiLM: Visual reasoning with a general conditioning layer. In Thirty-Second AAAI Conference on Artificial Intelligence.
 [Prenger, Valle, and Catanzaro2018] Prenger, R.; Valle, R.; and Catanzaro, B. 2018. Waveglow: A flow-based generative network for speech synthesis. arXiv preprint arXiv:1811.00002.
 [Reddi et al.2015] Reddi, S.; Ramdas, A.; Póczos, B.; Singh, A.; and Wasserman, L. 2015. On the high dimensional power of a linear-time two-sample test under mean-shift alternatives. In Artificial Intelligence and Statistics, 772–780.
 [Sohn, Lee, and Yan2015] Sohn, K.; Lee, H.; and Yan, X. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, 3483–3491.
 [Tolstikhin et al.2017] Tolstikhin, I.; Bousquet, O.; Gelly, S.; and Schoelkopf, B. 2017. Wasserstein autoencoders. arXiv preprint arXiv:1711.01558.