Visualizing the Training of a Generative Adversarial Network for Time Series

Generative adversarial networks (GANs) are a framework for data generation introduced by Goodfellow et al. (2014) [1]. For a given data set (e.g. cat images, designs, face images or numerical data), a GAN can learn to estimate the underlying distribution of the data and generate new samples from this distribution, samples that need not be present in the given data set. Today, we will have a closer look at the training process and how it can be visualized.

What is a GAN?
A GAN consists of two models which are trained simultaneously: a generative model that captures the data distribution, called the generator, and a discriminative model that estimates the probability that a given sample comes from the original data rather than from the generator, called the discriminator. Typically, both the generator and the discriminator are multi-layer neural networks.

The training process of a GAN can be seen as a minimax two-player game, where the generator tries to maximize the probability of the discriminator making a mistake, i.e. the generator is trained to fool the discriminator by producing novel candidates that the discriminator cannot distinguish from the real data. Simultaneously, the discriminator has to maximize the probability of assigning the correct class (real or generated) to training examples and to synthetic records from the generator.
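In the notation of Goodfellow et al. [1], this game corresponds to the minimax objective

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

where the generator G maps noise z to the data space and D(x) is the discriminator's estimate of the probability that x stems from the real data.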

The following diagram shows the basic training process of a GAN:

The generator takes random noise as input, e.g. drawn from a multivariate Gaussian, and maps this input to the data space. The generated samples and original samples from the data set are then passed as inputs to the discriminator. The discriminator classifies the samples as real or fake, and the resulting classification loss is used to update the networks via backpropagation. It is important to mention that we alternate between steps of training the discriminator and steps of training the generator. The discriminator weights are frozen during the generator training step, in order to prevent the generator from simply pushing the discriminator towards always predicting "real".
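As a minimal sketch of this setup in Keras (the layer sizes, the latent dimension and the optimizer are illustrative assumptions, not a fixed recipe), the two models and the frozen-discriminator trick can be wired up like this:

```python
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 32   # dimension of the Gaussian noise input (assumption)
DATA_DIM = 24     # one price per hour of the day

# Discriminator: maps a sample from the data space to the probability
# that it comes from the real data set.
discriminator = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(DATA_DIM,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Generator: maps latent noise to the data space.
generator = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(LATENT_DIM,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(DATA_DIM),
])

# Combined model, used only for the generator training step. Freezing the
# discriminator here ensures that generator updates cannot push the
# discriminator towards always predicting "real".
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")
```

Because the discriminator was compiled before its trainable flag was switched off, it still learns in its own training step; it is frozen only inside the combined model.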

Visual Training Process

At the beginning of the training process, the generator network is initialized with random weights and is not yet able to produce realistic samples. In every training step, the generator receives feedback from the discriminator and gradually generates more and more realistic samples, until the discriminator can no longer distinguish between a sample from the generator and a sample from the original data set. This generation process can be visualized. To illustrate it, we take a closer look at a GAN for time series. Let's consider a data set of hourly day-ahead electricity prices of the German power market from SMARD as training data. All 24 hourly prices of a day are determined simultaneously in a daily auction at the European Power Exchange (EPEX SPOT).
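For the training sketches in this post, we assume the SMARD export is a CSV file with one day-ahead price per hour in chronological order; the file and column names below are made up for illustration:

```python
import pandas as pd

# Hypothetical SMARD export: one row per hour, chronologically ordered.
prices = pd.read_csv("day_ahead_prices.csv")["price_eur_mwh"].to_numpy()

# Group the hourly series into one 24-dimensional vector per day,
# so each training sample is a full daily price shape.
n_days = len(prices) // 24
real_days = prices[: n_days * 24].reshape(n_days, 24).astype("float32")

# Min-max scale so the profiles live in a range that is easy to generate.
lo, hi = real_days.min(), real_days.max()
real_days = (real_days - lo) / (hi - lo)
```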

The distribution of these prices over a day exhibits typical stylized facts, such as a morning ramp-up when production starts, an evening ramp as people come home, and different levels of autocorrelation.

We create a basic GAN in Keras and visualize its training by plotting samples from the generator (blue) in the same figure as the original samples (grey), then run the training. During training, we can watch the generator adjust its mapping of the noise input from the latent space to the data space in such a way that it picks up the stylized facts of the day-ahead price shapes, like the low price levels at night or the morning ramp-up. The following image shows samples from the trained GAN.
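Continuing the sketch from above, a plain training loop that alternates the two steps and renders this figure every few hundred iterations could look like this (batch size and step count are again arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

BATCH_SIZE = 64
d_losses, g_losses = [], []

for step in range(5000):
    # Discriminator step: real profiles are labelled 1, generated ones 0.
    noise = np.random.normal(size=(BATCH_SIZE, LATENT_DIM))
    fake = generator.predict(noise, verbose=0)
    real = real_days[np.random.randint(0, len(real_days), BATCH_SIZE)]
    d_loss_real = discriminator.train_on_batch(real, np.ones((BATCH_SIZE, 1)))
    d_loss_fake = discriminator.train_on_batch(fake, np.zeros((BATCH_SIZE, 1)))
    d_losses.append(0.5 * (d_loss_real + d_loss_fake))

    # Generator step: through the combined model (discriminator frozen),
    # the generator is trained to make its output be classified as real.
    noise = np.random.normal(size=(BATCH_SIZE, LATENT_DIM))
    g_losses.append(gan.train_on_batch(noise, np.ones((BATCH_SIZE, 1))))

    # Visualization: generated day shapes (blue) over real ones (grey).
    if step % 500 == 0:
        plt.plot(real[:20].T, color="grey", alpha=0.3)
        plt.plot(fake[:20].T, color="blue", alpha=0.6)
        plt.xlabel("hour of day")
        plt.ylabel("scaled price")
        plt.title(f"step {step}")
        plt.show()
```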


At the same time, we see that the loss of the generator (blue) and the loss of the discriminator (yellow) decline and move closer to each other, as we can see in the following figure:

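With the loss histories collected in the training loop above, such a figure is a few lines of matplotlib:

```python
import matplotlib.pyplot as plt

# Loss curves recorded during training; they approach each other as the
# generator catches up with the discriminator.
plt.plot(g_losses, color="blue", label="generator loss")
plt.plot(d_losses, color="gold", label="discriminator loss")
plt.xlabel("training step")
plt.ylabel("binary cross-entropy")
plt.legend()
plt.show()
```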

To wrap up, in this blog post we introduced generative adversarial networks and presented a visualization of their training process.

References:

[1] Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.C., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems 27, pp. 2672–2680 (2014), http://papers.nips.cc/paper/5423-generative-adversarial-nets

[2] Goodfellow, I.: NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160 (2016)

[3] Google Developers: Generative adversarial networks, https://developers.google.com/machine-learning/gan

[4] Chollet, F.: Deep Learning with Python. Manning Publications (2017)
