Seminars

Principled Training of Generative Adversarial Networks with the Wasserstein Metric and Gradient Penalty

Generative Adversarial Networks have, since their inception, revolutionised the field of generative modelling. Learning probability distributions is a problem of frequent interest, and generating synthetic data has many uses; media, for example, is currently undergoing a quiet paradigm shift driven by generative modelling. However, in their original form, adversarial networks suffer from several issues, such as severe training instability, non-convergence, and mode collapse. Several empirical heuristics have been found to improve training, but none of them had theoretical backing until the Wasserstein GAN appeared. In this work, we present mathematical principles for improved training of generative adversarial networks, ranging from replacing the original value function with a more appropriate (weaker) metric, the Wasserstein distance, to gradient penalization, which enforces the necessary Lipschitz condition on the critic. We illustrate these methods on toy data and finally present some applications to real-world data.
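To make the two ingredients of the abstract concrete, here is a minimal sketch of a WGAN-GP critic loss in the spirit of Gulrajani et al. (2017): the critic is trained to maximise E[D(real)] - E[D(fake)] (an estimate of the Wasserstein distance), while a penalty term lambda * E[(||grad D(x_hat)||_2 - 1)^2], evaluated at points interpolated between real and fake samples, softly enforces the 1-Lipschitz condition. This is not code from the talk; `critic`, `real`, `fake`, and `gp_weight` are illustrative names, and the sketch assumes a PyTorch critic acting on flat feature vectors of shape (batch, features).

```python
# A hedged, minimal sketch of the WGAN-GP critic loss; assumes `critic`
# is a torch.nn.Module mapping (batch, features) tensors to scalar scores.
import torch

def critic_loss(critic, real, fake, gp_weight=10.0):
    # Wasserstein critic objective: maximise E[D(real)] - E[D(fake)],
    # i.e. minimise the negated difference by gradient descent.
    wasserstein = critic(fake).mean() - critic(real).mean()

    # Gradient penalty: sample points on straight lines between real and
    # fake samples and push the critic's gradient norm towards 1,
    # a soft version of the 1-Lipschitz constraint.
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(
        outputs=critic(interp).sum(),
        inputs=interp,
        create_graph=True,  # keep the graph so the penalty is trainable
    )[0]
    penalty = ((grads.norm(2, dim=1) - 1) ** 2).mean()

    return wasserstein + gp_weight * penalty
```

The generator step is unchanged in spirit: it minimises -E[D(fake)], so no penalty term is needed there. The weight 10.0 is the value commonly reported for WGAN-GP, not a claim about the experiments in this seminar.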

Details:

Speaker:

Gabriel Alfonso Patron Herrera

Date:

July 09, 2020

Video: Principled Training of Generative Adversarial Networks (YouTube: Quantil Matemáticas Aplicadas)
