Research and Development · Seminars
Generative Adversarial Networks (GANs) have, since their inception, revolutionized the field of generative modeling. Learning probability distributions is a problem of broad interest, and generating synthetic data has many uses; media, for example, is currently undergoing a quiet paradigm shift driven by generative models. In their original form, however, adversarial networks suffer from several issues, including severe training instability, non-convergence, and mode collapse. Several empirical heuristics have been found to improve training, but none of them had theoretical backing until the Wasserstein GAN appeared. In the present work, we show mathematical principles for improved training of generative adversarial networks. These methods range from replacing the original value function with one based on a more appropriate (weaker) metric, the Wasserstein distance, to gradient penalization, which enforces the Lipschitz condition that the Wasserstein formulation requires. We illustrate these ideas on toy data and finally present some applications to real-world data.
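As a rough illustration of the two ideas named above, the sketch below writes the WGAN critic objective with a gradient penalty for a toy *linear* critic f(x) = w·x. The linear critic is an assumption made here purely so that the input gradient is available in closed form (it is just w); in a real implementation the critic is a neural network and the penalty gradient is obtained with automatic differentiation, with the penalty evaluated at random interpolates between real and generated samples.

```python
import numpy as np

def wgan_gp_critic_loss(w, real, fake, lam=10.0):
    """WGAN-GP critic objective for a toy linear critic f(x) = w @ x.

    The critic maximizes E[f(real)] - E[f(fake)]; written as a loss to
    minimize, the sign flips. For a linear critic the gradient of f with
    respect to its input is w itself, so the gradient penalty
    (||grad f|| - 1)^2 can be computed analytically.
    """
    # Wasserstein term (as a loss): fake scores minus real scores.
    wasserstein = np.mean(fake @ w) - np.mean(real @ w)
    # Gradient penalty: push the critic's input-gradient norm toward 1,
    # a soft way of enforcing the 1-Lipschitz constraint.
    gp = (np.linalg.norm(w) - 1.0) ** 2
    return wasserstein + lam * gp
```

Note that when ||w|| = 1 the penalty term vanishes and only the Wasserstein term remains, which is exactly the constrained (1-Lipschitz) critic objective.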