Project Description
Generative adversarial networks (GANs) have become the state of the art in unsupervised learning, bringing significant improvements to artificial intelligence applications. However, the GAN algorithm often fails to converge, and there is no well-developed theoretical framework for studying its convergence.
In this project, we will develop such a theoretical framework. GANs will be viewed as a gradient flow, i.e., a flow of probability measures, governed by a McKean-Vlasov stochastic differential equation (MV-SDE). Recent work by my group shows that the MV-SDE converges to an invariant distribution under mild conditions, which guarantees the convergence of GANs. Moreover, this suggests that unsupervised learning can alternatively be carried out by simulating the MV-SDE. The DLA student will be in charge of two main tasks: (i) use Python to simulate the MV-SDE (a minimal sketch is given below), compare its performance with the GAN algorithm, and fine-tune the simulation to achieve better performance than the GAN algorithm; (ii) extend the gradient-flow framework from GANs to other well-known variants (e.g., f-GAN, Wasserstein GAN), and design algorithms that achieve better performance.
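For orientation, a generic MV-SDE has the form dX_t = b(X_t, mu_t) dt + sigma(X_t, mu_t) dW_t, where mu_t is the law of X_t, and it is typically simulated by a system of interacting particles whose empirical measure stands in for mu_t. The sketch below is only an illustration of such a particle scheme (Euler-Maruyama time stepping); the placeholder drift and diffusion coefficients, and the function and parameter names, are assumptions for illustration and are not the coefficients arising from the GAN gradient flow studied in this project.

```python
# Minimal sketch of a particle (Euler-Maruyama) scheme for a generic MV-SDE
#     dX_t = b(X_t, mu_t) dt + sigma(X_t, mu_t) dW_t,  mu_t = Law(X_t).
# The law mu_t is approximated by the empirical measure of N particles.
# The coefficients below are illustrative placeholders, NOT the project's
# actual gradient-flow coefficients.
import numpy as np

def simulate_mv_sde(n_particles=1000, n_steps=500, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)       # initial particles ~ N(0, 1)
    for _ in range(n_steps):
        mean = x.mean()                        # empirical stand-in for mu_t (via its mean)
        drift = -(x - mean)                    # placeholder mean-field drift b(x, mu)
        diffusion = 0.5                        # placeholder constant diffusion sigma
        dw = rng.standard_normal(n_particles) * np.sqrt(dt)
        x = x + drift * dt + diffusion * dw    # Euler-Maruyama update
    return x                                   # samples approximating the long-run law

if __name__ == "__main__":
    samples = simulate_mv_sde()
    print("empirical mean/std:", samples.mean(), samples.std())
```

In the project, the student would replace the placeholder coefficients with those derived from the GAN gradient flow and compare the resulting samples against those produced by the GAN algorithm.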
Special Requirement
It is desirable, although not required, that the student has some background/experience in the Python programming language.
Contact
Yu-Jui Huang