SOME NOTES FROM THE LECTURE

Deep Generative Modeling

Generative Modeling Goal: take training samples drawn from some data distribution as input and learn a model that represents that distribution, so that new samples resembling the training data can be generated.

Latent variables: variables that are not directly observable but are the true underlying features or explanatory factors that generate the observed data.

Autoencoder: an autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). It consists of an encoder that compresses the input into a lower-dimensional latent code and a decoder that reconstructs the input from that code.
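
A minimal sketch of this encoder/decoder structure in PyTorch (the layer sizes, 784 → 32, are illustrative assumptions, not taken from the lecture):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Encoder compresses x to a latent code z; decoder reconstructs x from z."""

    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent code (the learned "coding")
        return self.decoder(z)   # reconstruction of the input

# Unsupervised training signal: reconstruction error, no labels needed.
model = Autoencoder()
x = torch.rand(16, 784)          # dummy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
```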


Variational autoencoders (VAEs): probabilistic autoencoders in which the encoder outputs the parameters of a distribution over the latent variables (a mean and a variance) instead of a single deterministic code; sampling from this latent distribution lets the decoder generate new data.

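A minimal VAE sketch, assuming PyTorch; the reparameterization trick $z = \mu + \sigma \cdot \epsilon$ keeps the sampling step differentiable, and the loss is the negative ELBO (reconstruction error plus a KL term). Dimensions are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(128, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps   # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_hat, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

vae = VAE()
x = torch.rand(16, 784)                      # dummy batch in [0, 1]
x_hat, mu, logvar = vae(x)
negative_elbo(x, x_hat, mu, logvar).backward()
```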

Generative Adversarial Networks (GANs): GANs are a way to build a generative model by having two neural networks compete with each other: a generator that maps random noise to fake samples, and a discriminator that tries to tell real samples from fake ones.
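
A minimal single-training-step sketch in PyTorch (the network sizes and the dummy data are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Generator maps noise to fake samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, 784)    # dummy "real" batch
noise = torch.randn(32, 16)

# Discriminator step: push real scores toward 1, fake scores toward 0.
fake = G(noise).detach()      # detach so this step doesn't update G
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make D score fakes as real.
g_loss = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```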

Dropout layer: a regularization layer that randomly zeroes a fraction p of activations during training, preventing units from co-adapting; at evaluation time dropout is disabled. In the usual inverted-dropout convention, surviving activations are scaled by 1/(1-p) during training so that expected activations match at test time.
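
A short demonstration with PyTorch's nn.Dropout:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))   # random zeros; survivors scaled by 1/(1-p) = 2.0
drop.eval()
print(drop(x))   # identity at eval time: all ones
```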

Double descent: the phenomenon where test error first decreases, then increases as model capacity approaches the interpolation threshold (where the model can fit the training data exactly), and then decreases again as capacity grows further, in contrast to the classical bias-variance trade-off curve.

Lipschitz continuity: a function $f: X \to Y$ is Lipschitz continuous if there exists a constant $K \ge 0$ such that, for all $x_1, x_2 \in X$,

$$d_Y(f(x_1), f(x_2)) \leq K \, d_X(x_1, x_2)$$ [3]

The smallest such $K$ is the Lipschitz constant of $f$; it bounds how fast the output can change relative to the input, which is one handle on the stability and robustness of neural networks.
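
For a concrete case: a linear map $f(x) = Wx$ is Lipschitz under the Euclidean norm with $K$ equal to the spectral norm (largest singular value) of $W$. A small numerical check, assuming PyTorch:

```python
import torch

torch.manual_seed(0)
W = torch.randn(20, 10)
K = torch.linalg.svdvals(W).max()   # spectral norm = Lipschitz constant of x -> Wx

x1, x2 = torch.randn(10), torch.randn(10)
lhs = torch.norm(W @ x1 - W @ x2)   # d_Y(f(x1), f(x2))
rhs = K * torch.norm(x1 - x2)       # K * d_X(x1, x2)
assert lhs <= rhs + 1e-5
print(f"{lhs.item():.4f} <= {rhs.item():.4f}")
```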

Neural Tangent Kernel (NTK): a kernel (Jacot et al., 2018) that describes the evolution of neural networks during training by gradient descent; in the infinite-width limit the NTK stays constant throughout training, so the network behaves like a kernel method.
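
The empirical (finite-width) NTK between two inputs is the inner product of the parameter gradients of the network output, $k(x, x') = \langle \nabla_\theta f(x), \nabla_\theta f(x') \rangle$. A sketch for a scalar-output network (the architecture is an illustrative assumption):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(5, 64), nn.Tanh(), nn.Linear(64, 1))

def param_grad(x):
    """Gradient of the scalar network output w.r.t. all parameters, flattened."""
    net.zero_grad()
    net(x.unsqueeze(0)).sum().backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

x1, x2 = torch.randn(5), torch.randn(5)
g1 = param_grad(x1)
g2 = param_grad(x2)
k12 = g1 @ g2                 # empirical NTK entry k(x1, x2)
print(f"k(x1, x2) = {k12.item():.4f}")
```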

Effective dimension: a measure of how many directions in parameter or feature space a model actually uses, as opposed to its raw parameter count. One common form, for a kernel or covariance matrix with eigenvalues $\lambda_i$ and regularization $\lambda$, is $d_{\mathrm{eff}}(\lambda) = \sum_i \frac{\lambda_i}{\lambda_i + \lambda}$.
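
A small NumPy sketch of that formula (the matrix is synthetic, and the choice of this particular definition is an assumption, since the note does not pin one down):

```python
import numpy as np

def effective_dimension(K, lam):
    """d_eff(lam) = trace(K (K + lam I)^-1) = sum_i eig_i / (eig_i + lam)."""
    eigs = np.linalg.eigvalsh(K)
    return float(np.sum(eigs / (eigs + lam)))

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
K = A @ A.T                                 # rank-10 PSD matrix, size 50 x 50
print(effective_dimension(K, lam=1e-6))     # ~10: counts the active directions
print(effective_dimension(K, lam=1e3))      # much smaller under heavy regularization
```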

Liquid Neural Network (LNN): a time-continuous recurrent neural network (RNN) that processes data sequentially, retains memory of past inputs, adjusts its behavior in response to new inputs, and can handle variable-length inputs, which enhances the task-understanding capabilities of NNs.
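
A minimal liquid-time-constant-style cell, as a sketch only (not the full LTC model of Hasani et al.): the hidden state follows an ODE $\dot{h} = -h/\tau + f(x, h)$ with learnable time constants $\tau$, integrated here with explicit Euler steps:

```python
import torch
import torch.nn as nn

class LiquidCell(nn.Module):
    """Continuous-time RNN cell: dh/dt = -h / tau + tanh(W [x; h]), Euler-integrated."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.f = nn.Linear(in_dim + hidden_dim, hidden_dim)
        self.log_tau = nn.Parameter(torch.zeros(hidden_dim))  # learnable time constants

    def forward(self, x, h, dt=0.1):
        tau = torch.exp(self.log_tau)                          # keeps tau > 0
        dh = -h / tau + torch.tanh(self.f(torch.cat([x, h], dim=-1)))
        return h + dt * dh                                     # one Euler step

cell = LiquidCell(in_dim=3, hidden_dim=8)
h = torch.zeros(1, 8)
for _ in range(5):               # a variable-length input sequence
    h = cell(torch.randn(1, 3), h)
print(h.shape)                   # torch.Size([1, 8])
```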

References

  1. What does it mean “having Lipschitz continuous derivatives”?
  2. Understanding Variational Autoencoders (VAEs)
  3. A Detailed Analysis of the Lipschitz Condition in Deep Learning (详细解析深度学习中的 Lipschitz 条件), Zhihu (zhihu.com)
  4. Liquid Neural Networks: Definition, Applications, & Challenges