MLSS-Indo-GNN-Zoom

From Max Welling to Everyone: (5:57 PM)

  • 
Where shall I start today? Maybe finish my lecture from yesterday first?


From SAID AL FARABY to Everyone: (5:57 PM)

  • 
I think so


From Max Welling to Everyone: (5:57 PM)

  • 
Where did we end up yesterday?


From SAID AL FARABY to Everyone: (5:58 PM)

  • 
wait, I'll check the slides
Slide 30 Reparameterization trick
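
A minimal sketch of the reparameterization trick referenced on slide 30 (the Gaussian setup and function name here are illustrative, not taken from the lecture): instead of sampling z ~ N(mu, sigma^2) directly, one samples eps ~ N(0, I) and computes z = mu + sigma * eps, so gradients can flow through mu and sigma.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I).
    # The randomness lives in eps, so mu and log_var stay
    # differentiable parameters of the sampling path.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(3)
log_var = np.zeros(3)  # sigma = 1
z = reparameterize(mu, log_var, rng)  # one sample from N(0, I)
```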


From Max Welling to Everyone: (6:00 PM)

  • 
ok!


From Me to Everyone: (6:00 PM)

  • 
where can we get the slide?


From Max Welling to Everyone: (6:00 PM)

  • 
I will share afterwards with Said and he can put it on the website


From Me to Everyone: (6:01 PM)

  • 
ok! thank you very much :)


From chris simon to Everyone: (6:10 PM)

  • 
Can we ask questions here?
on Zoom, I mean


From SAID AL FARABY to Everyone: (6:10 PM)

  • 
yes


From chris simon to Everyone: (6:10 PM)

  • 
cool


From Me to Everyone: (6:12 PM)

  • 
is the | | an absolute value?


From chris simon to Everyone: (6:20 PM)

  • 
From your surjective diagram, z is not unique. Does it yield a bigger chance for posterior collapse?


From Renny P. Kusumawardani to Everyone: (6:24 PM)

  • 
For the stochastic process, it is necessary to take samples. I’ve heard the term ‘sample-efficiency’. What does it mean and why does it matter?


From Me to Everyone: (6:26 PM)

  • 
At log | det gradient_x(z) |, the notation | | means an absolute value, right?
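
For context on the question above: in the change-of-variables formula used by normalizing flows, log p_x(x) = log p_z(f(x)) + log |det J_f(x)|, and the | | is indeed an absolute value, since the density must stay non-negative even when the Jacobian flips orientation. A tiny illustrative example with a linear flow (the matrix here is arbitrary, chosen only to make the determinant negative):

```python
import numpy as np

# Linear flow z = A x; its Jacobian is just A.
A = np.array([[2.0, 0.0],
              [0.0, -3.0]])  # det A = -6, so the sign matters

# log |det A| = log 6; without the absolute value the log
# of a negative determinant would be undefined.
log_abs_det = np.log(np.abs(np.linalg.det(A)))
```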


From SAID AL FARABY to Everyone: (6:31 PM)

  • 
Sorry, I missed it. We think so: https://deepgenerativemodels.github.io/notes/flow/
you can raise your hand if you want to talk directly


From Me to Everyone: (6:40 PM)

  • 
thank you very much for your confirmation :)


From Yopi to Everyone: (6:47 PM)

  • 
I might have missed it: what is the role of the state memory in the hybrid method?


From chris simon to Everyone: (6:47 PM)

  • 
Most of your recent works focus on explicit functions for sample generation, such as VAEs. Is it because sample generation with implicit functions (e.g. Langevin dynamics + energy-based models) is not working really well in practice? What is your take?
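
For readers unfamiliar with the implicit approach mentioned in the question: a sketch of unadjusted Langevin dynamics, which samples from a density using only its score (gradient of the log-density). The Gaussian target and step size here are illustrative assumptions, not from the talk.

```python
import numpy as np

def langevin_step(x, grad_log_p, step, rng):
    # x_{t+1} = x_t + (step / 2) * grad log p(x_t) + sqrt(step) * noise
    # Only the score grad log p is needed, never p itself, which is
    # why this pairs naturally with energy-based models.
    noise = rng.standard_normal(x.shape)
    return x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * noise

# Toy energy-based target: standard Gaussian, grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.standard_normal(2) * 5.0  # start far from the mode
for _ in range(2000):
    x = langevin_step(x, lambda v: -v, 0.01, rng)
```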


From Arian Prabowo to Everyone: (6:51 PM)

  • 
I see thank you very much!


From Wawan Cenggoro to Everyone: (6:56 PM)

  • 
The neural augmentation reminds me of your work titled "Semi-Supervised Learning with Deep Generative Models", which is a two-layer VAE with a distribution over y at the last VAE layer. Do you think combining them could be beneficial?


From chris simon to Everyone: (6:57 PM)

  • 
I see thanks!


From Wawan Cenggoro to Everyone: (6:57 PM)

  • 
yes


From Ebuka Oguchi to Everyone: (6:58 PM)

  • 
Are there any measures to ensure these deepfake images are not used for the wrong purposes?


From Wahyu Hidayat to Everyone: (7:23 PM)

  • 
wonderful


From chris simon to Everyone: (7:29 PM)

  • 
Wonderful!


From Renny P. Kusumawardani to Everyone: (7:39 PM)

  • 
Great advice!


From Arian Prabowo to Everyone: (7:47 PM)

  • 
That's very good advice! thank you!


From Me to Everyone: (7:50 PM)

  • 
thank you so much!


From Robby Hardi to Everyone: (7:50 PM)

  • 
Great talk!