MLSS-Indo-Lab-DL-NLP

From ade romadhony to Everyone: (9:59 AM)

  • 
Good morning all!


From Operator CLOVE 10 to Everyone: (10:00 AM)

  • 
yes


From Haruna Abdu to Everyone: (10:00 AM)

  • 
yes


From Paola to Everyone: (10:00 AM)

  • 
Yes


From Yusuf Brima to Everyone: (10:00 AM)

  • 
Hello Ade, we can hear you


From Wawan Cenggoro to Everyone: (10:02 AM)

  • 
wrong screen


From ade romadhony to Everyone: (10:03 AM)

  • 
I am sorry, it seems the video needs some time to load.
Apologies for the technical problem; kindly wait a few minutes.


From Georgios to Everyone: (10:06 AM)

  • 
Good morning everyone from UK


From Katharina Kann to Everyone: (10:11 AM)

  • 
Sorry, everyone!


From ade romadhony to Everyone: (10:12 AM)

  • 
sorry everyone for technical glitches!


From ade romadhony to Everyone: (10:35 AM)

  • 
please post questions on Rocket.Chat, lecture14


From Alfi Yusrotis to Everyone: (10:37 AM)

  • 
May I ask about page 30: why, in the first row, is "betty" related only to "betty" and "has"…, while for "john" it is "has", "and", and "john"?


From Wawan Cenggoro to Everyone: (11:38 AM)

  • 
I see, no wonder I can't find any work about that


From Wawan Cenggoro to Me: (Privately) (11:40 AM)

  • 
Hi, my email is wcenggoro@binus.edu. You can share the paper through my email. Thanks.


From Me to Wawan Cenggoro: (Privately) (11:43 AM)

  • 
I have sent my email to you. It is almost noon in my timezone, so I will send you the papers later.


From Wawan Cenggoro to Me: (Privately) (11:44 AM)

  • 
Thanks, I appreciate your kindness


From Lya Hulliyyatus Suadaa to Everyone: (11:45 AM)

  • 
Thank you for the answer


From Me to Wawan Cenggoro: (Privately) (11:45 AM)

  • 
thanks for your friendliness, too :)


From Tisa Siti Saadah to Everyone: (11:46 AM)

  • 
thank you Kath


From Georgios to Everyone: (11:46 AM)

  • 
Thank you


From Robby Hardi to Everyone: (11:46 AM)

  • 
Thank you Kath!


From LAKSMI ANINDYATI to Everyone: (11:46 AM)

  • 
Thank you Kath!


From Laras Gupitasari to Everyone: (11:46 AM)

  • 
Thank you


From Me to Everyone: (11:46 AM)

  • 
thank you Kath!


From ade romadhony to Everyone: (11:46 AM)

  • 
we will have a 15-minute break, and then continue with the practical session :)


From Renny P. Kusumawardani to Everyone: (11:46 AM)

  • 
Thank you, Katharina and Bu Ade! :)


From Hendar to Everyone: (11:50 AM)

  • 
good job Katharina, thanks


From Wawan Cenggoro to Everyone: (12:05 PM)

  • 
Glad to finally see PyTorch :)


From ade romadhony to Everyone: (12:05 PM)

  • 
:D


From Georgios to Everyone: (12:06 PM)

  • 
:-)


From ade romadhony to Everyone: (12:07 PM)

  • 
the materials are already uploaded on the website


From Wawan Cenggoro to Everyone: (12:16 PM)

  • 
Agree


From UNTARI NOVIA WISESTY to Everyone: (12:21 PM)

  • 
I'm sorry, I'm new to PyTorch, so I will ask a very simple question: what is the difference between a tensor and an array? Thank you.
Thank you, Genta
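The answer was given verbally; for readers following along, a minimal sketch of the difference, assuming NumPy for the "array" side: a torch.Tensor stores data much like a NumPy array, but it can additionally live on a GPU and record operations for automatic differentiation.

```python
import numpy as np
import torch

# A NumPy array: CPU-only, with no gradient tracking.
a = np.array([1.0, 2.0, 3.0])

# A torch.Tensor holds the same kind of data, but it can move to a GPU
# and record operations for automatic differentiation.
t = torch.from_numpy(a)                  # shares memory with `a`
w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

loss = (w ** 2).sum()
loss.backward()                          # autograd: d(loss)/dw = 2w
print(w.grad)                            # tensor([2., 4., 6.])

if torch.cuda.is_available():            # optional: tensors can live on a GPU
    w_gpu = w.detach().to("cuda")

back = w.detach().numpy()                # convert back to a NumPy array
```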


From Renny P. Kusumawardani to Everyone: (12:26 PM)

  • 
Could you please paste the link to the Colab notebook here?


From ade romadhony to Everyone: (12:26 PM)

  • 
https://colab.research.google.com/drive/1tMQ4b_hf7YJ5qVumDoK_0T3BqKW7SzXE?usp=sharing


From Renny P. Kusumawardani to Everyone: (12:27 PM)

  • 
Thank you Bu Ade!


From ade romadhony to Everyone: (12:27 PM)

  • 
you're welcome :)


From Sari Dewi to Everyone: (12:28 PM)

  • 
About the word representations in the early slides, which one is better? Or is there any guideline on which word representation is suitable for a certain model/task?


From Mawanda A to Everyone: (12:30 PM)

  • 
Should we always use object-oriented programming with PyTorch?


From Rian Adam Rajagede to Everyone: (12:30 PM)

  • 
in PyTorch, what's the difference between nn.Embedding and nn.Linear?
they look similar to me :/


From Renny P. Kusumawardani to Everyone: (12:31 PM)

  • 
nn.Embedding is basically a look-up table, which maps words to embedding vectors


From Lia Anggraini to Everyone: (12:31 PM)

  • 
how to deal with ambiguous words?


From Renny P. Kusumawardani to Everyone: (12:32 PM)

  • 
nn.Linear is a feed-forward linear NN


From Rian Adam Rajagede to Everyone: (12:33 PM)

  • 
@Renny, ooh I see... I thought we could make an nn.Embedding using nn.Linear, but I just realized that a look-up table is larger than a single linear layer
thank you


From Renny P. Kusumawardani to Everyone: (12:34 PM)

  • 
@Rian: you’re welcome! :)
Hope I’m not too far off :D
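To make this exchange concrete, a small sketch with toy sizes (vocab_size=5, dim=3 are illustrative): an nn.Embedding lookup selects rows of its weight matrix by integer ID, which gives the same result as pushing a one-hot vector through a weight-tied nn.Linear, only much cheaper.

```python
import torch
import torch.nn as nn

vocab_size, dim = 5, 3                        # toy sizes, for illustration
emb = nn.Embedding(vocab_size, dim)           # weight: (vocab_size, dim), indexed by ID
lin = nn.Linear(vocab_size, dim, bias=False)  # weight: (dim, vocab_size), used via matmul

ids = torch.tensor([0, 2])                    # two word IDs
looked_up = emb(ids)                          # (2, dim): rows 0 and 2 of emb.weight

# The same result via nn.Linear requires one-hot inputs:
one_hot = torch.zeros(2, vocab_size)
one_hot[torch.arange(2), ids] = 1.0
lin.weight.data = emb.weight.data.t()         # tie the weights so outputs match
projected = lin(one_hot)                      # equal to looked_up

print(torch.allclose(looked_up, projected))   # True
```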


From Tisa Siti Saadah to Everyone: (12:38 PM)

  • 
yes


From UNTARI NOVIA WISESTY to Everyone: (12:38 PM)

  • 
perfect


From Wawan Cenggoro to Everyone: (12:39 PM)

  • 
can you make the table of contents smaller?
perfect


From Anditya Arifianto to Everyone: (12:47 PM)

  • 
@Mawanda, about object-oriented programming: for model building, it is suggested to define a subclass of nn.Module with both an __init__ function and a forward function inside it. With this you will have more options in how you want your model trained.


From Anditya Arifianto to Everyone: (12:47 PM)

  • 
However, PyTorch also has Sequential and functional modes as well.
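A minimal sketch of the two styles mentioned here; the model and its sizes are hypothetical.

```python
import torch
import torch.nn as nn

# Style 1: subclass nn.Module with __init__ and forward,
# which gives full control over the forward computation.
class TinyClassifier(nn.Module):          # hypothetical model, for illustration
    def __init__(self, in_dim=10, hidden=32, n_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return self.fc2(h)

# Style 2: nn.Sequential, convenient for simple feed-forward stacks.
model_seq = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
```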


From Lya Hulliyyatus Suadaa to Everyone: (12:50 PM)

  • 
When using a seed, we will get the same result on every run. But in some papers, the researchers usually show accuracy ± standard deviation (from several runs). What do you think? Is it common to use several seeds in experiments? Or is it okay to show only the accuracy value without the deviation?
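A common pattern for the multi-seed reporting described here, sketched with a hypothetical train_and_eval standing in for the notebook's training loop:

```python
import random
import statistics
import torch

def train_and_eval(seed: int) -> float:
    # Hypothetical stand-in for a real training loop: fix the seeds,
    # train, and return test accuracy. A seeded dummy value keeps
    # this sketch runnable.
    torch.manual_seed(seed)
    random.seed(seed)
    return 0.85 + 0.02 * random.random()   # placeholder accuracy

seeds = [13, 42, 123, 2020, 31337]
accs = [train_and_eval(s) for s in seeds]
print(f"accuracy: {statistics.mean(accs):.3f} "
      f"± {statistics.stdev(accs):.3f} over {len(seeds)} runs")
```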


From Mawanda A to Everyone: (12:52 PM)

  • 
Thank you @Mr. Anditya and Genta


From Rian Adam Rajagede to Everyone: (12:52 PM)

  • 
I see you apply avg_pool to the embedding layer. Why?
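For context, a sketch of what average pooling over the embedding output does, assuming toy sizes: it collapses one vector per word into a single fixed-size sentence vector, which a plain feed-forward classifier can then consume.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(100, 8)               # toy vocabulary of 100, embedding dim 8
ids = torch.tensor([[5, 17, 3, 42]])     # one sentence of 4 word IDs
vectors = emb(ids)                       # (1, 4, 8): one vector per word
sentence = vectors.mean(dim=1)           # (1, 8): average over the words
```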


From Me to Everyone: (12:55 PM)

  • 
Currently, I am a TF user. Some of my friends told me that TF uses a static computational graph, so there are some problems we have to deal with, such as padding sentences. However, I see that your exercise still needs padding. Can you discuss this point?


From Renny P. Kusumawardani to Everyone: (12:57 PM)

  • 
Does padding in this case have anything to do with the batching?
Since the lengths of sentences in a batch are not necessarily the same


From Wawan Cenggoro to Everyone: (1:00 PM)

  • 
@teeradaj, I think padding is still needed in the current exercise because the model is a simple NN that requires a fixed input length.
Maybe you can explain more about the RNN case; I think @teeradaj is confused about this
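A sketch of that fixed-length requirement in practice, using PyTorch's stock pad_sequence utility (the ID values are made up):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Three "sentences" of different lengths, as ID tensors.
batch = [torch.tensor([4, 8, 15]),
         torch.tensor([16, 23]),
         torch.tensor([42, 4, 8, 15])]

# Pad with 0 (a reserved <pad> ID) so the batch becomes one rectangular tensor.
padded = pad_sequence(batch, batch_first=True, padding_value=0)
# tensor([[ 4,  8, 15,  0],
#         [16, 23,  0,  0],
#         [42,  4,  8, 15]])
```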


From Renny P. Kusumawardani to Everyone: (1:05 PM)

  • 
Thanks Genta! That helps clarify it for me :)


From Wawan Cenggoro to Everyone: (1:05 PM)

  • 
Where the input length is not fixed


From Me to Everyone: (1:05 PM)

  • 
yes, I am thinking about the RNN case


From Wawan Cenggoro to Everyone: (1:05 PM)

  • 
sure


From Me to Everyone: (1:05 PM)

  • 
thank you very much guys :)


From Wawan Cenggoro to Everyone: (1:06 PM)

  • 
@teeradaj: for the RNN case, yes, it is possible to avoid padding in PyTorch


From Me to Everyone: (1:09 PM)

  • 
@Wawan, even in the RNN case, do we still need to do 'padding' like in TF? Are there any pros/cons?


From Renny P. Kusumawardani to Everyone: (1:09 PM)

  • 
As far as I know, for sequential models, you don’t really have to
E.g. also for LSTMs


From Wawan Cenggoro to Everyone: (1:10 PM)

  • 
@teeradaj: for RNN, sometimes people still use padding just for convenience, I believe


From Me to Everyone: (1:12 PM)

  • 
@wawan @renny thank you :)


From Renny P. Kusumawardani to Everyone: (1:13 PM)

  • 
@Teeradaj: my pleasure, hope it helps :)


From Me to Everyone: (1:13 PM)

  • 
@renny yes, it helps :)


From Wawan Cenggoro to Everyone: (1:13 PM)

  • 
Usually, no significant difference can be seen between using padding or not for an RNN. But if you are a strict theorist, you might not like to use padding, because it introduces more noise in the data, especially since you are indeed able to avoid it when working with RNNs.
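A sketch of the no-padding option being described: because PyTorch builds its graph per forward pass, a stock nn.RNN accepts a different sequence length on every call, so processing sequences one at a time (batch size 1) needs no padding at all.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# Two sequences of different lengths, processed one at a time.
for length in (3, 7):
    x = torch.randn(1, length, 8)        # (batch=1, seq_len, input_size)
    out, h = rnn(x)
    print(out.shape)                     # (1, 3, 16), then (1, 7, 16)
```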


From Renny P. Kusumawardani to Everyone: (1:14 PM)

  • 
@Genta: could you comment more on L1 and L2 losses?
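The answer was given verbally; for reference, a minimal sketch of the two losses as PyTorch exposes them (L1 is mean absolute error; L2 is mean squared error, which penalizes large errors more heavily):

```python
import torch
import torch.nn as nn

pred = torch.tensor([2.0, 0.0])
target = torch.tensor([0.0, 0.0])

l1 = nn.L1Loss()(pred, target)     # mean |pred - target|    -> 1.0
l2 = nn.MSELoss()(pred, target)    # mean (pred - target)^2  -> 2.0
```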


From Me to Everyone: (1:15 PM)

  • 
@wawan thanks for your good point :)


From Wawan Cenggoro to Everyone: (1:16 PM)

  • 
you're welcome


From Rian Adam Rajagede to Everyone: (1:16 PM)

  • 
I just checked the RNN docs https://pytorch.org/docs/stable/generated/torch.nn.RNN.html#torch.nn.RNN and they say that the inputs to the RNN layer need to have the same size, "seq_len"


From Rian Adam Rajagede to Everyone: (1:17 PM)

  • 
sorry, not that it is required; rather, by default they need to have the same size


From Hariyanti Binti Mohd Saleh to Everyone: (1:17 PM)

  • 
Hi, boss Genta. I saw on GitHub there are some implementations coded as C++ extensions. How do we use those C++ extensions in PyTorch? Thanks.


From Renny P. Kusumawardani to Everyone: (1:17 PM)

  • 
@Genta: thanks so much!


From Wawan Cenggoro to Everyone: (1:22 PM)

  • 
@rian: I just checked it also; it seems that the RNN layer here is coded in the Keras/TF way of thinking, for convenient use by beginners. Thus, to actually have an RNN that accepts variable-length input, we need to build the model ourselves.
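One middle ground worth noting, without writing a custom RNN: PyTorch's pack_padded_sequence lets the stock nn.RNN skip the padded steps, so padding is used only for batching. A sketch with made-up sizes:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

rnn = nn.RNN(input_size=4, hidden_size=6, batch_first=True)

seqs = [torch.randn(5, 4), torch.randn(2, 4)]     # lengths 5 and 2
lengths = torch.tensor([5, 2])
padded = pad_sequence(seqs, batch_first=True)     # (2, 5, 4), second row padded

# Pack so the RNN skips the padded time steps entirely.
packed = pack_padded_sequence(padded, lengths, batch_first=True)
packed_out, h = rnn(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
```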


From Ebuka Oguchi to Everyone: (1:24 PM)

  • 
Would you advise someone already learning TensorFlow to also learn PyTorch, or just stick to one framework and perfect it?


From Wawan Cenggoro to Everyone: (1:24 PM)

  • 
Do you usually use average pooling to make a vector from the embedding layer? Are there any other, better techniques?


From Rian Adam Rajagede to Everyone: (1:24 PM)

  • 
@wawan, about custom RNN, sadly, that's true :)


From Renny P. Kusumawardani to Everyone: (1:24 PM)

  • 
Could you share how to use both Adam and SGD? Do you mean to use them simultaneously?


From Patrick to Everyone: (1:26 PM)

  • 
What is the benefit of using PyTorch over BERT/Transformer?


From Wawan Cenggoro to Everyone: (1:27 PM)

  • 
lol, I miss Theano


From Hariyanti Binti Mohd Saleh to Everyone: (1:27 PM)

  • 
haha, Wawan. Me too


From Anditya Arifianto to Everyone: (1:28 PM)

  • 
honorable mention: Theano


From Rian Adam Rajagede to Everyone: (1:28 PM)

  • 
I used Theano in my bachelor thesis :D One week of coding in Theano, then done in half a day using Keras


From Hariyanti Binti Mohd Saleh to Everyone: (1:29 PM)

  • 
What an experience, Rian & boss Andit.
Hardship first, pleasure later.


From Anditya Arifianto to Everyone: (1:29 PM)

  • 
haha


From Renny P. Kusumawardani to Everyone: (1:30 PM)

  • 
I see… so one after the other. Thanks Genta!
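A sketch of that "one after the other" pattern, with a hypothetical toy model and illustrative epoch counts: train with Adam first for fast early progress, then hand the same parameters to SGD.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # toy model, for illustration
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

def run_epochs(optimizer, n_epochs):
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

# "One after the other": Adam first, then switch to SGD on the same parameters.
run_epochs(torch.optim.Adam(model.parameters(), lr=1e-3), n_epochs=10)
run_epochs(torch.optim.SGD(model.parameters(), lr=1e-2), n_epochs=10)
```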


From Rian Adam Rajagede to Everyone: (1:30 PM)

  • 
@Hariyanti, that's true; at least it helped me finish my bachelor's :D