MLSS-Indo-Lab-Transfer-NLP
From ade romadhony to Everyone: (9:58 AM)
Hello all, good morning!
From Anung Ariwibowo to Everyone: (9:59 AM)
Good morning bu Ade
From Bagja 9102 Kurniawan to Everyone: (9:59 AM)
hello, good morning bu Ade
From FIF - Agung Toto Wibowo - 06810035 to Everyone: (9:59 AM)
Hello, good morning. I cannot hear anything. Is there some playable instrument?
From ade romadhony to Everyone: (10:00 AM)
I'm sorry, we don't have it :) OK, I'll start
From FIF - Agung Toto Wibowo - 06810035 to Everyone: (10:00 AM)
ah.. ok.. thank you
From Anung Ariwibowo to Everyone: (10:00 AM)
Yes
From FIF - Agung Toto Wibowo - 06810035 to Everyone: (10:00 AM)
yes we can
From Operator CLOVE 10 to Everyone: (10:00 AM)
yes
From Radityo Eko Prasojo (Ridho) to Everyone: (10:01 AM)
Morning everyone
Link to my presentation: https://docs.google.com/presentation/d/18B5-fsKGUZFhUmMS_H6-BdjdiAidaMOv85C5PSUnN7w/edit?usp=sharing
Link to the google colab: https://s.id/oIkDF
From ade romadhony to Everyone: (10:10 AM)
If anyone wants to ask a question, please type it here, or type it in the Rocket.Chat #practical8 channel
From Rian Adam Rajagede, S.Kom., M.Cs. to Everyone: (10:28 AM)
Q: In your slide no. 6, what is the meaning of "domain"? Can you give some examples? What is the difference between a different domain and a different task? I see, thank you!
From ade romadhony to Everyone: (10:31 AM)
colab link: https://colab.research.google.com/drive/1Csgd2k41jdH2uVoGUxJ04krrlqUxfrH1?usp=sharing#scrollTo=Ffe-7Iz_EREO
From Renny P. Kusumawardani to Everyone: (10:41 AM)
What is the size of your ‘small’ dataset?
From Tisa Siti Saadah to Everyone: (10:42 AM)
Could you use sampled data from the whole dataset for pre-training?
From Wawan Cenggoro to Everyone: (10:43 AM)
Is the Colab GPU you mentioned before the K80?
From Renny P. Kusumawardani to Everyone: (10:44 AM)
Thank you! Not too large indeed, yet it still takes two days. Sounds rather daunting :)
From Anung Ariwibowo to Everyone: (10:45 AM)
The bigger, the merrier :)
From Mawanda A to Everyone: (10:48 AM)
Where can I get the slide from this session?
From ade romadhony to Everyone: (10:48 AM)
slide: https://docs.google.com/presentation/d/18B5-fsKGUZFhUmMS_H6-BdjdiAidaMOv85C5PSUnN7w/edit#slide=id.g5888218f39_87_20
From Mawanda A to Everyone: (10:48 AM)
Thank you
From Wawan Cenggoro to Everyone: (11:03 AM)
You said that bidirectional context is important. But I think the GPT architecture family uses only left-to-right context, yet it still achieves good performance. What are your thoughts?
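For context on the bidirectional vs. left-to-right contrast raised in this question, here is a minimal sketch of the two attention-mask shapes involved (the sequence length and use of NumPy are illustrative assumptions, not from the session):

```python
import numpy as np

seq_len = 4  # toy sequence length, chosen for illustration

# GPT-style causal mask: token i may attend only to tokens j <= i
# (left-to-right context only).
causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))

# BERT-style bidirectional mask: every token attends to every token,
# so each position sees both left and right context.
bidirectional = np.ones((seq_len, seq_len), dtype=bool)

print(causal.astype(int))
```

The trade-off the question points at: the causal mask lets GPT-style models be trained as straightforward next-token predictors, while the bidirectional mask gives BERT-style encoders full-sentence context at the cost of needing a masked-token objective.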
From Lya Hulliyyatus Suadaa to Everyone: (11:06 AM)
Usually we add [CLS] at the beginning of the input; would the result be the same if we added [CLS] at the end of the input?
From Rian Adam Rajagede, S.Kom., M.Cs. to Everyone: (11:06 AM)
If I understand correctly, since the pretrained model is a language model, can we say that we use the language model as a "feature extractor" and then add a task-specific layer on top of it, like in the computer vision domain?
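The pattern this question describes can be sketched as follows. This is a toy illustration, assuming PyTorch; the tiny encoder standing in for a pretrained language model, and all layer sizes, are hypothetical:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained language-model encoder (hypothetical sizes;
# in practice this would be something like a BERT encoder).
pretrained_encoder = nn.Sequential(
    nn.Embedding(1000, 32),  # token ids -> embeddings
    nn.Linear(32, 32),       # toy contextualizing layer
)

# Freeze the pretrained weights: the encoder now acts purely as a
# feature extractor, as in the computer-vision analogy.
for p in pretrained_encoder.parameters():
    p.requires_grad = False

# Task-specific layer trained on top (e.g. 3-class classification).
task_head = nn.Linear(32, 3)

token_ids = torch.tensor([[1, 5, 9, 2]])          # one toy sentence
features = pretrained_encoder(token_ids).mean(1)  # pooled sentence features
logits = task_head(features)                      # task-specific prediction
print(logits.shape)  # torch.Size([1, 3])
```

This is the "feature extraction" end of the spectrum; the alternative discussed in the session, fine-tuning, would leave the encoder's weights trainable as well.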
From Wawan Cenggoro to Everyone: (11:08 AM)
Maybe it would be better for Ridho to read this question himself
From Renny P. Kusumawardani to Everyone: (11:08 AM)
When you said 16 GB of RAM, did you mean the VRAM, and was it for the pre-training stage? What are your thoughts on the resources we need for the adaptation stage?
From Fitri Indra Indikawati to Everyone: (11:10 AM)
What do you think about the recent hype around the GPT-3 model? Why do NLP models still have difficulty understanding the semantics of sentences, despite having really big training data, many parameters, and impressive performance?
From Renny P. Kusumawardani to Everyone: (11:12 AM)
Thank you!
From Fitri Indra Indikawati to Everyone: (11:16 AM)
thank you
From Vishal Gupta to Everyone: (11:17 AM)
Where can I access the code/notebook?
From ade romadhony to Everyone: (11:17 AM)
notebook: https://colab.research.google.com/drive/1Csgd2k41jdH2uVoGUxJ04krrlqUxfrH1?usp=sharing#scrollTo=Ffe-7Iz_EREO
From Lya Hulliyyatus Suadaa to Everyone: (11:30 AM)
Do you have any experience implementing a copy mechanism in a fine-tuned model?
From Me to Everyone: (11:31 AM)
Do you have any experience with knowledge graph embeddings? Can you discuss the state of the art in this area? Thank you very much.
From Ebuka Oguchi to Everyone: (11:35 AM)
Thank you
From Hariyanti Binti Mohd Saleh to Everyone: (11:35 AM)
Thank you for detail explanation
From Robby Hardi to Everyone: (11:35 AM)
Thank you
From Rian Adam Rajagede, S.Kom., M.Cs. to Everyone: (11:35 AM)
thank you!