The Guru is Brahma, the Guru is Vishnu, the Guru is the god Maheshwara; the Guru is verily the Supreme Brahman; salutations to that revered Guru!
Introduction to OpenAI GPT Models
Time taken: 14m 28s (Submitted)
Q1 of 4
Identify which of the following divides the data space into individual classes by learning the boundaries between them.
Generative models
Discriminative models
Generalized models
Polynomial models
Option Discriminative models is correct
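As a quick illustration of the idea behind this answer, the minimal sketch below fits a logistic-regression classifier that learns a boundary separating two classes directly from labelled points. The scikit-learn library and the synthetic data are assumptions for illustration, not part of the course material.

# Minimal sketch: a discriminative model (logistic regression) learns the
# boundary between two classes rather than modelling how each class is generated.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic 2-D data with two classes (assumed example data).
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_informative=2, random_state=0)

clf = LogisticRegression().fit(X, y)

# The learned coefficients define the separating boundary w.x + b = 0.
print("boundary weights:", clf.coef_, "bias:", clf.intercept_)
print("predicted class for a new point:", clf.predict([[0.5, -1.0]]))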
Q2 of 4
Identify the process that deciphers and analyzes spoken and written language.
Natural Language Understanding
Natural Language Generation
Natural Language Invention
Natural Language Development
Option Natural Language Understanding is correct
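For context, a common hands-on way to see Natural Language Understanding in action is a pretrained sentiment classifier. The sketch below uses the Hugging Face transformers pipeline; the library, the default model downloaded on first run, and the example sentence are assumptions, not part of the course.

# Minimal NLU sketch: analysing the sentiment of a written sentence
# with a pretrained model via the Hugging Face `pipeline` helper.
from transformers import pipeline

nlu = pipeline("sentiment-analysis")  # downloads a default pretrained model

result = nlu("The course explains GPT models really well.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]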
Q3 of 4
What type of input data is processed by GPT models?
Image data
Video data
Text data
Text and Image data
Option Text and Image data is correct
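To illustrate text-plus-image input, the sketch below sends both a question and an image URL to a multimodal chat model through the OpenAI Python SDK. The model name, the placeholder image URL, and the API-key setup are assumptions for illustration only.

# Sketch: passing text and an image together to a multimodal GPT model.
# Assumes the `openai` Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed multimodal model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sample.jpg"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)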
Q4 of 4
State True/False:
GPT models can be used to get real-time news.
True
False
Option False is correct
Time taken: 13m 42s (Submitted)
Q1 of 4
Identify the pre-trained image models from the list below.
AlexNet
GoogLeNet
BERT
BLOOM
Option AlexNet is correct
Option GoogLeNet is correct
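As a side note, both AlexNet and GoogLeNet ship with pretrained ImageNet weights in torchvision, so they can be loaded in a couple of lines. The sketch below assumes torchvision 0.13 or later (the weights argument) is installed.

# Sketch: loading the pretrained image models mentioned in this question.
import torch
from torchvision import models

alexnet = models.alexnet(weights="IMAGENET1K_V1")      # pretrained AlexNet
googlenet = models.googlenet(weights="IMAGENET1K_V1")  # pretrained GoogLeNet

alexnet.eval()
with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)   # a fake ImageNet-sized image
    logits = alexnet(dummy)
print("AlexNet output shape:", logits.shape)  # (1, 1000) ImageNet classes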
Q2 of 4
State True/False for the statement below:
GPT models can be used to create classification labels based on what they already understand about the input text being classified.
True
False
Option True is correct
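The sketch below shows the idea behind this question: asking a GPT chat model to assign a label to a piece of text with no task-specific training, purely from what it already understands. The model name, the label set, and the prompt wording are illustrative assumptions.

# Sketch: zero-shot text classification with a GPT chat model.
# Assumes the `openai` SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
text = "The battery dies within two hours and support never replied."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Classify the user's text as 'complaint', 'praise' or 'question'. "
                    "Reply with the label only."},
        {"role": "user", "content": text},
    ],
)
print(response.choices[0].message.content)  # expected: complaint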
Q3 of 4
Select all the TRUE statements about Transformers from the options below.
They cannot solve sequence-to-sequence tasks.
The complexity of the model is based on the number of parameters it is trained on.
They use an attention mechanism that allows training on larger datasets.
They have Encoder and Decoder components in their architecture.
Option "The complexity of the model is based on the number of parameters it is trained on" is correct
Option "They use an attention mechanism that allows training on larger datasets" is correct
Option "They have Encoder and Decoder components in their architecture" is correct
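To make the attention statement concrete, here is a minimal scaled dot-product attention function written with NumPy. It is a didactic sketch of the core operation inside Transformer encoder and decoder blocks, not the full architecture; the toy data is made up.

# Sketch: scaled dot-product attention, the core operation of the Transformer.
# attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Toy example: 3 tokens, 4-dimensional embeddings (random data for illustration).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)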
Q4 of 4
Identify the activities for which the transfer learning technique is used.
Feature Extraction
Model Deployment
Fine Tuning
Data Creation
Option Feature Extraction is correct
Option Fine Tuning is correct
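The sketch below illustrates both correct options on a pretrained torchvision model: freezing the backbone for feature extraction, then replacing and training only the final layer, which is the simplest form of fine tuning. The choice of ResNet-18 and the five-class target task are assumptions.

# Sketch: transfer learning with a pretrained ResNet-18 (assumed example model).
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained backbone

# Feature extraction: freeze every pretrained weight.
for param in model.parameters():
    param.requires_grad = False

# Fine tuning: swap the classifier head for a new 5-class task;
# only this new layer will be updated during training.
model.fc = nn.Linear(model.fc.in_features, 5)      # assumed 5 target classes
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print("layers that will be trained:", trainable)   # ['fc.weight', 'fc.bias']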
Q1 of 10
Identify all the limitations of GPT models.
They are expressive.
They may give incorrect answers sometimes.
They lack empathy.
They are unable to understand complex contexts.
Q2 of 10
State True/False: OpenAI GPT models can be used for automated code generation and code completion tasks.
True
False
Q3 of 10
Which OpenAI GPT model is specifically designed for image generation and manipulation tasks, in addition to text generation?
GPT-4 Turbo
DALL-E
Moderation
Embeddings
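For reference, the image-generation capability asked about here is exposed through the Images endpoint of the OpenAI Python SDK. The model name, prompt, and size below are illustrative assumptions.

# Sketch: generating an image from a text prompt with DALL-E.
# Assumes the `openai` SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",                    # assumed model name
    prompt="A watercolor painting of a robot reading a book",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image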
Q4 of 10
Which of the following language tasks can be performed by GPT?
Summarization
Translation
Writing articles
Text classification
Q5 of 10
Which of the following are examples of pretrained models?
Google's BERT
Facebook's RoBERTa
OpenAI's GPT
Microsoft's Turing-NLG
Q6 of 10
In the context of GPT models, what does "few-shot learning" refer to?
Training the model on a large dataset for multiple epochs
Training the model on a very small dataset
Fine-tuning the model on a specific task
Pretraining the model on a wide range of tasks
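Without stating which option the course marks as correct, "few-shot" in the GPT literature is usually demonstrated through prompting: a handful of solved examples are placed in the prompt and the model continues the pattern with no weight updates. The sketch below only builds such a prompt string; the labelled examples are made up.

# Sketch: building a few-shot prompt. Solved examples are shown in the
# prompt itself and the model is asked to continue the pattern.
examples = [                       # made-up labelled examples
    ("I loved this phone", "positive"),
    ("The screen cracked on day one", "negative"),
]
query = "Battery life is fantastic"

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)  # this string would be sent to a GPT model as-is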
Q7 of 10
State True/False: A pretrained GPT model can be used to predict only one next word for a given input sentence.
True
False
Q8 of 10
GPT is a neural-network-based deep learning model named …................
Transformers
Convolution Neural Network
Recurrent Neural Network
Transfer
Q9 of 10
Which of the following basic constructs of GPT OpenAI relates to user input?
Prompt
Completion
Tokens
Response
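These constructs can be seen directly in code: the prompt is the user input, the completion is the text the model returns, and tokens are the chunks both are split into. The token counting below uses the tiktoken library; the package and the "cl100k_base" encoding name are assumptions for illustration.

# Sketch: prompt vs. completion vs. tokens.
import tiktoken

prompt = "Explain transfer learning in one sentence."   # user input = prompt
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode(prompt)          # tokens: integer IDs the model actually reads
print("token IDs:", tokens)
print("number of tokens:", len(tokens))
# The completion would be the token sequence the model generates in response,
# decoded back to text with enc.decode(...).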
Q10 of 10
State True/False: GPT uses a transformer model based on the attention mechanism to generate the output text sequence.
True
False
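As a closing illustration of attention-based sequence generation, the sketch below uses the openly available GPT-2 transformer from Hugging Face to continue a prompt token by token. The model name and prompt are assumptions, and the first run downloads the model weights.

# Sketch: generating an output sequence with an attention-based transformer (GPT-2).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("GPT models are", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))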