The Guru is Brahma, the Guru is Vishnu, the Guru is Lord Maheshwara; the Guru is verily the Supreme Brahman; salutations to that revered Guru!
Q1 of 4
Which of the following sections in OpenAI helps in experimenting and iterating with different custom use cases?
Playground ✓
Resources
Examples
Documentation
Option Playground is correct
Explanation :
Yes!! You got the right answer!
Q2 of 4
The OpenAI Playground supports users to experiment with various presets comprising predefined training text and the corresponding settings.
True ✓
False
Option True is correct
Explanation :
Yes!! You got the right answer!
Q3 of 4
Users can interact with the OpenAI APIs only using the Playground.
True
False ✓
Option False is correct
Explanation :
Yes!! You got the right answer!
Q4 of 4
Which of the following are the basic constructs of the OpenAI API?
prompt
completion
tokens
All the given options ✓
Option All the given options is correct
Explanation :
Yes!! You got the right answer!
Q1 of 3
The most capable engine of all the different flavors of the GPT-3 model provided in the OpenAI API is:
ada
babbage
curie
davinci ✓
Option davinci is correct
Explanation :
Yes!! You got the right answer!
Q2 of 3
Lower values for the temperature parameter in the parameter settings of the GPT-3 API make the model:
deterministic ✓
creative
Random
None of the above
Option deterministic is correct
Explanation :
Yes!! You got the right answer!
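The effect of temperature can be sketched with plain Python: the model's logits are divided by the temperature before the softmax, so a low temperature sharpens the distribution toward the top token (near-deterministic) while a high temperature flattens it (more random/creative). A minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate tokens
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.1)   # sharply peaked: near-deterministic
high = softmax_with_temperature(logits, 2.0)  # flatter: more "creative"

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.1 almost all probability mass lands on the highest-scoring token; at temperature 2.0 the mass spreads across the candidates.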
Q3 of 3
The parameter used to control the response length in the GPT-3 API from OpenAI is:
temperature
max_tokens ✓
frequency_penalty
logprobs
Option max_tokens is correct
Explanation :
Yes!! You got the right answer!
Q1 of 3
Which of the following techniques is an efficient way to encode words in a language model?
Label encoding
Bag of words
TF-IDF vectorizer
Word2Vec ✓
Option Word2Vec is correct
Explanation :
Yes!! You got the correct answer!
Q2 of 3
Shashi wants to translate an English sentence to French, and she has some training data for English-French translation.
Which of the following techniques can she use to perform this ML task?
RNNs
GRUs
LSTMs
Transformers
All the above ✓
Option All the above is correct
Explanation :
Yes!! You got the correct answer!
Q3 of 3
What are the advantages of using the transfer learning technique in model building for an NLP task?
Much less data required
Reduces the training time
Speeds up the model building process
All the above ✓
Option All the above is correct
Explanation :
Yes!! You got the correct answer!
Q1 of 6
What should be the characteristics of a numerical representation of textual data?
It should be semantically meaningful
Should provide informative input representation
Should have a massive impact on overall model performance
All of the above ✓
Option All of the above is correct
Explanation :
All three options are characteristics of a numerical representation of textual data.
Q2 of 6
Why don't we use one-hot encoding to represent words as vectors?
With every increase of ‘n’ in the vocabulary, the number of features will also increase by ‘n’. ✓
With every increase of ‘n’ in the vocabulary, the number of features will also increase by ‘n/2’.
It is not possible to create vectors for more than 10,000 words.
None of the above
Option With every increase of ‘n’ in the vocabulary, the number of features will also increase by ‘n’. is correct
Explanation :
Correct!
Q3 of 6
State true or false:
We can think of word embedding algorithms like word2vec as unsupervised feature extractors for words.
True ✓
False
Option True is correct
Explanation :
Correct!
Q4 of 6
State true or false:
RNNs (recurrent neural networks) do not parse the sentences word by word in a sequential manner.
True
False ✓
Option False is correct
Explanation :
Correct!
Q5 of 6
Fill in the blanks:
For every odd index on the input vector, we create a vector using the ________ function. For every even index, we create a vector using the ________ function.
sine, cosine
cosine, sine ✓
cosine, tan
sigmoid, relu
Option cosine, sine is correct
Explanation :
Correct!
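The odd/even rule above describes the sinusoidal positional encoding from the Transformer architecture: even indices use sine, odd indices use cosine. A minimal sketch, assuming the standard 10000^(2i/d_model) frequency term (position and dimension values below are purely illustrative):

```python
import math

def positional_encoding(position, d_model):
    """Sinusoidal positional encoding for a single position:
    even indices use sine, odd indices use cosine."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

pe = positional_encoding(position=3, d_model=8)
# Every value stays bounded in [-1, 1], one desired property of the encoding
print(all(-1.0 <= v <= 1.0 for v in pe))  # → True
```

Because sine and cosine are periodic and bounded, each position gets a unique, deterministic vector that the model can add to its input embeddings.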
Q6 of 6
Which of the following is the core idea behind word embeddings?
words that are used in different contexts will have similar representations
words that are used in similar contexts will have similar representations ✓
words that are used in similar contexts will have different representations
words that are used in different contexts will have different representations
Option "words that are used in similar contexts will have similar representations" is correct
Explanation :
Correct!
Q1 of 4
Which of the following machine learning techniques can be trained faster with a smaller amount of training data?
N-Grams approach
Sequence-to-sequence modeling using LSTM
Transformer Architecture ✓
Word Sense disambiguation
Option Transformer Architecture is correct
Explanation :
Yes!! You got the correct answer!
Q2 of 4
What do you mean by transformation-based learning?
The technique in which rules are manually induced from data
The technique in which rules are automatically induced from data ✓
The technique in which rules are transformed into another data
The technique in which rules are not used.
Option The technique in which rules are automatically induced from data is correct
Explanation :
Yes!! You got the correct answer!
Q3 of 4
What is the most important concept that makes the transformer model so powerful?
Encoder-decoder concept
Self-attention ✓
Artificial Neural Network
All of the above
Option Self-attention is correct
Explanation :
Yes!! You got the correct answer!
Q4 of 4
Which type of attention is used to reduce information leakage while training a model?
Encoder-decoder attention
Self-attention in input sequence
Self-attention in output sequence ✓
Both a and b
Option Self-attention in output sequence is correct
Explanation :
Yes!! You got the correct answer!
Problem Statement
Problem: Generating SQL commands from English text
Steps:
· Load the key stored in a JSON file
· Create an instance of the GPT-3 model
· Add a few examples to make GPT-3 understand what we are trying to accomplish.
· Test whether GPT-3 can generate SQL commands for a given English text.
import openai
import json

# Load the API key from the JSON file
with open('GPT-3-SECRET-KEY.json') as f:
    key = json.load(f)
openai.api_key = key["OPEN_API_KEY"]

class GPTSQL:
    def __init__(self, engine="davinci", temperature=0.5, max_tokens=100):
        self.engine = engine
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.examples = []

    def add_example(self, example):
        self.examples.append(example)

    def generate_sql(self, prompt):
        # Combine examples into a formatted string
        examples_str = "\n".join(
            f"###\n{example['english']}\n{example['sql']}" for example in self.examples
        )
        # Construct the final prompt with examples and the new query
        full_prompt = f"{examples_str}\n###\n{prompt}\n"
        response = openai.Completion.create(
            engine=self.engine,
            prompt=full_prompt,
            temperature=self.temperature,
            max_tokens=self.max_tokens,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0,
            stop=["###"]
        )
        # Extract the generated SQL command
        return response.choices[0].text.strip()

# Example data class (for readability and structuring)
class Example:
    def __init__(self, english, sql):
        self.english = english
        self.sql = sql

    def to_dict(self):
        return {
            "english": self.english,
            "sql": self.sql
        }

# Initialize the GPTSQL object
gpt_sql = GPTSQL(engine="davinci", temperature=0.5, max_tokens=100)

# Add examples to the model
gpt_sql.add_example(Example(
    'Find unique values of DEPARTMENT from Employee table.',
    'SELECT DISTINCT DEPARTMENT FROM Employee;'
).to_dict())
gpt_sql.add_example(Example(
    "Find the count of employees in department ECE.",
    "SELECT COUNT(*) FROM Employee WHERE DEPARTMENT = 'ECE';"
).to_dict())

# Function to test the model with a new English query
def test_query(query):
    sql_command = gpt_sql.generate_sql(query)
    print(f"English Query: {query}")
    print(f"Generated SQL: {sql_command}")

# Testing the model with different queries
test_query("What is the total number of employees?")
test_query("List all the departments in the company.")
test_query("Show the names of all employees in the HR department.")
Problem Statement
Problem: Extracting information from a passage using GPT-3
Steps:
· Load the key stored in a JSON file
· Input the passage into the prompt and provide some instructions to GPT-3 in natural language.
· prompt="Someone asked me to extract information from this passage:\n\"\"\"\nGenerative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2] GPT-3's full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020, and was in beta testing as of July 2020,[3] is part of a trend in natural language processing (NLP) systems of pre-trained language representations.[1] Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters or less than a tenth of GPT-3s.[4]\n\"\"\"\nI noted down few points after reading the passage:\n\"\"\"\n"
· Test whether GPT-3 can extract information from a given passage.
import openai
import json

# Load the API key from the JSON file
with open('GPT-3-SECRET-KEY.json') as f:
    key = json.load(f)
openai.api_key = key["OPEN_API_KEY"]

# Define the passage
passage = """
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2] GPT-3's full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020, and was in beta testing as of July 2020,[3] is part of a trend in natural language processing (NLP) systems of pre-trained language representations.[1] Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters or less than a tenth of GPT-3s.[4]
"""

# Define the prompt with instructions
prompt = f"Someone asked me to extract information from this passage:\n\"\"\"\n{passage}\n\"\"\"\nI noted down few points after reading the passage:\n\"\"\"\n"

# Make the API call
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.3,
    max_tokens=150,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\"\"\""]
)

# Extract and print the response
extracted_info = response.choices[0].text.strip()
print("Extracted Information:")
print(extracted_info)
Problem Statement
Problem: Create a recipe from a list of ingredients.
Steps:
1. Import the necessary packages (openai, json).
2. Store your private API key in a JSON file and load it into ‘openai.api_key’.
3. Inside the prompt, provide the list of ingredients on which the recipe will be generated. Here, for instance, we have taken the example of Caramel Custard.
4. To view the result of your prompt, you need to drill down to the “text” key.
import openai
import json

# Load the secret key from a JSON file
with open('GPT-3-SECRET-KEY.json') as f:
    key = json.load(f)
openai.api_key = key["OPEN_API_KEY"]

# Define the ingredients and the prompt
ingredients_list = """
Caramel Custard
Ingredients:
- Milk
- Custard Powder
- Sugar
- Milkmaid
Steps to be followed:
"""

# Make the API call to generate the recipe
response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt=f"Write a recipe based on these ingredients and instructions:\n{ingredients_list}",
    temperature=0,
    max_tokens=150,
    top_p=1.0,
    frequency_penalty=0.5,
    presence_penalty=0.5
)

# Print the generated recipe
print("Generated Recipe:")
print(response['choices'][0]["text"].strip())
Problem Statement
Problem: Generate an essay outline from a topic.
Steps:
1. Import the necessary packages (openai, json).
2. Store your private API key in a JSON file and load it into ‘openai.api_key’.
3. Inside the prompt, type in the essay topic on which you want your outline to be generated.
Here, for instance, we have taken the example of the American legendary Metal band Metallica.
4. To view the result of your prompt, you need to drill down to the “text” key.
import openai
import json

# Load the API key from a JSON file
with open('secret_key.json') as f:
    key = json.load(f)
openai.api_key = key["secret_key"]

# Define the essay topic and prompt
prompt = "Create an outline for an essay about Metallica and their contribution to metal music:\n\nI: Introduction"

# Make the API call to generate the essay outline
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.7,
    max_tokens=150,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0
)

# Print the generated outline
print("Generated Essay Outline:")
print(response['choices'][0]["text"].strip())
Problem Statement
Turn a product description into ad copy
Steps:
· Import the necessary packages (openai, json).
· Store your private API key in a JSON file and load it into ‘openai.api_key’.
· Inside the prompt, type in the description of the product you want to generate an advertisement for.
Here, for instance, we have taken the example of an offroad motorbike XR350. We will let the system design an advertisement for this bike.
To view the result of your prompt, you need to drill down to the “text” key.
import openai
import json

# Load the API key from a JSON file
with open('secret_key.json') as f:
    key = json.load(f)
openai.api_key = key["secret_key"]

# Define the product description and the prompt
product_description = """
XR350 motorcycle for offroad and mountain riders. High performance 349cc engine. Available in 3 color segments. Customization available.
"""

prompt = f"""Write a creative ad for the following product to run on Instagram:
\"\"\"\"\"\"
{product_description}
\"\"\"\"\"\"
This is the ad I wrote for Instagram aimed at motorbiking lovers:
\"\"\"\"\"\"
"""

# Make the API call to generate the ad copy
response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt=prompt,
    temperature=0.8,
    max_tokens=150,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\"\"\"\"\"\""]
)

# Print the generated ad copy
print("Generated Ad Copy:")
print(response['choices'][0]["text"].strip())
Problem Statement
Problem: Generate a list of interview questions.
Steps:
1. Import the necessary packages (openai, json).
2. Store your private API key in a JSON file and load it into ‘openai.api_key’.
3. Inside the prompt, type in the role or topic on which you want interview questions to be generated.
4. To view the result of your prompt, you need to drill down to the “text” key.
import openai
import json

# Load the API key from a JSON file
with open('secret_key.json') as f:
    key = json.load(f)
openai.api_key = key["secret_key"]

# Define the prompt for generating interview questions
prompt = "Create a list of questions for my interview with a data scientist:\n\n1."

# Make the API call to generate the list of interview questions
response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt=prompt,
    temperature=0.8,
    max_tokens=200,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n\n"]
)

# Print the generated questions
print("Generated Interview Questions:")
print(response['choices'][0]["text"].strip())
You said:
The range of values the logit_bias parameter can take is given by:
-1 to +1
-100 to +100
-5 to +5
None of the above
ChatGPT said:
The correct answer is -100 to +100. The logit_bias parameter can take values in this range, where higher values significantly increase or decrease the probability of specific tokens being generated.
You said:
How many tokens are currently supported by OpenAI for the combined text prompt and generated completion?
1024
2048
4096
Unlimited
ChatGPT said:
The current maximum number of tokens supported by OpenAI for the combined text prompt and generated completion is 4096 for models like GPT-3.5 Turbo. Some models like GPT-4 have larger limits, up to 8192 or 32,768 tokens, depending on the model variant.
You said:
Choose the correct order of steps for positional encoding:
1. For every even index, create a vector using the sin function.
2. For every odd index on the input vector, create a vector using the cos function.
3. Add these vectors to their corresponding input embeddings.
1,2,3
2,1,3
3,1,2
1,3,2
ChatGPT said:
The correct order of steps for positional encoding is 1, 2, 3:
1: For every even index, create a vector using the sin function.
2: For every odd index on the input vector, create a vector using the cos function.
3: Add these vectors to their corresponding input embeddings.
Therefore, the answer is 1, 2, 3.
You said:
What characteristics should this encoding have?
It should output a unique encoding for each time-step (word's position in a sentence)
Distance between any two time-steps should be consistent across sentences with different lengths
Our model should generalize to longer sentences without any effort
Its values should be bounded
It must be deterministic
All of the above
ChatGPT said:
The correct answer is All of the above. Positional encoding in models like the Transformer should have the following characteristics:
It should output a unique encoding for each time-step (word's position in a sentence).
The distance between any two time-steps should be consistent across sentences with different lengths.
The model should generalize to longer sentences without any effort.
Its values should be bounded.
It must be deterministic.
These characteristics ensure that the model effectively captures the relative positions of tokens in the input sequence and can generalize well to sequences of varying lengths.
You said:
State true or false: In addition to the Playground, OpenAI supports users to interact with the API through HTTP requests from different programming languages like Python, C#, Crystal, Dart, Go, Java, Node.js, Ruby and Unity.
True
False
ChatGPT said:
The statement is True. OpenAI supports users to interact with the API through HTTP requests from different programming languages like Python, C#, Crystal, Dart, Go, Java, Node.js, Ruby, and Unity, among others. This allows developers to integrate AI capabilities into their applications using various programming languages and frameworks.
You said:
What is the use of the softmax function in the decoder part of the model?
It helps to map sequences of different lengths to each other
It helps to compute the context vector
It helps to find out the highest probability word from the vocabulary
All of the above
ChatGPT said:
The correct answer is: It helps to find out the highest probability word from the vocabulary.
In the decoder part of the model, the softmax function is used to convert the output logits into a probability distribution over the vocabulary. This allows the model to select the word with the highest probability as the next word in the sequence.
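That selection step can be sketched in a few lines of Python. The vocabulary and logits below are made up for illustration; softmax normalizes the logits into probabilities, and greedy decoding picks the argmax:

```python
import math

# Hypothetical vocabulary and decoder output logits for the next position
vocab = ["cat", "sat", "mat", "the"]
logits = [1.2, 3.4, 0.7, 2.1]

# Softmax turns the logits into a probability distribution over the vocabulary
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: pick the highest-probability word
next_word = vocab[probs.index(max(probs))]
print(next_word)  # → sat
```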
You said:
Complete the following sentence: In positional encoding, ________.
we enhance the model's input to inject the order of words
words that are used in similar ways will be placed close together
it is not possible to create positions for more than 10,000 words
None of the above
ChatGPT said:
The best completion for the sentence is: we enhance the model's input to inject the order of words.
In positional encoding, the technique is used to provide information about the relative or absolute position of words in a sequence, thereby enhancing the model's ability to understand the order of words.
You said:
Which of the following characteristics of Transformers make them more efficient compared to other Seq2Seq techniques?
a. Attention mechanism
b. Parallel architecture
c. Long short-term memory
d. Transfer learning
Only a
Both a and b
a, b and d
All of the above
ChatGPT said:
The correct answer is Both a and b.
Explanation:
a. Attention mechanism: Allows the model to focus on specific parts of the input sequence, improving efficiency and effectiveness in handling longer sequences.
b. Parallel architecture: Enables the processing of sequences in parallel rather than sequentially, making Transformers more computationally efficient.
c. Long short-term memory: This is related to LSTM networks, not Transformers.
d. Transfer learning: While Transformers can benefit from transfer learning, this is a general concept applicable to various types of models, not a specific characteristic of Transformers that makes them more efficient compared to other Seq2Seq techniques.
Therefore, the correct characteristics are a and b.
You said:
Which of the following is the official programming language supported by OpenAI?
Ruby
Python
Java
Unity
ChatGPT said:
The official programming language supported by OpenAI is Python.
While OpenAI's API can be accessed using many different programming languages, their official SDKs, libraries, and most documentation are primarily written and provided in Python.
You said:
What are the attention vectors used by the self-attention layer in each decoder?
Key Vector
Query Vector
Value Vector
All of the above
ChatGPT said:
The correct answer is All of the above.
Explanation:
In the self-attention layer of each decoder in the Transformer model, the following attention vectors are used:
Key Vector: Represents the features of each input position and is used to match with the Query vectors.
Query Vector: Represents the features of each position where attention is being computed, used to query the Key vectors.
Value Vector: Provides the actual information to be attended to, associated with each Key vector.
These three vectors are critical components of the self-attention mechanism in Transformer models.
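How the three vectors combine can be sketched as toy scaled dot-product attention in plain Python. This is a single-head sketch of the standard softmax(Q·Kᵀ/√d_k)·V formulation with made-up numbers, not any particular library's API:

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Toy single-head attention: weights = softmax(Q.K^T / sqrt(d_k)), output = weights.V."""
    d_k = len(K[0])
    out, all_weights = [], []
    for q in Q:
        # Query-key dot products, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        # Softmax over the scores (shifted by the max for stability)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        weights = [e / sum(exps) for e in exps]
        all_weights.append(weights)
        # Weighted sum of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out, all_weights

# Three positions, d_k = 2 (illustrative numbers only)
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

output, weights = scaled_dot_product_attention(Q, K, V)
print([round(w, 3) for w in weights[0]])  # attention weights for position 0 sum to 1
```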
You said:
The benefit of using sin and cos functions for positional encoding is:
by controlling the frequency parameters of these functions, we can manipulate them between [0,1] according to our requirements
by controlling the frequency parameters of these functions, we can manipulate them between [0,180] according to our requirements
by controlling the frequency parameters of these functions, we can manipulate them between [-1,1] according to our requirements
None of the above
ChatGPT said:
The correct answer is: by controlling the frequency parameters of these functions, we can manipulate them between [-1,1] according to our requirements.
In positional encoding, sine and cosine functions are used with varying frequencies to generate unique and bounded positional encodings for each position in the sequence. These functions produce values within the range of [-1, 1], which helps in distinguishing positions while ensuring the values remain within this range.
You said:
Which parameter helps in modifying the likelihood of a specific token in the completion text generated by the GPT-3 API?
logit_bias
max_tokens
logprobs
temperature
ChatGPT said:
The parameter that helps in modifying the likelihood of a specific token in the completion text generated by the GPT-3 API is logit_bias.
Explanation:
logit_bias: Allows you to adjust the probability of specific tokens by applying a bias to their logits, effectively increasing or decreasing their likelihood of being generated.
max_tokens: Controls the maximum length of the generated text but does not directly modify token likelihood.
logprobs: Provides the log probabilities of tokens but does not modify them.
temperature: Controls the randomness of the output by scaling the logits, affecting the probability distribution but not specific token biases directly.
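The arithmetic behind logit_bias can be illustrated locally: adding a bias to one token's logit before the softmax raises (or lowers) that token's probability. The logits below are made up; in the real API you pass the bias as logit_bias={token_id: bias} and the adjustment happens server-side:

```python
import math

def softmax(logits):
    m = max(logits)  # shift by the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return [e / sum(exps) for e in exps]

# Hypothetical logits for three candidate tokens, all equally likely
logits = [1.0, 1.0, 1.0]
baseline = softmax(logits)

# Apply a bias of +5 to token 0, mimicking logit_bias={token_id: 5}
biased = softmax([logits[0] + 5, logits[1], logits[2]])

print(round(baseline[0], 3))  # → 0.333
print(round(biased[0], 3))    # close to 1 after the positive bias
```

A strongly negative bias (e.g. -100 in the API) works the other way, effectively banning the token.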
You said:
Complete the following sentence with the appropriate option: Within the high-dimensional semantic space, ________.
words that are used in similar ways will be placed close together
the vectors/points representing similar words will cluster together
the distance between vector representations of similar words will be low
Any of the above can be used
ChatGPT said:
The appropriate completion for the sentence is: Any of the above can be used.
In a high-dimensional semantic space, the following statements are true:
Words that are used in similar ways will be placed close together.
The vectors/points representing similar words will cluster together.
The distance between vector representations of similar words will be low.
All these statements describe aspects of how semantic similarity is represented in the high-dimensional spaces used in word embeddings and similar models.
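The "similar words are close together" idea can be sketched with cosine similarity over made-up 3-dimensional embeddings (real embeddings have hundreds of dimensions; the vectors and words below are purely illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 means same direction, 0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings: "king" and "queen" appear in similar contexts, "banana" does not
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.75, 0.15]
banana = [0.1, 0.2, 0.9]

print(round(cosine_similarity(king, queen), 3))   # high: similar words cluster together
print(round(cosine_similarity(king, banana), 3))  # low: dissimilar words sit far apart
```

High cosine similarity corresponds to low angular distance, which is exactly the clustering behavior the answer above describes.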
You said:
With the help of the sinusoidal function in positional encoding, our model can attend to ________ easily.
End positions
Relative positions
Top positions
None of the above
ChatGPT said:
The correct answer is: Relative positions.
Sinusoidal functions in positional encoding help the model attend to relative positions in the sequence, as they provide unique and continuous representations of each position that allow the model to infer the relative distance between different positions.
You said:
The Transformer model pays more attention to the most relevant and important words in the sentence.
True
False
ChatGPT said:
True.
The Transformer model uses the attention mechanism to focus on the most relevant and important words in the sentence. This allows the model to weigh different parts of the input sequence differently based on their relevance, improving its ability to capture contextual relationships and dependencies.