The Guru is Brahma, the Guru is Vishnu, the Guru is the god Maheshwara; the Guru is verily the supreme Brahman; salutations to that revered Guru!
Q1 of 4
GAN stands for __________
Generative Advertising Network
Generative Adversarial Network ✓
Generate Adversarial Network
Generation Adversarial Network
Option Generative Adversarial Network is correct
Q2 of 4
GAN can be segmented into ________ sections.
4
3 ✓
2
1
Option 3 is correct
Q3 of 4
What are the advantages of GAN models?
Semi-supervised learning, when limited data is available for classifiers ✓
Sampling requires a Markov chain
Training can be done easily by doing back propagation ✓
If you provide a GAN an image, it will create a new image that is similar to the original. ✓
Option Semi-supervised learning, when limited data is available for classifiers is correct
Option Training can be done easily by doing back propagation is correct
Option If you provide a GAN an image, it will create a new image that is similar to the original. is correct
Q4 of 4
Which of the following is not an example of generative models?
Deep graphical models
Variational auto-encoders
Linear classifier ✓
Gaussian Mixture Model
Option Linear classifier is correct
Q1 of 3
Identify which of the given problems are generative in nature
Identifying the type of fruit
Creating a new music note ✓
What number (0-9) is represented in the images?
What is a sedan and what is an SUV?
Option Creating a new music note is correct
Q2 of 3
The Discriminator in Generative Adversarial Networks can be
A Neural Network
A classifier Network
Either of the above ✓
Option Either of the above is correct
Q3 of 3
The generative network in GANs takes the following as input
Real data
Fake data
Random Noise data ✓
All the above
Option Random Noise data is correct
Q1 of 3
A DCGAN does not use the label information from the dataset
True ✓
False
Option True is correct
Q2 of 3
Which of the following activations is preferred for hidden layers of a DCGAN?
Sigmoid
Linear
Leaky ReLU or ReLU ✓
Tanh
Option Leaky ReLU or ReLU is correct
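The preference for Leaky ReLU/ReLU in hidden layers comes down to gradient flow: sigmoid saturates for large-magnitude pre-activations, while Leaky ReLU keeps a nonzero derivative everywhere. A minimal sketch (plain Python, illustrative function names chosen here, not from any library):

```python
import math

def sigmoid_grad(x):
    # Derivative of the sigmoid: s(x) * (1 - s(x))
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def leaky_relu_grad(x, alpha=0.2):
    # Derivative of LeakyReLU: 1 for x > 0, alpha otherwise
    return 1.0 if x > 0 else alpha

# For a strongly negative pre-activation, the sigmoid gradient collapses,
# while LeakyReLU still passes a usable gradient back to earlier layers.
print(sigmoid_grad(-10.0))     # ~4.5e-05: vanishing
print(leaky_relu_grad(-10.0))  # 0.2: still trainable
```

This is why saturating activations in deep discriminator/generator stacks tend to stall GAN training, whereas (Leaky) ReLU keeps gradients flowing.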
Q3 of 3
Pooling is preferred over strided convolutions in the discriminator network of a DCGAN.
True
False ✓
Option False is correct
Q1 of 4
If we characterize the Generator as G(z; theta_y), which of the following statements are true?
p_z(z) represents the probability distribution of the data
input noise is sampled from domain z ✓
theta_y represents parameters of the generator model ✓
theta_y represents parameters of the discriminator model
Option input noise is sampled from domain z is correct
Option theta_y represents parameters of the generator model is correct
Q2 of 4
What distinguishes the CGAN from the GAN?
Discriminator classifies whether an image is real (1) or fake (0).
The Generator and Discriminator both receive some additional conditioning input information. ✓
The generator learns to generate plausible data. The generated instances become negative training examples for the discriminator.
The discriminator learns to distinguish the generator's fake data from real data. The discriminator penalizes the generator for producing implausible results.
Option The Generator and Discriminator both receive some additional conditioning input information. is correct
Q3 of 4
The output of the discriminator model in AC-GAN is/are
Probability that the provided image is real ✓
Probability of the image belonging to each known class. ✓
Fake image
Real image
Option Probability that the provided image is real is correct
Option Probability of the image belonging to each known class. is correct
Q4 of 4
The advantages of a WGAN discriminator over the optimal GAN discriminator are ___
The gradients become zero and vanish.
The divergence is famously asymmetric and possibly infinite
Wasserstein distance has stable gradients throughout. ✓
Wasserstein distance is continuous and differentiable almost everywhere ✓
Option Wasserstein distance has stable gradients throughout. is correct
Option Wasserstein distance is continuous and differentiable almost everywhere is correct
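The stable-gradient property shows up directly in the WGAN critic objective, which is a difference of expected scores rather than a log-likelihood. A toy numpy sketch of the critic loss and the weight clipping from the original WGAN (the score arrays are made-up stand-ins for a critic network's outputs):

```python
import numpy as np

def critic_loss(scores_real, scores_fake):
    # The WGAN critic maximizes E[f(real)] - E[f(fake)];
    # equivalently it minimizes the negative of that difference.
    return -(np.mean(scores_real) - np.mean(scores_fake))

def clip_weights(weights, c=0.01):
    # The original WGAN enforces a crude form of the Lipschitz
    # constraint by clipping every weight into [-c, c].
    return np.clip(weights, -c, c)

scores_real = np.array([0.8, 1.1, 0.9])
scores_fake = np.array([0.1, -0.2, 0.0])
loss = critic_loss(scores_real, scores_fake)
print(loss)  # negative when the critic separates real from fake

w = clip_weights(np.array([0.5, -0.5, 0.005]))
print(w)  # every entry clipped into [-0.01, 0.01]
```

Because the loss is a plain difference of means, its gradient does not saturate the way the log terms of the standard GAN loss do.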
Q1 of 5
In case of Mode Collapse a GAN
does not generate fake images
generates almost real-like images
has zero discriminator loss
generates images belonging to few of the available input classes ✓
Option generates images belonging to few of the available input classes is correct
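Mode collapse can be spotted empirically by classifying a batch of generated samples and counting how many of the training classes ever appear. A toy sketch (the class labels here are invented purely for illustration):

```python
def class_coverage(predicted_classes, num_classes):
    # Fraction of training classes that appear at least once
    # in a batch of generated samples.
    seen = set(predicted_classes)
    return len(seen) / num_classes

# A healthy GAN's samples spread over many classes...
healthy = [0, 3, 7, 1, 9, 4, 2, 8, 5, 6]
# ...while a collapsed GAN keeps emitting the same few modes.
collapsed = [3, 3, 7, 3, 7, 1, 3, 7, 3, 1]

print(class_coverage(healthy, 10))    # 1.0
print(class_coverage(collapsed, 10))  # 0.3
```

A coverage of 3/20 classes, as in the data analyst's scenario discussed later in this document, is exactly this symptom.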
Q2 of 5
Loss value alone can be used to accurately determine the performance of the GANs
True
False ✓
Option False is correct
Q3 of 5
Rapid Scene Categorization is
A quantitative measure to evaluate GANs
A qualitative measure to evaluate GANs ✓
Option A qualitative measure to evaluate GANs is correct
Q4 of 5
Label smoothing is a technique used to
Optimize GAN training ✓
Normalize input data
Generate fake images
Create Gaussian noise input for GAN
Option Optimize GAN training is correct
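One common form is one-sided label smoothing, where the discriminator's target for real images is softened from 1.0 to about 0.9 so it does not become overconfident; fake targets stay at 0.0. A minimal numpy sketch:

```python
import numpy as np

def smooth_real_labels(n, target=0.9):
    # One-sided label smoothing: real targets become 0.9 instead of 1.0;
    # fake targets are left at 0.0.
    return np.full(n, target)

real_targets = smooth_real_labels(4)
fake_targets = np.zeros(4)
print(real_targets)  # [0.9 0.9 0.9 0.9]
print(fake_targets)  # [0. 0. 0. 0.]
```

These smoothed targets are then fed to the discriminator's binary cross-entropy loss in place of hard 1.0 labels.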
Q5 of 5
Semi-supervised Learning GANs
Use labels with 100% of real data
Use labels with 100% of fake data
Do not use labeled data
Use labels with a small percentage of input data ✓
Option Use labels with a small percentage of input data is correct
Images may be created using only text using a ______ type of GAN.
CycleGAN
StackGAN
InfoGAN
StyleGAN
ChatGPT said:
The correct answer is StackGAN.
StackGAN (Stacked Generative Adversarial Networks) is a type of GAN designed specifically for generating high-quality images from text descriptions. It consists of two stages:
Stage-I GAN: Generates a low-resolution image from the text description.
Stage-II GAN: Refines the low-resolution image to produce a high-resolution, photorealistic image that aligns with the text description.
This allows for detailed and realistic image creation directly from text inputs.
You said:
In _________, a model is trained in a competitive manner. Adversarial Generative Network Discriminator
ChatGPT said:
The correct answer is Adversarial.
In Adversarial training, models are trained in a competitive manner, typically in the context of Generative Adversarial Networks (GANs). In GANs, there are two models:
Generator: Attempts to create data that is similar to the real data.
Discriminator: Tries to distinguish between real data and the data generated by the generator.
These two models are trained simultaneously, with the generator trying to fool the discriminator and the discriminator trying to accurately classify real and generated data. This competitive process is what makes the training "adversarial."
You said:
Generic models, which describe how data is produced using probabilistic models is_________
ChatGPT said:
The correct answer is Generative models.
Generative models are a class of models that describe how data is produced by modeling the underlying probability distribution of the data. They can generate new data points that are similar to the original data by sampling from this learned distribution.
Generative models can be used in various tasks, including image generation, data synthesis, and more. Examples of generative models include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and certain types of probabilistic graphical models like Hidden Markov Models (HMMs).
You said:
What is the input to the generator network? Noise Real Images Fake Images Both (a) and (b) Both (a) and (c)
ChatGPT said:
The correct answer is Noise.
In a Generative Adversarial Network (GAN), the input to the generator network is typically a vector of random noise, often sampled from a simple distribution like a Gaussian distribution. The generator then processes this noise to produce a fake image or data sample that resembles the real data.
This noise input allows the generator to explore different variations and generate diverse outputs. The goal is for the generator to create outputs that are so realistic that the discriminator cannot distinguish them from real data.
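The noise-to-sample pipeline described above can be sketched with a toy linear "generator" (untrained random weights standing in for a real trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, out_dim, batch = 100, 784, 16

# 1. Sample the generator's input from a simple prior (standard Gaussian).
z = rng.standard_normal((batch, latent_dim))

# 2. A toy "generator": one linear layer plus tanh, mapping noise to a
#    flattened 28x28 image in [-1, 1]. A real generator would be a deep,
#    trained network; this random projection only illustrates the shapes.
W = rng.standard_normal((latent_dim, out_dim)) * 0.01
fake_images = np.tanh(z @ W)

print(fake_images.shape)  # (16, 784)
```

Each fresh draw of z yields a different output, which is what gives the generator its sample diversity.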
You said:
In GAN training the generator and the discriminator must achieve ______. ( Choose the most appropriate option.) Maximum Loss Maximum accuracy Nash Equilibrium None of the above
ChatGPT said:
The correct answer is Nash Equilibrium.
In GAN training, the generator and discriminator are in a competitive relationship, where each is trying to outperform the other. The ideal outcome of this competition is a state known as Nash Equilibrium. At this point, the generator produces data that is indistinguishable from real data, and the discriminator is unable to correctly classify the real and generated data more than 50% of the time (i.e., it becomes a random guesser). This balance is the goal of GAN training.
You said:
How are GAN models trained? n steps for generator model, then 1 step for discriminator model. 1 step for generator model, then 1 step for discriminator model. n steps for discriminator model, then 1 step for generator model. 1 step for discriminator model, then n steps for generator model.
ChatGPT said:
In the original GAN training algorithm, the schedule is:
n steps for the discriminator model, then 1 step for the generator model (with n = 1 being the common choice in practice).
In GAN training, the process typically involves alternating between updating the generator and the discriminator:
Update the Discriminator: The discriminator is trained to distinguish between real data and the fake data generated by the generator.
Update the Generator: The generator is then trained to produce data that is more likely to be classified as real by the discriminator.
This alternating update process helps both networks improve their performance over time. The training involves a balance where the generator tries to produce more realistic data while the discriminator tries to become better at distinguishing real from fake data.
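The alternating schedule can be sketched as a runnable skeleton; the `train_gan` helper and its string log are invented here purely to make the update order visible, not a real training API:

```python
# Skeleton of one GAN training run; d_updates_per_g mirrors the
# "n steps of D per G step" option from the quiz (n = 1 is common).
def train_gan(num_batches, d_updates_per_g=1):
    log = []  # record the update order for inspection
    for _ in range(num_batches):
        # (1) Update the discriminator on real + generated data,
        #     with the generator's weights held fixed.
        for _ in range(d_updates_per_g):
            log.append("D")
        # (2) Update the generator through the (now frozen)
        #     discriminator so its samples look more real.
        log.append("G")
    return log

print(train_gan(2))                     # ['D', 'G', 'D', 'G']
print(train_gan(1, d_updates_per_g=3))  # ['D', 'D', 'D', 'G']
```

In a real implementation, "held fixed" means the other network's parameters receive no gradient update during that step.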
You said:
A data analyst found that the GAN network was generating only 3 of the 20 classes of images that were used to train the GAN. What do you think was the problem? Small size of dataset Mode collapse of the network Use of Gaussian noise input Both (a) and (b) All of the above
ChatGPT said:
The correct answer is Mode collapse of the network.
Mode collapse (or mode dropping) is a common problem in GAN training where the generator produces only a limited variety of outputs, effectively collapsing to a few modes of the data distribution instead of capturing the full diversity of the data. In this case, if the GAN is generating only 3 out of the 20 classes of images, it is likely that mode collapse is occurring, where the generator fails to cover all classes and focuses only on a subset.
While a small size of dataset can lead to poor performance and limited diversity, and Gaussian noise input is a standard practice for the generator's input, mode collapse specifically refers to the issue where the generator fails to capture the full range of data classes.
You said:
When the discriminator is trained, generator weights are frozen. True False
ChatGPT said:
True.
When the discriminator is being trained in a GAN, the generator's weights are typically frozen. This means that during the discriminator's training step, the generator's parameters are not updated. The discriminator is trained to correctly differentiate between real and generated data without altering the generator.
Similarly, when the generator is trained, the discriminator's weights are frozen. This alternating training process allows each network to improve in its own role: the generator improves in creating more realistic data, and the discriminator improves in distinguishing between real and fake data.
You said:
Dr. Raghu intends to classify 1000 knee X-Ray scans to create a model which can predict whether the patient needs knee replacement surgery. Dr. Raghu needs to use A discriminative ML algorithm A generative ML algorithm
ChatGPT said:
Dr. Raghu should use a discriminative ML algorithm.
Discriminative ML algorithms are used for classification tasks where the goal is to distinguish between different classes based on input features. In Dr. Raghu's case, the objective is to classify knee X-ray scans to predict whether a patient needs knee replacement surgery. This is a classic classification problem where the model needs to learn the boundary between the classes (e.g., needs surgery vs. does not need surgery).
Generative ML algorithms, on the other hand, are used to model the distribution of data and can generate new samples. They are not typically used for straightforward classification tasks where the focus is on making predictions or decisions based on the given data.
You said:
Generic models, which describe how data is produced using probabilistic models is_________ Adversarial Generative Networks discriminator
ChatGPT said:
The correct answer is Generative.
Generative models describe how data is produced by modeling the underlying probability distribution of the data. They are designed to generate new data samples that resemble the training data by learning the distribution from which the data is drawn. Examples include Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
You said:
Can a Minmax 2 player game and the training of GAN models be likened? True False
ChatGPT said:
True.
The training of Generative Adversarial Networks (GANs) can be likened to a two-player zero-sum game, which is similar to a Minimax game. In this analogy:
Generator (Player 1): Tries to generate data that is indistinguishable from real data.
Discriminator (Player 2): Tries to correctly distinguish between real and generated data.
The objective is for the generator to maximize its ability to fool the discriminator, while the discriminator aims to minimize its error in distinguishing real data from fake. This creates a competitive environment akin to a two-player Minimax game, where one player's gain comes at the expense of the other player’s loss. The training process continues until an equilibrium is reached, similar to finding a Nash equilibrium in a game.
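Concretely, the game's value function is V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. A small numeric sketch with made-up discriminator outputs:

```python
import math

def value_fn(d_real, d_fake):
    # V(D, G) = mean of log D(x) over real samples
    #         + mean of log(1 - D(G(z))) over generated samples.
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# A confident discriminator (real -> ~1, fake -> ~0) drives V up...
print(value_fn([0.9, 0.95], [0.05, 0.1]))
# ...while at equilibrium D outputs 0.5 everywhere and
# V = log(1/2) + log(1/2) = -2 log 2 ≈ -1.386.
print(value_fn([0.5, 0.5], [0.5, 0.5]))
```

The equilibrium value -2 log 2 is exactly the game value reached when p_g matches p_data and the discriminator is reduced to guessing.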
You said:
What will the discriminator's accuracy be when it reaches the global optimum for GAN models? (select all possible choices) 1 0.5 p_data/(p_g + p_data) None of these
ChatGPT said:
The correct choice is 0.5.
When a GAN reaches its global optimum (Nash equilibrium), the generator produces data that is indistinguishable from the real data. At this point:
The discriminator cannot reliably tell apart real data from generated data, effectively making its predictions no better than random guessing.
Consequently, the discriminator's accuracy is around 50%, which is equivalent to guessing randomly.
This reflects the state where the generator has successfully learned to produce data that is as convincing as real data, making the discriminator's task of distinguishing between real and fake data as challenging as possible.
You said:
Which GAN implementation has been the most widely used and successful? Conditional GAN vanilla GAN Deep convolution GAN Laplacian Pyramid GAN
ChatGPT said:
The most widely used and successful GAN implementation is Deep Convolutional GAN (DCGAN).
Deep Convolutional GAN (DCGAN) is one of the most influential and successful implementations of GANs. It introduced the use of convolutional neural networks in both the generator and discriminator networks, which significantly improved the quality of generated images and made training more stable. DCGANs are known for their ability to generate high-quality images and have become a standard architecture in the GAN community.
Other implementations like Conditional GAN extend GANs by conditioning on additional information (e.g., class labels) to generate more specific types of data. Vanilla GAN refers to the original GAN framework introduced by Ian Goodfellow, which laid the foundation for many subsequent improvements. Laplacian Pyramid GAN is designed for high-resolution image generation but is less commonly used compared to DCGANs.
You said:
Which of the following is not a possible application of GAN? Data clustering Generation of fake images NFT generation Converting low resolution images to high resolution
ChatGPT said:
The correct answer is Data clustering.
Data clustering is a technique used to group similar data points together based on certain features or criteria. It is typically handled by clustering algorithms like K-means, hierarchical clustering, or DBSCAN, rather than by GANs.
On the other hand:
Generation of fake images: GANs are well-known for generating realistic synthetic images that resemble real data.
NFT generation: GANs can be used to create unique digital art pieces, which can be sold as NFTs (non-fungible tokens).
Converting low-resolution images to high-resolution: GANs, specifically Super-Resolution GANs (SRGANs), are used to enhance the resolution of images and improve their quality.
You said:
Goal/s of the generator is/are Maximize classification error for discriminator Minimize classification error for discriminator Minimize log(1- D(G(Z) ) Maximize log( D(G(Z) )
ChatGPT said:
The correct goals of the generator in GANs are:
Minimize log(1 - D(G(Z)))
Maximize log(D(G(Z)))
In GANs, the generator aims to improve its performance by making the discriminator’s job as difficult as possible. This involves the following goals:
Maximize log(D(G(Z))): This term represents the generator's objective to maximize the probability that the discriminator will classify the generated data as real. In other words, the generator wants the discriminator to believe that the generated data is real.
Minimize log(1 - D(G(Z))): This is the original minimax form of the generator loss. It shares its fixed point with maximizing log(D(G(Z))), but it saturates (yields vanishing gradients) early in training, when the discriminator confidently rejects generated samples; that is why the non-saturating log(D(G(Z))) objective is usually used in practice.
In summary, the generator's objectives are to make the discriminator's job harder by maximizing the probability that the discriminator classifies generated samples as real.
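The practical difference between the two formulations is the size of the gradient early in training, when D(G(z)) is close to 0. A quick numeric check of the derivatives with respect to p = D(G(z)):

```python
# Generator loss gradients w.r.t. p = D(G(z)).
# Minimax loss:        L = log(1 - p)  ->  dL/dp = -1 / (1 - p)
# Non-saturating loss: L = -log(p)     ->  dL/dp = -1 / p

def minimax_grad(p):
    return -1.0 / (1.0 - p)

def non_saturating_grad(p):
    return -1.0 / p

p = 0.01  # early training: the discriminator easily spots fakes
print(minimax_grad(p))         # ~ -1.01 (small signal: saturation)
print(non_saturating_grad(p))  # ~ -100  (strong signal)
```

Both losses agree in sign, but only the non-saturating form gives the generator a large learning signal exactly when it is performing worst.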
You said:
What will the discriminator's accuracy be when it reaches the global optimum for GAN models? (select all possible choices) 1 0.5 p_data/(p_g + p_data) None of these
ChatGPT said:
The correct choice is 0.5.
When the GAN reaches its global optimum (Nash equilibrium):
The generator is producing data that is indistinguishable from real data.
As a result, the discriminator is unable to distinguish between real and generated data better than random guessing.
At this point, the discriminator's accuracy will be around 50% (0.5), meaning it is making random guesses as to whether the data is real or generated. This reflects the state where the generator has successfully learned to produce data that appears similar to the real data, making the discriminator's task as challenging as possible.
You said:
0.5 is wrong
ChatGPT said:
If 0.5 is not accepted, the intended answer is p_data / (p_g + p_data).
For a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)): it outputs the probability that a sample x came from the real data distribution rather than from the generator.
At the global optimum of the full minimax game the generator matches the data distribution (p_g = p_data), and this expression reduces to 0.5 everywhere, so the discriminator can do no better than random guessing.
In short, p_data / (p_g + p_data) characterizes the optimal discriminator in general, and 0.5 is its value once the generator has fully converged.
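The optimal-discriminator formula can be sanity-checked numerically: for a fixed generator, D*(x) = p_data(x) / (p_data(x) + p_g(x)), and when the generated density matches the data density the value is 0.5 for every x. A sketch using two Gaussians as stand-in densities (the means and widths are arbitrary choices for illustration):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Density of a 1-D Gaussian with mean mu and std sigma.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_discriminator(x, mu_data=0.0, mu_g=2.0, sigma=1.0):
    # D*(x) = p_data(x) / (p_data(x) + p_g(x))
    p_data = gaussian_pdf(x, mu_data, sigma)
    p_g = gaussian_pdf(x, mu_g, sigma)
    return p_data / (p_data + p_g)

print(optimal_discriminator(-2.0))  # close to 1: clearly "real"
print(optimal_discriminator(4.0))   # close to 0: clearly "fake"
# When p_g has converged to p_data, D* is 0.5 for every x:
print(optimal_discriminator(1.0, mu_g=0.0))  # 0.5
```

The last line is the global-optimum case from the answer above: identical densities make every point a coin flip for the discriminator.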