Apply Gaussian filtering to each of the L*a*b planes independently using a Gaussian kernel.
Subtract the average pixel value from the blurred image and apply the L2 norm to obtain the saliency map.
Compute precision and recall using adaptively thresholded saliency maps.
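The steps above resemble a frequency-tuned saliency computation; a minimal NumPy/SciPy sketch, assuming the image is already converted to L*a*b (conversion not shown, and `saliency_map` is an illustrative name):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(lab_image, sigma=3.0):
    """Per-channel Gaussian blur, mean subtraction, then an L2 norm per pixel."""
    # Blur each L*a*b plane independently with a Gaussian kernel.
    blurred = np.stack(
        [gaussian_filter(lab_image[..., c], sigma=sigma) for c in range(3)],
        axis=-1,
    )
    # Subtract the per-channel average pixel value from the blurred planes.
    mean = lab_image.reshape(-1, 3).mean(axis=0)
    diff = blurred - mean
    # The L2 norm across channels gives one saliency value per pixel.
    return np.sqrt((diff ** 2).sum(axis=-1))

lab = np.random.rand(64, 64, 3)   # stand-in for a converted L*a*b image
smap = saliency_map(lab)
print(smap.shape)                 # (64, 64)
```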
Deep Learning - Unsupervised Learning
Resize the saliency maps into square images of a fixed size, 64×64 in this project.
Flatten each 2D image array into a single 1D array using NumPy's ravel() function.
Create functions for normalizing and de-normalizing images.
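A sketch of this resize/flatten/normalize pipeline; the function names are illustrative, and resizing here uses `scipy.ndimage.zoom` as a stand-in for whatever resizer the project uses:

```python
import numpy as np
from scipy.ndimage import zoom

SIZE = 64  # fixed square size used in this project

def to_square(img, size=SIZE):
    """Resize a 2D saliency map to a fixed square size."""
    return zoom(img, (size / img.shape[0], size / img.shape[1]))

def normalize(img):
    """Scale pixel values to [0, 1]; also return min/max for de-normalizing."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo), lo, hi

def denormalize(img, lo, hi):
    """Invert normalize() using the recorded min/max."""
    return img * (hi - lo) + lo

raw = np.random.rand(128, 128) * 255.0
flat = to_square(raw).ravel()        # 2D map -> single 1D array of 4096 values
norm, lo, hi = normalize(flat)
```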
Decide the encoder dimensions: [256, 128, 64].
Define an encoding function that builds a network performing a series of matrix multiplications on the flattened array, each followed by adding a bias, with the result wrapped in a non-linearity.
This creates a chain of multiplications in the graph that transforms the input of shape batch size × number of features (64×64), i.e. None × 4096, down to None × 64.
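The shape bookkeeping can be illustrated with a plain NumPy forward pass; the weights are random placeholders and tanh is an assumed non-linearity, whereas in the actual project these live in the TensorFlow graph:

```python
import numpy as np

dims = [4096, 256, 128, 64]   # input features followed by the encoder dimensions
rng = np.random.default_rng(0)

# One (weights, bias) pair per layer transition.
params = [(rng.normal(scale=0.01, size=(m, n)), np.zeros(n))
          for m, n in zip(dims[:-1], dims[1:])]

def encode(x, params):
    """Matrix multiply, add a bias, wrap the result in a non-linearity."""
    for W, b in params:
        x = np.tanh(x @ W + b)
    return x

batch = rng.random((32, 4096))   # the "None" batch dimension, here 32
codes = encode(batch, params)
print(codes.shape)               # (32, 64)
```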
Define a cost function as the mean of the squared difference between the input image and the reconstructed image.
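This cost is a mean squared error; written out directly:

```python
import numpy as np

def cost(x, y):
    """Mean of the squared difference between input and reconstruction."""
    return np.mean((x - y) ** 2)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 5.0])
print(cost(x, y))   # mean of (0, 0, 4) = 1.333...
```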
Define a learning rate and an optimizer for minimizing the cost.
Define a training loop that feeds batches of images through the encoding function and computes the cost.
Pass the cost to the optimizer, which applies the learning rate to update the weights.
Plot the reconstructed image for every epoch of training.
Train until the loss converges, recording the epoch and the loss.
Fix that epoch count as the training range for building the final model.
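A simplified stand-in for the loop above, using a tiny linear autoencoder with hand-written gradient-descent updates instead of a TensorFlow optimizer (the real project lets TensorFlow compute the gradients, and the sizes here are toy values):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((64, 16))            # batch of flattened images (toy size)
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))
learning_rate = 0.05

losses = []
for epoch in range(200):
    # Forward pass: encode then decode.
    code = X @ W_enc
    recon = code @ W_dec
    # Cost: mean squared difference between input and reconstruction.
    err = recon - X
    loss = np.mean(err ** 2)
    losses.append(loss)
    # Backward pass: gradients of the MSE w.r.t. both weight matrices.
    g_recon = 2 * err / err.size
    grad_dec = code.T @ g_recon
    grad_enc = X.T @ (g_recon @ W_dec.T)
    W_dec -= learning_rate * grad_dec
    W_enc -= learning_rate * grad_enc
    # (In the project, the reconstructed image is plotted every epoch here.)

print(losses[0], losses[-1])        # the loss should decrease over the epochs
```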
Deep Learning - Supervised Learning
Resize the saliency maps into square images of a fixed size, 64×64 in this project.
Flatten each 2D image array into a single 1D array using NumPy's ravel() function.
Define a Sequential Model.
The model has three activation layers (ReLU, Softmax, ReLU) with (4096, 2048, 4096) nodes respectively.
For optimization I have used the "Adam" optimizer along with a "binary cross-entropy" loss function.
The metric for measuring performance is accuracy.
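The loss and metric named above can be written out explicitly; a NumPy sketch of binary cross-entropy and thresholded accuracy (the project itself uses Keras's built-in versions, and the threshold of 0.5 is an assumption):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy between targets and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def accuracy(y_true, y_pred, threshold=0.5):
    """Fraction of predictions on the correct side of the threshold."""
    return np.mean((y_pred >= threshold) == (y_true >= threshold))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.6, 0.4])
print(binary_cross_entropy(y_true, y_pred), accuracy(y_true, y_pred))
```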
Future Scope
The results for supervised learning can be improved by fine-tuning the model.
Deep learning models require enormous amounts of data to produce good results.
If a larger data set becomes available, feature learning will improve.
TensorFlow-GPU can be used to cut training time, making it practical to train longer and improve accuracy.