LIME (Local Interpretable Model-Agnostic Explanations) suffers from a lack of robustness due to its reliance on random sampling.
Presented a modification of LIME based on CTGAN sampling and analysed it against a popular adversarial attack that exploits the fact that perturbed samples obtained through random sampling are out of distribution.
Results show that for around 43% of test samples, CTGAN-LIME was able to recognize the adversarial attack while vanilla LIME was not, indicating that CTGAN-LIME is more robust than vanilla LIME.
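To make the sampling step concrete, here is a minimal numpy sketch of the LIME surrogate-fitting loop with a pluggable sampler. The Gaussian sampler below mirrors vanilla LIME's out-of-distribution perturbations; the CTGAN variant would swap in samples drawn from a generator trained on the data distribution. All function names and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def lime_explain(f, x0, sampler, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a local linear surrogate to black-box f around x0.

    `sampler` produces the perturbed neighbourhood; vanilla LIME draws
    i.i.d. Gaussian noise, while a CTGAN-style variant would instead
    draw in-distribution samples from a trained generator.
    """
    rng = np.random.default_rng(seed)
    Z = sampler(rng, x0, n_samples)              # perturbed samples
    y = np.array([f(z) for z in Z])              # black-box outputs
    d = np.linalg.norm(Z - x0, axis=1)           # distance to x0
    w = np.exp(-(d ** 2) / kernel_width ** 2)    # proximity kernel
    # Weighted least squares: solve (sqrt(w) X) beta = sqrt(w) y
    X = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)
    return beta[1:]                              # feature attributions

def gaussian_sampler(rng, x0, n):
    # Vanilla LIME: perturbations that can fall out of distribution
    return x0 + rng.normal(scale=0.5, size=(n, x0.size))
```

The adversarial attack in question trains a discriminator on exactly this gap: it behaves honestly on in-distribution points and deploys a biased model only on perturbed, off-manifold points, which is why an in-distribution sampler defends against it.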
Presented a geometric view of GP-UCB based on the posterior mean and variance
Introduced two clustering-based acquisition functions, GPUCB_NN and GPUCB^2
Ran GPUCB^2, GPUCB_NN, GPUCB, GPUCB_MCMC, EI, EI_MCMC, PI, and PI_MCMC on various standard benchmark functions such as the Branin-Hoo, Six-Hump Camel, and Eggholder functions, as well as a synthetic function
Reduced the search space of the GP-UCB acquisition function to the single best cluster chosen by these methods
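The steps above can be sketched in numpy: a GP posterior gives a mean and variance at candidate points, GP-UCB picks the candidate maximising mu + sqrt(beta) * sigma, and restricting the candidate set to one cluster implements the search-space reduction. This is a generic sketch of GP-UCB, not the paper's GPUCB_NN or GPUCB^2 variants; the kernel length scale and beta are illustrative.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential (RBF) kernel between two point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and variance at Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v ** 2).sum(0)
    return mu, np.maximum(var, 0.0)

def gp_ucb_next(X, y, candidates, beta=2.0):
    # GP-UCB acquisition: maximise mu + sqrt(beta) * sigma.
    # Passing only the candidates of one cluster realises the
    # reduction of the search space to a single best cluster.
    mu, var = gp_posterior(X, y, candidates)
    return candidates[np.argmax(mu + np.sqrt(beta) * np.sqrt(var))]
```

Calling `gp_ucb_next` once per iteration, appending the evaluated point to `(X, y)`, gives the usual Bayesian-optimisation loop; the clustering variants would recompute the best cluster before each call.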
In IC design, lithography is the process of printing the mask pattern onto a silicon wafer. It is an important step because it enables feature sizes to decrease, which in turn helps shrink device sizes. This continuous decrease in feature size can lead to printability issues and hence hotspots. Hotspots can cause a circuit to fail completely, so it is very important to detect them with high accuracy. Previously, various simulation-, machine learning-, and deep learning-based techniques have been applied to this problem.
In this work, a method to identify hotspots using Vision Transformers (ViTs) is proposed. Other deep learning techniques, namely CNNs and ANNs, are also implemented for comparison.
All three techniques are evaluated on five datasets. ViT gives an overall average accuracy of 98.05%, which is 1.39% higher than that of CNNs and 2.04% higher than that of ANNs. Although ViTs prove the best in terms of overall accuracy, their performance at the dataset level can be improved: three of the five datasets have accuracy above 99%, while for the remaining two it is only slightly above 95%. In future work, the authors wish to improve accuracy on these two datasets by improving the model and reducing the imbalance in the datasets.
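The key step that distinguishes a ViT from the CNN and ANN baselines is its tokenisation of the input: a layout clip is split into fixed-size patches that are flattened, linearly embedded, and prepended with a class token before entering the transformer encoder. A hedged numpy sketch of just this step, with illustrative dimensions (the paper's clip and patch sizes are not given here):

```python
import numpy as np

def patchify(img, p):
    # Split an (H, W) layout clip into non-overlapping p x p patches,
    # each flattened row-wise -- the ViT tokenisation step.
    H, W = img.shape
    patches = img.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)

def embed(patches, W_e, pos, cls):
    # Linear patch embedding + [CLS] token + positional encodings;
    # the resulting token sequence feeds the transformer encoder,
    # and the final [CLS] representation drives the hotspot classifier.
    tokens = patches @ W_e             # (N, d) embedded patches
    tokens = np.vstack([cls, tokens])  # prepend the class token
    return tokens + pos                # add positional encodings
```

In a full ViT these embeddings pass through stacked self-attention blocks; here the sketch stops at the token sequence, which is the part that differs structurally from a CNN's convolutional feature maps.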
In the domain of Natural Language Processing and Generation, a great deal of work is being done on text generation. As machines become able to understand text and language, human involvement is reduced significantly. Many sequence models do well at generating human-like text, but research examining the extent to which their output matches human-written text is limited. In this paper, text is generated using Long Short-Term Memory networks (LSTMs) and the Generative Pretrained Transformer 2 (GPT-2). The text produced by both neural language models follows Zipf's law and Heaps' law, two statistical regularities obeyed by natural language text. One of the main findings concerns the influence of the Temperature parameter on the generated text: LSTM-generated text improves as the value of Temperature increases. The comparison between GPT-2 and LSTM outputs also shows that text generated by GPT-2 is more similar to natural text than that generated by LSTMs.
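The Temperature parameter discussed above rescales the model's logits before the softmax at each generation step: low values sharpen the distribution (conservative, repetitive text), high values flatten it (more diverse text), which is consistent with the observed improvement of LSTM output at higher Temperature. A minimal numpy sketch, with the entropy helper added to make the effect measurable:

```python
import numpy as np

def temperature_softmax(logits, T):
    # Divide logits by temperature T before the softmax:
    # T < 1 sharpens the next-token distribution,
    # T > 1 flattens it toward uniform.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    # Shannon entropy in nats; higher = flatter distribution
    p = p[p > 0]
    return -(p * np.log(p)).sum()
```

Sampling the next token from `temperature_softmax(logits, T)` instead of taking the argmax is what trades determinism for diversity in both the LSTM and GPT-2 decoding loops.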