Projects
As machine learning models grow more complex, they can outperform traditional algorithms and tackle a wider range of problems, including challenging combinatorial optimization tasks. However, this increased complexity can make it difficult to understand how a model reaches its decisions. Explainable models can increase trust in the model’s decisions and may even lead to improvements in the algorithm itself. We develop a framework for explaining a model’s decision-making process while taking into account domain knowledge of the problem at hand. Using the NeuroSAT algorithm for SAT solving, we demonstrate how our framework uncovers the underlying algorithmic concepts that drive the operation of an NN-based model. In addition, we identify a range of parameters where NeuroSAT outperforms all known hand-crafted heuristics, indicating that it may have learned a novel algorithmic principle.
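To make the setting concrete, here is a minimal sketch of NeuroSAT-style message passing over the bipartite literal–clause graph. The simplified MLP updates, dimensions, and toy formula below are illustrative assumptions standing in for the original architecture (which uses LSTM updates), not a reproduction of it.

```python
# Minimal sketch of NeuroSAT-style message passing (illustrative only).
import torch
import torch.nn as nn

d = 16                      # embedding dimension (illustrative)
n_vars, n_clauses = 3, 2    # toy instance: (x1 or not x2) and (x2 or x3)
n_lits = 2 * n_vars         # literals ordered [x1, x2, x3, not x1, not x2, not x3]

# clause-literal incidence matrix: M[c, l] = 1 if literal l appears in clause c
M = torch.zeros(n_clauses, n_lits)
M[0, 0] = M[0, 4] = 1.0     # clause 0: x1 or not x2
M[1, 1] = M[1, 2] = 1.0     # clause 1: x2 or x3

lit_emb = torch.randn(n_lits, d)
cls_emb = torch.randn(n_clauses, d)

clause_update = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
literal_update = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
vote = nn.Linear(d, 1)      # per-literal "vote", aggregated into a SAT score

def flip(lits):
    # pair each literal with its negation: [x1..xn, -x1..-xn] -> [-x1..-xn, x1..xn]
    return torch.cat([lits[n_vars:], lits[:n_vars]], dim=0)

for _ in range(8):          # rounds of message passing
    cls_emb = clause_update(M @ lit_emb)                   # clauses read their literals
    msg = torch.cat([M.t() @ cls_emb, flip(lit_emb)], dim=1)
    lit_emb = literal_update(msg)                          # literals read clauses + negation

sat_score = vote(lit_emb).mean()
print(sat_score.item())     # the network is untrained here, so the score is meaningless
```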
Deep learning has consistently outperformed existing methods across many fields over the last decade. However, we are just beginning to understand the capabilities of neural learning in symbolic domains. Deep learning architectures that employ parameter sharing over graphs can produce models that learn complex properties of relational data, including highly relevant NP-complete problems such as SAT, TSP, and graph coloring. We study how state-of-the-art graph neural network (GNN) models solve graph coloring. The idea is to use a framework for explaining a model’s decision-making process while considering domain knowledge of the problem at hand. Our results can help narrow the gap in our understanding of the algorithms learned by GNNs and provide empirical evidence of their capabilities on hard combinatorial problems. Explainable models can increase trust in the model’s decisions and may even lead to improvements in heuristic algorithms.
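For graph coloring, the models we study follow a similar message-passing pattern. The sketch below is an illustrative stand-in (plain MLP aggregation with a per-node color readout), not the exact state-of-the-art architecture examined in the project.

```python
# Minimal sketch of a message-passing GNN that proposes a k-coloring (illustrative only).
import torch
import torch.nn as nn

k, d = 3, 16                           # number of colors, embedding width (illustrative)
# toy graph: a 4-cycle, given as an adjacency matrix A
A = torch.tensor([[0., 1., 0., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [1., 0., 1., 0.]])
n = A.shape[0]

node_emb = torch.randn(n, d)
update = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
decode = nn.Linear(d, k)               # per-node color logits

for _ in range(10):                    # rounds of neighbor aggregation
    neighbor_sum = A @ node_emb
    node_emb = update(torch.cat([node_emb, neighbor_sum], dim=1))

colors = decode(node_emb).argmax(dim=1)                    # hard color assignment
conflicts = (colors[None, :] == colors[:, None]).float() * A
print(colors.tolist(), int(conflicts.sum().item()) // 2)   # untrained: conflicts expected
```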
Sorting is a fundamental operation in computing. However, the speed of state-of-the-art sorting algorithms on a single thread has reached its limits. Meanwhile, deep learning has demonstrated its potential to provide significant performance improvements in data mining and machine learning tasks. Therefore, it is interesting to explore whether sorting can also be sped up by deep learning techniques.
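One direction explored in the literature, which illustrates how a learned model can enter the sorting loop, is a learned-CDF bucket sort in the spirit of Learned Sort: a cheap model of the key distribution predicts each key's approximate position, and a small cleanup pass finishes the job. The sketch below uses a trivial linear stand-in for the learned model and is not necessarily the project's approach.

```python
# Minimal sketch of a learned-CDF bucket sort (illustrative, not the project's method).
import random

def learned_sort(keys, n_buckets=64):
    # "Train" a trivially simple model of the key distribution on a sample:
    # a linear CDF estimate from the sample min/max (a stand-in for a neural model).
    sample = random.sample(keys, min(len(keys), 256))
    lo, hi = min(sample), max(sample)
    span = (hi - lo) or 1.0

    # Predict a bucket for every key from the model, then sort each small bucket.
    buckets = [[] for _ in range(n_buckets)]
    for x in keys:
        b = int((x - lo) / span * (n_buckets - 1))
        buckets[min(max(b, 0), n_buckets - 1)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))          # small-bucket cleanup pass
    return out

data = [random.random() for _ in range(10_000)]
assert learned_sort(data) == sorted(data)
```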
Detecting sarcasm is a non-trivial task for humans, let alone for automatic methods. Part of the difficulty lies in understanding the context, but another challenge is the many shades of sarcasm: some remarks are more humorous, while others are more toxic. We show how different datasets labeled with sarcasm reveal different nuances of sarcasm, which explains poor cross-domain performance. Using these insights, we guide a data enrichment procedure that significantly improves cross-domain performance, by up to an additive 13% in F1 score, without requiring additional labeled data.
As a way to reduce traffic loads, especially during rush hours, special lanes for public transportation and carpooling are set up as Fast Lanes (FL). We suggest a flexible FL managed by traffic volume and social priority, a preference criterion based on driver characteristics or travel purpose. We simulated a real junction, with data extracted from surveillance cameras positioned at the junction and processed with the YOLO and TrafficDataLandBox tools for vehicle detection. We then employed Q-Learning, a well-known reinforcement learning algorithm, to dynamically control the traffic lights according to the social priorities and the current traffic.
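A minimal sketch of the tabular Q-Learning loop for traffic-light control appears below. The state encoding, toy junction dynamics, and reward are illustrative assumptions (in the project, the reward would also reflect social priority), not the project's actual simulator.

```python
# Minimal sketch of tabular Q-Learning for traffic-light control (illustrative only).
import random
from collections import defaultdict

ACTIONS = [0, 1]                 # 0: keep current green phase, 1: switch phase
alpha, gamma, eps = 0.1, 0.95, 0.1

Q = defaultdict(float)           # Q[(state, action)] -> value

def discretize(queues, phase):
    # state = binned queue length per approach + current phase
    return tuple(min(q // 5, 3) for q in queues) + (phase,)

def step(queues, phase, action):
    # toy dynamics: the green approach discharges cars, the other accumulates;
    # a social-priority term could be folded into the reward (omitted here)
    if action == 1:
        phase = 1 - phase
    queues = [max(0, q - 4) if i == phase else q + random.randint(0, 2)
              for i, q in enumerate(queues)]
    reward = -sum(queues)        # penalize total waiting vehicles
    return queues, phase, reward

queues, phase = [0, 0], 0
state = discretize(queues, phase)
for _ in range(50_000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    queues, phase, reward = step(queues, phase, action)
    nxt = discretize(queues, phase)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

print(len(Q) // len(ACTIONS), "states visited")
```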
People
Havana Rika
Holds a B.Sc. (cum laude) in computer science from Bar-Ilan University and an M.Sc. and Ph.D. in computer science from the Weizmann Institute of Science under the supervision of Prof. Robert Krauthgamer.
Our research focuses primarily on understanding the decision-making processes of artificial neural network models when solving combinatorial problems.
Elad Shoham, PhD student, current.
Supervised jointly with Dan Vilenchik.
David Ben-Michael, MSc student, current.
Supervised jointly with Dan Vilenchik.
Eyal Segal, MSc student, current.
Omri Haber, MSc student, current.