Challenge Goals

Aims

The red teams will compete to recover designs that were locked using the ALMOST and UNSAIL locking techniques. Teams will be judged on how many designs they attack, the success of those attacks (based on the criteria below), and the quality of their technical write-up and presentation.

Qualification designs:
https://github.com/DfX-NYUAD/UNSAIL
https://github.com/NYU-Hardware-Security/Synthesis-Aware-Logic-Locking-Benchmarks 

Evaluation Criteria:

Your submission will be evaluated based on your approach to recovering the following assets of the locked designs:

Instructions

Overview:

This year's challenge focuses on oracle-less attack scenarios and on locking techniques designed to thwart ML-based attacks.
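As a toy illustration of this threat model (a hypothetical sketch, not one of the challenge benchmarks), the snippet below locks a small Boolean function with a single XOR key gate; in the oracle-less setting, an attacker holds only the locked description and has no activated chip to query, which is why ML-based structural attacks are used and why ALMOST and UNSAIL aim to resist them.

```python
# Hypothetical example of XOR key-gate locking (not a challenge benchmark).
# The designer hides f(a, b) = a AND b behind a key input k.

def locked(a: int, b: int, k: int) -> int:
    # XOR key gate on the output: behaves like the original function only
    # when the correct key bit (k = 0 in this toy case) is applied.
    return (a & b) ^ k

# Oracle-less attacker: only this locked description is available, with no
# working chip to query, so key guesses must be inferred from structure alone.
print([locked(a, b, 0) for a in (0, 1) for b in (0, 1)])  # correct key: AND truth table
print([locked(a, b, 1) for a in (0, 1) for b in (0, 1)])  # wrong key: corrupted outputs
```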

You will be provided with a set of benchmarks. For each benchmark you will have:

In the first phase, you will also be provided with the locking key for each design so that you can self-assess your findings. Evaluation in the first phase will be based on the presented approach. In the finals, you will be provided with a new set of benchmarks for each technique, without the locking keys. Evaluation in the final phase will take into account both the approach and the correctness of the retrieved keys.
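Since the first-phase key is provided for self-assessment, one simple way to score a recovered key is its bit-level accuracy against the provided key. The sketch below is a minimal example only; the file names and the one-bit-per-character key format are assumptions for illustration, not part of the challenge distribution.

```python
# Minimal self-assessment sketch: compare a recovered key against the key
# provided in the first phase. File names and the one-bit-per-character
# format are assumptions for illustration only.

def load_key(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

def key_accuracy(recovered: str, golden: str) -> float:
    # Fraction of key bits recovered correctly (positionally aligned).
    assert len(recovered) == len(golden), "key lengths must match"
    correct = sum(r == g for r, g in zip(recovered, golden))
    return correct / len(golden)

if __name__ == "__main__":
    golden = load_key("provided_key.txt")      # key shipped with the phase-1 benchmark (assumed name)
    recovered = load_key("recovered_key.txt")  # key produced by your attack (assumed name)
    print(f"Key accuracy: {key_accuracy(recovered, golden):.2%}")
```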

Deliverables:

References:

"ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning" Animesh Basak Chowdhury, Lilas Alrahis, Luca Collini, Johann Knechtel, Ramesh Karri, Siddharth Garg, Ozgur Sinanoglu, Benjamin Tan [DAC '23]

"UNSAIL: Thwarting Oracle-Less Machine Learning Attacks on Logic Locking" Lilas Alrahis, Satwik Patnaik, Johann Knechtel, Hani Saleh, Baker Mohammad, Mahmoud Al-Qutayri, Ozgur Sinanoglu [IEEE Transactions on Information Forensics and Security '21]