Challenge Goals
Aims
The red teams will compete to recover the secret keys of designs locked with the ALMOST and UNSAIL locking techniques. Teams will be judged on how many designs they attack, the success of those attacks (per the criteria below), and the quality of their technical write-up and presentation.
Qualification designs:
https://github.com/DfX-NYUAD/UNSAIL
https://github.com/NYU-Hardware-Security/Synthesis-Aware-Logic-Locking-Benchmarks
Evaluation Criteria:
Your submission will be evaluated based on your approach to recovering the following assets of the locked designs:
Finding the unlocking key: points will be awarded based on accuracy: accuracy in [0%, 50%) earns 0 points; accuracy in [50%, 100%] is mapped to 0-10 points.
Approach: evaluations using state-of-the-art techniques are welcome, but novel techniques will earn extra points based on their novelty and effectiveness.
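Under the rubric above, the key-recovery score can be sketched as follows. Note that the rubric does not specify the shape of the 50%-100% mapping; a linear mapping is assumed here for illustration.

```python
def key_recovery_points(accuracy: float) -> float:
    """Map key-recovery accuracy (a fraction in [0, 1]) to points.

    Per the rubric: accuracy below 50% earns 0 points; accuracy in
    [50%, 100%] is mapped to the range [0, 10]. A *linear* mapping is
    assumed here, since the rubric does not specify the curve.
    """
    if not 0.0 <= accuracy <= 1.0:
        raise ValueError("accuracy must be in [0, 1]")
    if accuracy < 0.5:
        return 0.0
    # Linear interpolation: 0.5 -> 0 points, 1.0 -> 10 points.
    return (accuracy - 0.5) / 0.5 * 10.0
```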
Instructions
Overview:
This year's challenge focuses on oracle-less attack scenarios and locking techniques designed to thwart machine-learning (ML) attacks.
You will be provided with a set of benchmarks. For each benchmark you will have:
The ALMOST-locked version
The UNSAIL-locked version
In the first phase you will also be provided with the locking key for each design so that you can self-assess your findings. The first-phase evaluation will be based on the presented approach. In the finals you will be provided with a new set of benchmarks for each technique, without the locking keys. The final-phase evaluation will take into account both the approach and the correctness of the retrieved keys.
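For the first-phase self-assessment, a bitwise comparison of the recovered key against the provided key is sufficient. A minimal sketch, assuming both keys are binary strings of equal length (the benchmarks' actual key format may differ):

```python
def key_accuracy(recovered: str, reference: str) -> float:
    """Return the fraction of key bits recovered correctly.

    Both keys are assumed to be binary strings of equal length;
    the actual benchmark key format may differ.
    """
    if len(recovered) != len(reference):
        raise ValueError("key lengths differ")
    matches = sum(r == k for r, k in zip(recovered, reference))
    return matches / len(reference)
```

For example, a recovered key matching 96 of 128 bits of the reference key yields an accuracy of 0.75.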
Deliverables:
Technical Report: You must submit a comprehensive report of your findings. The report must include the following technical details:
Detailed assumptions about the considered threat model.
Detailed algorithm/strategy behind your attack approach. If possible, provide your setup (binaries, instructions, etc.) so that the blue team can reproduce your results.
Any other assumptions being made.
Your findings for each design, reporting the recovered key (in the first phase you may also report its correctness).
Any shortcomings of your approach (in case you are unable to extract what you intended).
References:
"ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning" Animesh Basak Chowdhury, Lilas Alrahis, Luca Collini, Johann Knechtel, Ramesh Karri, Siddharth Garg, Ozgur Sinanoglu, Benjamin Tan [DAC '23]
"UNSAIL: Thwarting Oracle-Less Machine Learning Attacks on Logic Locking" Lilas Alrahis, Satwik Patnaik, Johann Knechtel, Hani Saleh, Baker Mohammad, Mahmoud Al-Qutayri, Ozgur Sinanoglu [IEEE Transactions on Information Forensics and Security '21]