Code & Data

Getting started with Colab:

General intro: https://colab.research.google.com/notebooks/intro.ipynb

Installing dependencies: https://colab.research.google.com/notebooks/snippets/importing_libraries.ipynb

Loading data: https://colab.research.google.com/notebooks/io.ipynb
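
The patterns from those notebooks boil down to a few lines inside a Colab notebook; the package here (gpytorch) is just an example.

# Install a dependency from inside a Colab notebook (shell commands use "!").
!pip install gpytorch

# Upload a local file into the Colab runtime.
from google.colab import files
uploaded = files.upload()

# Or mount Google Drive and read data from it.
from google.colab import drive
drive.mount('/content/drive')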


Code

Bayesian Optimization & SMAC: https://drive.google.com/drive/folders/1F8MzE-8TBmjJkf-Ffmpet87_qLmgpGyX?usp=sharing

Learning to Search: https://drive.google.com/drive/folders/1dq7UP9mN1Mp2mDcI8Nc2C-isuG6zCIEk?usp=sharing

Large Neighborhood Search: https://drive.google.com/drive/folders/1EbC9mAY2chxAXs8wjZYbE7uyOIz2bM2r?usp=sharing

MAML Colab: https://colab.research.google.com/github/mari-linhares/tensorflow-maml/blob/master/maml.ipynb

MAML Code: https://github.com/cbfinn/maml

MAML in PyTorch: https://github.com/dragen1860/MAML-Pytorch

Learning to Learn: https://github.com/deepmind/learning-to-learn

Learning to Learn Notebook: https://github.com/AdrienLE/learning_by_grad_by_grad_repro/blob/master/Grad%5E2.ipynb

Higher-order gradients in PyTorch: https://github.com/facebookresearch/higher
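
A recurring pattern in the MAML repos above is an inner adaptation loop differentiated through by an outer loop. A toy sketch of that pattern using higher's documented MAML idiom, on the sine-regression task from the MAML paper (the sampler and network below are hypothetical stand-ins):

import torch
import torch.nn as nn
import higher

net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inner_opt = torch.optim.SGD(net.parameters(), lr=0.01)

def sample_task(n=10):
    # Hypothetical sampler: a random sine wave, as in the MAML paper.
    amp = torch.rand(1) * 4 + 0.1
    phase = torch.rand(1) * 3.14
    def draw():
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):                          # meta-batch of tasks
        draw = sample_task()
        x_tr, y_tr = draw()                     # support set
        x_val, y_val = draw()                   # query set (same task)
        with higher.innerloop_ctx(net, inner_opt, copy_initial_weights=False) as (fnet, diffopt):
            diffopt.step(((fnet(x_tr) - y_tr) ** 2).mean())   # inner adaptation step
            ((fnet(x_val) - y_val) ** 2).mean().backward()    # outer loss; grads flow through the inner step
    meta_opt.step()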


Bayesian Optimization:

Bayesian optimization with PyTorch: https://botorch.org/

Gaussian processes (GPs) with PyTorch: https://gpytorch.ai/

Gaussian processes with TensorFlow: https://github.com/GPflow/GPflow

GPy: https://sheffieldml.github.io/GPy/

Hyperband: https://github.com/zygmuntz/hyperband
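
As a concrete entry point, a minimal Bayesian optimization loop with BoTorch and GPyTorch (linked above) might look as follows. This assumes the 0.x-era fit_gpytorch_model helper (newer releases renamed it fit_gpytorch_mll); the objective is a toy stand-in for the real black box.

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def objective(x):                      # stand-in black-box function (maximized)
    return -((x - 0.3) ** 2).sum(dim=-1, keepdim=True)

bounds = torch.tensor([[0.0], [1.0]])  # 1-D search space
train_x = torch.rand(5, 1)
train_y = objective(train_x)

for _ in range(20):
    gp = SingleTaskGP(train_x, train_y)                  # GP surrogate of the objective
    fit_gpytorch_model(ExactMarginalLogLikelihood(gp.likelihood, gp))
    acq = ExpectedImprovement(gp, best_f=train_y.max())  # acquisition function
    cand, _ = optimize_acqf(acq, bounds=bounds, q=1,
                            num_restarts=5, raw_samples=32)
    train_x = torch.cat([train_x, cand])                 # evaluate and refit
    train_y = torch.cat([train_y, objective(cand)])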


Algorithm Configuration:

SMAC: https://github.com/automl/SMAC3

AutoML: https://github.com/automl

SATzilla: http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/

Hydra: http://www.cs.ubc.ca/labs/beta/Projects/Hydra/

SATenstein: http://www.cs.ubc.ca/labs/beta/Projects/SATenstein/
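
To make the algorithm-configuration setup concrete: a minimal SMAC3 run, assuming the SMAC3 1.x facade interface (later releases reorganized the API). The quadratic run_target is a toy stand-in for running the target algorithm with a configuration and reporting its cost.

from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import UniformFloatHyperparameter
from smac.scenario.scenario import Scenario
from smac.facade.smac_hpo_facade import SMAC4HPO

def run_target(cfg):
    # Toy stand-in: a real target would run the configured algorithm
    # on benchmark instances and return its cost (runtime or quality).
    return (cfg["x"] - 2.0) ** 2

cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter("x", -5.0, 5.0))

scenario = Scenario({"run_obj": "quality",     # optimize solution quality
                     "runcount-limit": 30,     # budget: 30 target-algorithm runs
                     "cs": cs,
                     "deterministic": "true"})
smac = SMAC4HPO(scenario=scenario, tae_runner=run_target)
incumbent = smac.optimize()                    # best configuration found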


Learning to Optimize:

Learning to learn: https://github.com/deepmind/learning-to-learn

Learning loss schedules: https://github.com/safpla/AutoLossRelease
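
The core trick in these repos (deepmind/learning-to-learn implements "Learning to learn by gradient descent by gradient descent") is unrolling an optimizer and differentiating through the unroll. A heavily simplified sketch, with an MLP update rule standing in for the paper's coordinatewise LSTM, trained on toy quadratics:

import torch
import torch.nn as nn

# Learned update rule: maps each coordinate's gradient to an update.
opt_net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

for meta_step in range(200):
    target = torch.randn(5)                     # a fresh optimizee problem
    theta = torch.zeros(5, requires_grad=True)  # optimizee parameters
    total_loss = torch.zeros(())
    for t in range(20):                         # unrolled inner optimization
        loss = ((theta - target) ** 2).sum()
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        theta = theta + opt_net(grad.unsqueeze(-1)).squeeze(-1)  # differentiable update
        total_loss = total_loss + loss
    meta_opt.zero_grad()
    total_loss.backward()                       # backprop through the whole unroll
    meta_opt.step()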


Learning for SAT:

NeuroSAT: https://github.com/dselsam/neurosat

NeuroCore: https://github.com/dselsam/neurocore-public
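
SAT instances are typically distributed in the DIMACS CNF format, so a parser is the first step in any pipeline around these repos. A minimal sketch, assuming one clause per line (the common layout):

def parse_dimacs(path):
    # Returns a list of clauses; each clause is a list of signed ints
    # (positive = variable, negative = negated variable).
    clauses = []
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok or tok[0] in ("c", "p", "%"):  # comments, header, end marker
                continue
            lits = [int(t) for t in tok]
            if lits and lits[-1] == 0:                # clauses are 0-terminated
                lits.pop()
            if lits:
                clauses.append(lits)
    return clauses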


Learning to Search for Integer Programs:

Retrospective imitation: https://github.com/onionymous/imitation-MILP-2

Co-training: https://github.com/ravi-lanka-4/CoPiEr

Learning to branch with graph neural networks: https://github.com/ds4dm/learn2branch
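
What "learning to branch" replaces is the variable-selection rule inside branch and bound (learn2branch does this inside SCIP with a graph neural network). A toy sketch on 0/1 knapsack, where the hand-coded branching_policy marks the slot a learned policy would fill; everything here is a hypothetical minimal setup.

def lp_bound(values, weights, cap, fixed):
    # Fractional-relaxation upper bound for a partial assignment
    # `fixed` (dict mapping item index to 0/1).
    val = sum(values[i] for i, b in fixed.items() if b)
    wt = sum(weights[i] for i, b in fixed.items() if b)
    if wt > cap:
        return float("-inf")                    # infeasible node
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        take = min(1.0, (cap - wt) / weights[i])
        val += take * values[i]
        wt += take * weights[i]
        if take < 1.0:
            break
    return val

def branching_policy(values, weights, fixed):
    # Hand-coded rule: branch on the best value/weight ratio among free items.
    # A learned policy would instead score candidates from node features.
    free = [i for i in range(len(values)) if i not in fixed]
    return max(free, key=lambda i: values[i] / weights[i])

def branch_and_bound(values, weights, cap):
    best, stack = 0.0, [{}]
    while stack:
        fixed = stack.pop()
        if lp_bound(values, weights, cap, fixed) <= best:
            continue                            # prune by bound
        free = [i for i in range(len(values)) if i not in fixed]
        if not free:                            # leaf: feasible, by the bound check
            best = max(best, sum(values[i] for i, b in fixed.items() if b))
            continue
        j = branching_policy(values, weights, fixed)
        stack.append({**fixed, j: 0})           # branch: exclude item j
        stack.append({**fixed, j: 1})           # branch: include item j
    return best

print(branch_and_bound([10, 13, 7], [3, 4, 2], 5))   # -> 17.0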


Learning for Combinatorial Optimization:

Learning combinatorial optimization over graphs: https://github.com/Hanjun-Dai/graph_comb_opt

ClusterNet: https://github.com/bwilder0/clusternet

Attention learning for TSP: https://github.com/wouterkool/attention-learn-to-route

Combinatorial optimization with GNNs and Guided Tree Search: https://github.com/intel-isl/NPHard
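
A pattern shared by several of these papers is a construction heuristic whose scoring function is learned. A toy TSP version, with a hand-coded nearest-neighbor score occupying the slot where a trained model (e.g. the attention decoder above) would go:

import math, random

def greedy_tour(coords, score):
    # Build a tour by repeatedly appending the highest-scoring city.
    tour, unvisited = [0], set(range(1, len(coords)))
    while unvisited:
        nxt = max(unvisited, key=lambda j: score(coords, tour, j))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def nearest_neighbor_score(coords, tour, j):
    # Hand-coded stand-in for a learned policy: prefer the closest city.
    (x1, y1), (x2, y2) = coords[tour[-1]], coords[j]
    return -math.hypot(x2 - x1, y2 - y1)

coords = [(random.random(), random.random()) for _ in range(10)]
print(greedy_tour(coords, nearest_neighbor_score))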


Theorem Proving:

DeepMath: https://github.com/tensorflow/deepmath

GamePad: https://github.com/ml4tp/gamepad

CoqGym: https://github.com/princeton-vl/CoqGym


Differentiable Optimization:

JAX: https://github.com/google/jax

CVXPyLayers: https://github.com/cvxgrp/cvxpylayers
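
The cvxpylayers README example captures the idea: declare a parameterized convex problem, wrap it as a layer, and backpropagate through its solution. Roughly, per its documented PyTorch interface:

import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# A least-squares problem with nonnegativity, parameterized by A and b.
x = cp.Variable(2)
A = cp.Parameter((3, 2))
b = cp.Parameter(3)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])
A_t = torch.randn(3, 2, requires_grad=True)
b_t = torch.randn(3, requires_grad=True)
x_star, = layer(A_t, b_t)       # forward pass solves the problem
x_star.sum().backward()         # gradients w.r.t. A_t and b_t through the argmin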


Data

Meta-Dataset: https://github.com/google-research/meta-dataset

Omniglot: https://github.com/brendenlake/omniglot/

Test functions for optimization: https://www.sfu.ca/~ssurjano/optimization.html
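
Those test functions are easy to code up for quick optimization experiments. For instance, the Branin function from that page, in its standard form:

import math

def branin(x1, x2):
    # Branin-Hoo test function on [-5, 10] x [0, 15]; three global minima
    # with value ~0.397887, e.g. at (pi, 2.275).
    a, b, c = 1.0, 5.1 / (4 * math.pi ** 2), 5 / math.pi
    r, s, t = 6.0, 10.0, 1 / (8 * math.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * math.cos(x1) + s

print(branin(math.pi, 2.275))   # ~0.397887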