https://github.com/NervanaSystems/neon
Neon is Intel Nervana's reference deep learning framework, committed to best performance on all hardware and designed for ease of use and extensibility.
Support for commonly used layers: convolution, RNN, LSTM, GRU, BatchNorm, and more.
Install from source:
git clone https://github.com/NervanaSystems/neon.git
cd neon
make
. .venv/bin/activate
Or install from PyPI:
pip install nervananeon
See also:
https://github.com/NervanaSystems/ngraph
Nervana Graph is Nervana's library for developing frameworks that can efficiently run deep learning computations on a variety of compute platforms. It consists of three primary API components: an API for creating computational graphs, frontend APIs built on top of it, and a transformer API for compiling and executing those graphs.
Installation: https://ngraph.nervanasys.com/docs/latest/installation.html
https://github.com/01org/mkl-dnn
For multinode support (assuming MPI is set up):
make multinode_prepare
To install Nervana Graph:
make install
To add GPU support:
make gpu_prepare
To uninstall Nervana Graph:
make uninstall
To run the tests:
make [test_cpu|test_mkldnn|test_gpu|test_integration]
Before checking in code, ensure there are no "make style" errors.
=====
Create graph
https://ngraph.nervanasys.com/docs/latest/building_graphs.html
>>> x = ng.constant(0)
>>> y = ng.constant(1)
>>> mysum = ng.add(x, y)
>>> type(mysum)
ngraph.op_graph.op_graph.AddOp
>>> x = ng.placeholder((), initial_value=0)
>>> a = ng.assign(x, 5)
>>> z = x + 1
import ngraph as ng
ax_C = ng.make_axis(length=4, name='C')
ax_W = ng.make_axis(length=2, name='W')
ax_H = ng.make_axis(length=2, name='H')
ax_N = ng.make_axis(length=128, name='N')
# creates an AssignableTensorOp with axes (C, W, H, N)
x = ng.placeholder((ax_C, ax_W, ax_H, ax_N))
It is important to remember that x is a Python variable that holds an Op. Therefore, the following code
x = x + x
does not directly double the value of the tensor in the placeholder. Instead, the __add__ method is called with both arguments pointing to the same placeholder object. This returns a new Op that is then stored in the Python variable x.
Consider the following example:
x1 = x + x
y = x1 * x1 - x
The intermediate value x + x is only computed once, since the same Op is used for both arguments of the multiplication in y.
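The two behaviors above can be sketched in plain Python. The class below is a hypothetical stand-in for ngraph's Op types (not the real classes): it shows that rebinding `x = x + x` creates a new node while leaving the original placeholder untouched, and that using the same `x1` node twice means the multiplication's arguments are literally one shared object, so it is computed once.

```python
# Hypothetical stand-in for ngraph Ops: each arithmetic operator
# returns a NEW node that records its operation and its arguments.
class Op:
    def __init__(self, name, args=()):
        self.name, self.args = name, args

    def __add__(self, other):
        return Op("add", (self, other))

    def __mul__(self, other):
        return Op("mul", (self, other))

    def __sub__(self, other):
        return Op("sub", (self, other))

# Rebinding: x = x + x does not mutate the placeholder.
x = Op("placeholder")
orig = x
x = x + x                       # a new "add" node; `orig` is unchanged
assert x is not orig
assert x.args[0] is orig and x.args[1] is orig

# Sharing: reusing x1 means both multiply arguments are the SAME node.
x = Op("placeholder")
x1 = x + x
y = x1 * x1 - x
assert y.args[0].args[0] is y.args[0].args[1]   # x1 shared -> computed once
```

A transformer walking this graph sees `x1` as one node with two incoming edges, which is exactly why the intermediate value is evaluated only once.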
Read more about Axes and autodiff in the building-graphs documentation.
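As a plain-Python illustration of what named axes provide (this is a sketch, not the real `ng.make_axis` implementation): an axis pairs a name with a length, and a tensor's shape is just the tuple of its axes' lengths.

```python
from collections import namedtuple

# Hypothetical stand-in for ng.make_axis: a named axis is a
# (name, length) pair; ordering axes determines the tensor shape.
Axis = namedtuple("Axis", ["name", "length"])

ax_C = Axis("C", 4)
ax_W = Axis("W", 2)
ax_H = Axis("H", 2)
ax_N = Axis("N", 128)

axes = (ax_C, ax_W, ax_H, ax_N)
shape = tuple(ax.length for ax in axes)   # (4, 2, 2, 128)
names = tuple(ax.name for ax in axes)     # ('C', 'W', 'H', 'N')
```

Because axes carry names, operations can match dimensions by name rather than by position, which is the motivation behind the Axes design.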
Then, after creating the graph, the next step is to execute it with a transformer.
Transformers are used to convert the Op graph into a backend-specific executable format. Once the graph has been defined, one or more computations are created using a transformer. Computations are handles to executable objects created by the transformer, which can be called to evaluate a subset of the entire graph.
Two kinds of backends are supported:
- CPUs (via NumPy)
- NVIDIA* GPUs (via PyCUDA)
import ngraph as ng
import ngraph.transformers as ngt

a = ng.constant(4)
b = ng.placeholder(())
c = ng.placeholder(())
d = ng.multiply(a, b)
e = ng.add(d, c)

transformer = ngt.make_transformer()
example_comp = transformer.computation(e, b, c)
result_e = example_comp(2, 7)  # 4 * 2 + 7 = 15
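Since the nervanagraph package may no longer install cleanly, here is a plain-Python sketch of what a computation handle does conceptually: bind argument values to the placeholder nodes, then evaluate the graph bottom-up. The node classes and helper functions below are stand-ins, not the real ngraph API.

```python
# Hypothetical mini op-graph mimicking the transformer example above.
class Node:
    def __init__(self, fn=None, args=(), value=None):
        self.fn, self.args, self.value = fn, args, value

def constant(v):
    return Node(value=v)

def placeholder():
    return Node()

def multiply(a, b):
    return Node(lambda x, y: x * y, (a, b))

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def computation(result, *params):
    """Return a callable that substitutes values for the given
    placeholder nodes and evaluates the graph recursively."""
    def evaluate(node, env):
        if node in env:          # a bound placeholder
            return env[node]
        if node.fn is None:      # a constant
            return node.value
        return node.fn(*(evaluate(a, env) for a in node.args))

    def run(*values):
        env = dict(zip(params, values))
        return evaluate(result, env)
    return run

# Mirror the ngraph example: e = 4 * b + c
a = constant(4)
b = placeholder()
c = placeholder()
e = add(multiply(a, b), c)

example_comp = computation(e, b, c)
print(example_comp(2, 7))   # 4 * 2 + 7 = 15
```

A real transformer does much more (compilation, memory planning, backend-specific kernels), but the handle-to-executable relationship is the same: the computation evaluates only the subgraph reachable from the requested result.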
Nervana Graph can also serve as a backend for TensorFlow and Caffe frontends.