Chainer is a Python-based DL framework aiming at flexibility [Chainer] [Tokui 2015]. It provides automatic differentiation APIs based on the define-by-run approach (i.e., dynamic computational graphs), as well as object-oriented high-level APIs to build and train NNs.
Unlike other well-known DL frameworks such as TensorFlow or Caffe, Chainer constructs the NN dynamically. It also supports CUDA/cuDNN through CuPy (NumPy + CUDA = CuPy) for high-performance training and inference. Chainer's core team of developers works at Preferred Networks, Inc., an ML startup with engineers mainly from the University of Tokyo. Chainer supports CNN and RNN DL architectures. DL frameworks are usually built on the "Define-and-Run" scheme, i.e., a computational graph is constructed at the beginning and the network is statically defined and fixed. Chainer's design is instead based on the "Define-by-Run" principle: the network is not predefined at the beginning but is dynamically defined on-the-fly (i.e., a dynamic computational graph, or DCG). Chainer includes libraries for industrial applications, e.g., ChainerCV (a library for DL in computer vision), ChainerRL (a deep reinforcement learning library built on top of Chainer), and ChainerMN (scalable multi-node distributed DL with Chainer, reporting near-linear speed-up on up to 128 GPUs). According to Chainer benchmarks, Intel Chainer with the MKL-DNN backend is approximately 8.35 times faster than the NumPy backend.
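To make the define-by-run idea concrete, the following is a minimal, framework-independent sketch of a dynamic computational graph: the graph edges are recorded as operations execute, and backpropagation traverses the graph that was built on the fly. The `Var` class and its methods are illustrative assumptions, not Chainer's actual API (in Chainer the analogous role is played by `chainer.Variable` and its `backward()` method).

```python
# Illustrative define-by-run autodiff sketch (NOT Chainer's API).
# Each arithmetic op both computes a value and records how to
# propagate gradients back to its inputs, so the graph is defined
# by running the forward computation itself.
class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self.grad_fn = None  # set when this node is produced by an op

    def __add__(self, other):
        out = Var(self.value + other.value)
        # gradient of a sum flows through unchanged to both inputs
        out.grad_fn = lambda g: [(self, g), (other, g)]
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        # product rule: d(xy)/dx = y, d(xy)/dy = x
        out.grad_fn = lambda g: [(self, g * other.value),
                                 (other, g * self.value)]
        return out

    def backward(self):
        # Simplified reverse traversal of the dynamically built graph
        # (a real framework would use a proper topological order).
        self.grad = 1.0
        stack = [self]
        while stack:
            v = stack.pop()
            if v.grad_fn is None:
                continue
            for parent, g in v.grad_fn(v.grad):
                parent.grad += g  # accumulate, since a node may be reused
                stack.append(parent)


x = Var(3.0)
y = x * x + x      # graph is built while this line executes
y.backward()
print(x.grad)      # dy/dx = 2x + 1 = 7.0 at x = 3
```

Because the graph is rebuilt on every forward pass, control flow such as Python `if` statements and loops can change the network's structure from one iteration to the next, which is exactly the flexibility the define-by-run design targets.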
Strong points
Weak points