PyTorch is a Python library for GPU-accelerated DL [PyTorch]. The library is a Python interface to the same optimized C libraries that Torch uses. It has been developed since 2016 by Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan of Facebook's AI research group.
PyTorch is written in Python, C, and CUDA. The library integrates acceleration libraries such as Intel MKL and the NVIDIA libraries cuDNN and NCCL. At its core, it uses CPU and GPU Tensor and NN backends (TH, THC, THNN, THCUNN) written as independent libraries against a C99 API. PyTorch supports tensor computation with strong GPU acceleration (it provides Tensors that can run on either the CPU or the GPU, greatly accelerating compute) and DNNs built on a tape-based autograd system.

It has become popular by allowing certain complex architectures to be built easily [Deeplearning4j, 2018]. Typically, changing the way a network behaves means starting from scratch. PyTorch uses a technique called reverse-mode auto-differentiation, which allows the behavior of a network to be changed with little effort (i.e. a dynamic computational graph, or DCG). It is mostly inspired by autograd [autograd] and Chainer [Chainer]. The library is used by both the scientific and the industrial community. An engineering team at Uber has built Pyro, a universal probabilistic programming language that uses PyTorch as its back end. The DL training site fast.ai announced that its courses would switch from Keras-TensorFlow to PyTorch [Patel, 2017]. The library is freely available under a BSD license and is supported by Facebook, Twitter, NVidia, and many other organizations.
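As a minimal sketch of the tape-based autograd and CPU/GPU tensor placement described above (assuming a standard PyTorch installation; the tensor shapes and function are illustrative):

```python
import torch

# Place the tensor on the GPU when available, otherwise on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.ones(3, device=device, requires_grad=True)

# The computational graph is built dynamically as the code runs (DCG):
# any Python control flow could change the graph on each iteration.
y = (x * x).sum()  # y = x1^2 + x2^2 + x3^2

# Reverse-mode auto-differentiation: the recorded tape is replayed
# backwards to populate x.grad with dy/dx.
y.backward()

print(x.grad)  # dy/dx_i = 2 * x_i, i.e. 2.0 for each element
```

Because the graph is rebuilt on every forward pass, modifying the network's behavior (e.g. with data-dependent branches or loops) requires no recompilation step.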
Strong points
Weak points