Torch is a scientific computing framework, based on the Lua programming language, with wide support for machine-learning algorithms [Torch]. It has been under active development since 2002. The original authors are Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet [Collobert 2002].
Torch was developed using an object-oriented paradigm and implemented in C++. Today its API is written in Lua, a multi-paradigm scripting language created in 1993 by R. Ierusalimschy, L. H. de Figueiredo, and W. Celes at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio). Lua serves as a wrapper for optimized C/C++ and CUDA code. The core of the framework is a tensor library that provides both CPU and GPU backends. In the current version, Torch7, the tensor library offers a large set of classic operations (including linear algebra), efficiently implemented in C, leveraging SSE instructions on Intel platforms and optionally binding linear-algebra operations to existing efficient BLAS/LAPACK implementations such as Intel MKL [Collobert 2011]. The framework supports parallelism on multi-core CPUs via OpenMP and on GPUs via CUDA. It is aimed at large-scale learning (speech, image, and video applications) and provides supervised, unsupervised, and reinforcement learning, neural networks, optimization, graphical models, and image processing. Torch is supported and used by Facebook, Google, DeepMind, Twitter, and many other organizations. The framework is freely available under a BSD license.
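To illustrate the Lua-wrapped tensor library described above, the following is a minimal sketch, assuming a standard Torch7 installation (the `torch` package loaded via `require`); the variable names are illustrative, not taken from the framework's documentation:

```lua
-- Minimal Torch7 sketch: tensor creation and a BLAS-backed
-- linear-algebra operation from the core tensor library.
require 'torch'

local A = torch.randn(3, 3)   -- 3x3 tensor with Gaussian entries
local x = torch.ones(3)       -- vector of ones
local y = torch.mv(A, x)      -- matrix-vector product, dispatched to BLAS

print(y)                      -- a torch.DoubleTensor of size 3
```

The same code runs unchanged on the GPU backend by moving the tensors to CUDA types (e.g. via the `cutorch` package), which is the mechanism behind the CPU/GPU backend split mentioned above.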
Strong points
Weak points