Getting Started with Torch

Torch7 provides a Matlab-like environment for state-of-the-art machine learning algorithms. It is easy to use and provides a very efficient implementation, thanks to an easy and fast scripting language, LuaJIT and an underlying C/OpenMP/CUDA implementation.

At this time, Torch7 is maintained by Ronan Collobert, Clement Farabet, Koray Kavukcuoglu, and Soumith Chintala.

Getting the Code for this Tutorial

The complete code for this series of tutorials is available on GitHub.

To get it on your machine, if you have git installed on your system, type the following line in your shell:

$ git clone https://github.com/torch/tutorials.git

If you don't have git simply click the "ZIP" button on the GitHub page.

Installing Torch

For these tutorials, we assume that Torch is already installed, as well as the extra packages (image, nnx, unsup). To install Torch on your machine, follow the official installation instructions.

Running Code

Torch7 relies on the LuaJIT interpreter, so the easiest way to get started is to launch the interpreter and start typing commands:

$ luajit
LuaJIT 2.0.2 -- Copyright (C) 2005-2013 Mike Pall. http://luajit.org/
JIT: ON CMOV SSE2 SSE3 SSE4.1 fold cse dce fwd dse narrow loop abc sink fuse
>
> a = 10
> =a
10
> print 'something'
something
> = 'something'
something
>

The raw LuaJIT interpreter is a bit dry (no completion, no help, nothing preloaded...). We offer a package called trepl (a REPL for Torch), which provides lots of goodies. To use it, simply replace luajit with th:

$ th
>

th provides nicer printing, line completion, access to shell commands, and more.
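A small sample session gives a taste of these goodies. In th, typing a value's name prints it automatically, and (in recent versions of trepl) a line starting with $ is forwarded to the shell; exact output may vary between versions:

```
$ th
> a = torch.Tensor(3):fill(1)
> a
 1
 1
 1
[torch.DoubleTensor of size 3]
> $ ls
getstarted.lua
```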

By default, the interpreter preloads only torch. Extra packages, such as image and nn, must be required manually:

> require 'nn'
> require 'image'

Rendering / Displaying

The iTorch package allows easy rendering of images, using a standard web browser as a graphics client. Follow the instructions on this page to install it. Make sure that you have IPython installed before installing iTorch.

Starting a session like this launches an iTorch notebook server, which lets you visualize your plots and images in your browser at localhost:8888:

$ itorch notebook

Now start a new notebook to run your code. Type your code into the code cells, and press [Shift]+[Enter] to execute it.

> i = image.lena()
> itorch.image(i)

In these tutorials, we'll be interested in visualizing internal states, and convolution filters:

> n = nn.SpatialConvolution(1,64,16,16)
> itorch.image(n.weight)

> n = nn.SpatialConvolution(1,16,12,12)
> res = n:forward(image.rgb2y(image.lena()))
> itorch.image(res:view(16,1,501,501))
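The 501x501 size of those feature maps follows from 'valid' convolution arithmetic: lena is 512x512 and the kernel is 12x12, so each output map is 512 - 12 + 1 = 501 pixels on a side. A small pure-Lua helper makes the rule explicit (convOutputSize is a hypothetical name for illustration, not a torch function):

```lua
-- Output size of a 'valid' convolution along one dimension.
-- convOutputSize is a hypothetical helper, not part of torch.
function convOutputSize(input, kernel, stride)
  stride = stride or 1
  return math.floor((input - kernel) / stride) + 1
end

print(convOutputSize(512, 12))  -- 501, matching the view above
print(convOutputSize(512, 16))  -- 497, for the 16x16 filters earlier
```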

Last, but not least, you can of course save code in a file, which should use the .lua extension, and execute it either from your shell prompt or from the LuaJIT interpreter. All the code in this tutorial has been written to a file called getstarted.lua. You can run it like this:

$ th getstarted.lua

or from the Torch interpreter:

> dofile 'getstarted.lua'

Getting help

Torch's documentation is a work in progress, but you can already get help for most functions provided in the official packages (torch, nn, gnuplot, image). Documentation is available in the Git repository of each package: torch, nn, ...

A quick way to learn functions and explore packages is to use TAB completion: start typing something in the Torch interpreter, then hit TAB twice:

> torch. + TAB
torch.ByteTensor.            torch.include(
torch.CharStorage.           torch.initialSeed(
torch.CharTensor.            torch.inverse(
...

This will give you a list of all functions provided in the torch package. You can then get inline help for a specific function like this:

> help(torch.randn)
 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[res] torch.randn( [res,] m [, n, k, ...])
 y=torch.randn(n) returns a one-dimensional tensor of size n filled
with random numbers from a normal distribution with mean zero and variance
one.
 y=torch.randn(m,n) returns a mxn tensor of random numbers from a normal
distribution with mean zero and variance one.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Going Further

Now that you have all the basic pieces, I invite you to take a look at Lua, LuaJIT and Torch's documentation.

The Torch core is a general numeric library, and although most early packages were built around neural network training, lots of third-party packages have been developed that provide code for all sorts of things, including image processing, video processing, graphical models (CRFs, ...), parallel computing, camera interfaces, ...

A complete list of the packages we support is available online. The Torch ecosystem relies on LuaRocks for package distribution; each package is distributed via its own Git repository, and the main packages are hosted on the main Torch portal.

(Easy) Exercise

In any machine learning task, data plays a central role, and the most basic thing to do when getting new data is to ensure that it is properly normalized; this is valid for any ML problem. The MNIST dataset is already stored in the Torch file format, and you can download it like this:

$ wget https://s3.amazonaws.com/torch7/data/mnist.t7.tgz
$ tar xvf mnist.t7.tgz

You can then load both the training and test data like this:

> train = torch.load('mnist.t7/train_32x32.t7', 'ascii')
> test = torch.load('mnist.t7/test_32x32.t7', 'ascii')
> = train
{[data]   = ByteTensor - size: 60000x1x32x32
 [labels] = ByteTensor - size: 60000}

The data needs to be normalized. Verify the initial data range, and write the necessary code to ensure that the data has zero mean and unit norm. Depending on the task, you might want to normalize the data globally (at the level of the whole training set), independently for each patch, or for each feature (pixel, in this case).
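As a hint, one common choice is global normalization: subtract the mean of the whole training set and divide by its standard deviation (per-patch and per-pixel variants follow the same pattern). A minimal sketch, assuming the data has been downloaded as above and a th session is running:

```lua
require 'torch'

-- Load the training set and convert from bytes to floats,
-- so that mean/std arithmetic is meaningful.
local train = torch.load('mnist.t7/train_32x32.t7', 'ascii')
local data = train.data:float()

print(data:min(), data:max())   -- verify the initial range first

-- Global normalization over the whole training set.
local mean = data:mean()
local std  = data:std()
data:add(-mean):div(std)

print(data:mean(), data:std())  -- now approximately 0 and 1
```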

You will need to be able to slice these arrays, in order to use efficient numeric routines (à la Matlab). If you can't figure out how to do it, check out this script: ../1_supervised/A_slicing.lua.
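For reference, basic tensor slicing in torch looks like this (a small standalone sketch, independent of the MNIST data):

```lua
require 'torch'

-- Build a small 2x3x4 tensor to slice.
local t = torch.range(1, 24):view(2, 3, 4)

local first = t[1]               -- the first 3x4 slice
local rows  = t[{ {}, {1,2} }]   -- rows 1-2 of every slice: 2x2x4
local col   = t[{ {}, {}, 3 }]   -- the third column everywhere: 2x3

print(first:size())
print(rows:size())
print(col:size())
```

These slices are views into the original storage, so no data is copied.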

As a last little challenge, try to display a subset of the training images. By now you should have seen enough to be able to do that.
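If you get stuck, note that itorch.image accepts a batch of images and renders it as a grid, so (in an iTorch notebook, with train loaded as above) a one-liner suffices:

```lua
-- Display the first 16 training digits as a mosaic.
itorch.image(train.data[{ {1,16} }])
```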


