[https://nicholastsmith.wordpress.com/2016/04/20/stock-market-prediction-using-multi-layer-perceptrons-with-tensorflow/] In this post, a multi-layer perceptron (MLP) class based on the TensorFlow library is discussed. The class is then applied to the problem of performing stock prediction given historical data.

## Data Setup

The data used in this post was collected from finance.yahoo.com. It consists of historical stock data for Yahoo Inc. covering the period from April 12, 1996 to April 19, 2016, and can be downloaded as a CSV file from the provided link. To pre-process the data for the neural network, first transform the dates into integer values using LibreOffice's DATEVALUE function. A screenshot of the transformed data can be seen as follows:
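The DATEVALUE step can also be done directly in Python rather than in the spreadsheet. A minimal sketch (the function name is mine; the 1899-12-30 epoch matches LibreOffice's default date system):

```python
from datetime import date

# LibreOffice's DATEVALUE counts days from an epoch of 1899-12-30
# (the default date system in Calc). The same serial number can be
# computed in Python, skipping the spreadsheet step entirely.
EPOCH = date(1899, 12, 30)

def datevalue(iso_string):
    """Convert an ISO date string like '1996-04-12' to a spreadsheet serial."""
    y, m, d = map(int, iso_string.split("-"))
    return (date(y, m, d) - EPOCH).days

print(datevalue("1996-04-12"))
```

Applying this to the "Date" column yields the same integers as the DATEVALUE column shown above.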
For simplicity's sake, the "High" value will be predicted from the "Date Value" alone. Thus, the goal is to create an MLP that takes as input a date in the form of an integer and returns a predicted high value of the Yahoo Inc. stock price for that day. With the date values saved in the spreadsheet, the data is next loaded into Python. To improve the performance of the MLP, the data is first scaled so that both the input and output have mean 0 and variance 1. This can be accomplished as follows (take note that "Date Value" is in column index 1 and "High" is in column index 4): The produced plot is as follows:
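The scaling step can be sketched with NumPy as follows. The file name and the stand-in rows are hypothetical; the column indices (1 for "Date Value", 4 for "High") follow the post, and the saved mean and standard deviation are kept so predictions can later be mapped back to dollars:

```python
import numpy as np

# Hypothetical file name; in practice load the CSV exported above:
# data = np.loadtxt("yahoo_stock.csv", delimiter=",", skiprows=1)
# For illustration, a tiny stand-in array with the same column layout:
data = np.array([[0, 35167.0, 0, 0, 33.0],
                 [0, 35168.0, 0, 0, 34.5],
                 [0, 35169.0, 0, 0, 32.0]])

A = data[:, 1].reshape(-1, 1)   # input: date value (column index 1)
y = data[:, 4].reshape(-1, 1)   # target: daily high (column index 4)

def scale(v):
    """Standardize to mean 0 and variance 1; keep the statistics so
    scaled predictions can be inverted back to original units."""
    mu, sigma = v.mean(), v.std()
    return (v - mu) / sigma, mu, sigma

A_s, A_mu, A_sigma = scale(A)
y_s, y_mu, y_sigma = scale(y)
```

The scaled arrays `A_s` and `y_s` are what the network trains on.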
Next, an MLP is constructed and trained on the scaled data.

## Creating the MLP

The MLP class that will be used follows a simple interface similar to that of the Python scikit-learn library. The source code is available here. The interface is as follows: The first step is to create an MLPR object. This can be done as follows: With this code, an MLPR object will be initialized with the given layer sizes, a training iteration limit of 1000, an error tolerance of 0.40 (for the RMSE), a regularization weight of 0.001, and verbose output enabled. The source code for the MLPR class shows how this is accomplished.

As seen above, TensorFlow placeholder variables are created for the input (x) and the output (y). Next, TensorFlow variables for the weight matrices and bias vectors are created using the _CreateVars() function. The weights are initialized as random normal numbers with standard deviation 1/√n, where n is the fan-in to the layer. Next, the MLP model is constructed using its definition as discussed in an earlier post. After that, the loss and regularization functions are defined using the L2 loss. Regularization penalizes larger values in the weight matrices and bias vectors to help prevent over-fitting. Lastly, TensorFlow's AdamOptimizer is employed as the training optimizer with the goal of minimizing the loss function. Note that at this stage no learning has been done; only the TensorFlow graph has been initialized with the necessary components of the MLP.

Next, the MLP is trained on the Yahoo stock data. A hold-out period is used to assess how well the MLP is performing. This can be accomplished as follows: When the fit function is called, the actual training process begins. First, a TensorFlow session is created and all variables defined in the constructor are initialized. Then, training iterations are performed up to the iteration limit provided; in each iteration the weights are updated and the error is recorded.
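The original MLPR class builds a TensorFlow 1.x graph; as a self-contained illustration of the same ingredients (fan-in-scaled normal initialization, L2 loss with L2 regularization, an RMSE-based early stop at the tolerance), here is a minimal NumPy stand-in. Plain gradient descent replaces AdamOptimizer, and the hyperparameter names are my own, not the original class's:

```python
import numpy as np

class MLPR:
    """Minimal NumPy stand-in for the post's TensorFlow MLPR class.
    Same interface idea (layer sizes, iteration limit, RMSE tolerance,
    regularization weight, verbosity); plain gradient descent is used
    in place of tf.train.AdamOptimizer."""

    def __init__(self, layers, maxIter=1000, tol=0.40, reg=0.001,
                 lr=0.01, verbose=False):
        self.maxIter, self.tol, self.reg = maxIter, tol, reg
        self.lr, self.verbose = lr, verbose
        rng = np.random.default_rng(0)
        # Weights drawn from a normal distribution with standard
        # deviation 1/sqrt(n), where n is the fan-in, as in the post.
        self.W = [rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, m))
                  for n, m in zip(layers[:-1], layers[1:])]
        self.b = [np.zeros((1, m)) for m in layers[1:]]

    def _forward(self, X):
        acts = [X]
        for i, (W, b) in enumerate(zip(self.W, self.b)):
            z = acts[-1] @ W + b
            # tanh on hidden layers, identity on the output layer
            acts.append(np.tanh(z) if i < len(self.W) - 1 else z)
        return acts

    def fit(self, X, y):
        for it in range(self.maxIter):
            acts = self._forward(X)
            err = acts[-1] - y
            rmse = np.sqrt((err ** 2).mean())
            if rmse < self.tol:        # early stop at the tolerance
                break
            # Backpropagate the L2 loss plus L2 regularization on the
            # weight matrices and bias vectors.
            delta = 2.0 * err / len(X)
            for i in reversed(range(len(self.W))):
                gW = acts[i].T @ delta + self.reg * self.W[i]
                gb = delta.sum(axis=0, keepdims=True) + self.reg * self.b[i]
                if i > 0:  # propagate through tanh: tanh'(z) = 1 - tanh(z)^2
                    delta = (delta @ self.W[i].T) * (1.0 - acts[i] ** 2)
                self.W[i] -= self.lr * gW
                self.b[i] -= self.lr * gb
            if self.verbose and it % 100 == 0:
                print(f"iter {it}: RMSE {rmse:.4f}")
        return self

    def predict(self, X):
        return self._forward(X)[-1]
```

A constructor call such as `MLPR([1, 32, 32, 1], maxIter=1000, tol=0.40, reg=0.001, verbose=True)` then mirrors the settings described above.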
The feed_dict parameter specifies the values of the inputs (x) and outputs (y). If the error falls below the tolerance level, training stops early; otherwise, training continues until the maximum number of iterations is exhausted. With the MLP network trained, prediction can be performed and the results plotted using matplotlib.
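Because the network was trained on standardized targets, its predictions come back in scaled units and must be inverted before plotting against the raw prices. A small sketch (the statistics and prediction values here are stand-ins; in practice use the mean and standard deviation saved during scaling):

```python
import numpy as np

# Statistics saved when the "High" column was scaled (stand-in values).
y_mu, y_sigma = 35.0, 12.0

def unscale(y_scaled, mu, sigma):
    """Invert the (v - mu) / sigma standardization back to dollars."""
    return y_scaled * sigma + mu

# Scaled predictions from the trained network (stand-in values).
yHat_scaled = np.array([-0.5, 0.0, 0.5])
yHat = unscale(yHat_scaled, y_mu, y_sigma)

# Plotting then proceeds as in the post, e.g.:
# import matplotlib.pyplot as plt
# plt.plot(dates, highs, label="High")
# plt.plot(dates, yHat, label="MLP prediction")
# plt.legend(); plt.show()
```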
As can be seen, the MLP smooths the original stock data. The amount of smoothing depends on the MLP parameters, including the number of layers, the size of the layers, the error tolerance, and the amount of regularization. In practice, a fair amount of parameter tuning is required to get decent results from a neural network.