Geraud Nangue Tasse

Parallel Training Of Neural Networks

Supervised by: Mr. James Connan

Neural networks are among the most widely used machine-learning techniques. Unfortunately, they rely heavily on high processing power to achieve good results within a reasonable time frame. They are used extensively by large organizations such as Google, Amazon, and Facebook, which have the necessary computing power, but far less by smaller organizations and individual developers, who can only afford to apply them to less computationally intensive tasks.

The question, then, is how to make neural networks running on everyday devices (such as phones, laptops, and desktops) perform comparably to those running on the supercomputers and servers of large organizations. Since graphics processing units (GPUs) are now common in most devices, or can be acquired on a reasonable budget, this project attempts to address the problem by accelerating the neural network training process with a novel parallel algorithm.
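The abstract does not describe the novel algorithm itself, but as a point of reference, the baseline it aims to improve on is ordinary GPU-accelerated mini-batch training. A minimal sketch of that baseline, assuming PyTorch and a CUDA-capable device (the architecture and data here are arbitrary stand-ins):

```python
import torch
import torch.nn as nn

# Use the GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small feed-forward classifier; the layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Random stand-in data: 64 flattened 28x28 inputs with integer class labels.
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# One training step: the forward pass, loss, and backward pass all execute
# as data-parallel kernels on the GPU, which is the source of the speed-up.
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

Within a single step, the GPU parallelizes over the batch and over matrix elements; a parallel training algorithm in the sense of this project would additionally parallelize across steps or across devices.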