Geraud Nangue Tasse

Parallel Training Of Neural Networks
Supervised by: Mr. James Connan


Neural networks are among the most widely used machine learning techniques. Unfortunately, they rely heavily on high processing power to achieve good results within a reasonable time frame. They are used extensively by large organizations such as Google, Amazon, and Facebook, which have the necessary computing power, but far less by smaller organizations and individual developers, who can only afford to apply them to less computationally intensive tasks.

The question, then, is how to make neural networks running on everyday devices (such as phones, laptops, and desktops) perform comparably to those running on the supercomputers and servers of large organizations. Since Graphics Processing Units (GPUs) are now common in most devices, or can be acquired on a reasonable budget, this project attempts to solve the problem by accelerating the neural network training process with a novel parallel algorithm.
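To make the general idea of parallel training concrete, here is a minimal sketch of one common approach: data-parallel gradient descent, in which each worker computes the gradient on its own shard of a mini-batch and the shard gradients are averaged before the weight update. This is a generic illustration, not the thesis's specific algorithm; all names, model choices, and sizes are arbitrary assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Generic data-parallel gradient descent sketch (NOT the thesis's algorithm):
# each worker computes the logistic-loss gradient on its shard of the data,
# and the shard gradients are averaged before updating the shared weights.

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 20))           # synthetic inputs (assumed sizes)
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)       # synthetic binary labels

w = np.zeros(20)                         # shared model weights
lr = 0.5
n_workers = 4

def shard_gradient(shard):
    """Gradient of the logistic loss on one data shard."""
    Xs, ys = shard
    p = 1.0 / (1.0 + np.exp(-(Xs @ w)))  # sigmoid predictions
    return Xs.T @ (p - ys) / len(ys)

for _ in range(200):
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(shard_gradient, shards))
    w -= lr * np.mean(grads, axis=0)     # averaged update across workers

preds = ((X @ w) > 0).astype(float)
accuracy = (preds == y).mean()
```

On a GPU or across machines, the same pattern applies with the shard gradients computed on separate devices and combined by an all-reduce step; the thread pool here merely stands in for those workers.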
Attachment: Thesis.pdf (5418k), Geraud Nangue Tasse, Sep 25, 2018