Research

Distributed TensorFlow with Docker

Demonstrates an implementation that improves TensorFlow performance by running distributed TensorFlow in Docker containers on the same multicore server, compared with running TensorFlow natively. Open-source TensorFlow, like Google's earlier DistBelief project, exploits model and data parallelism via distributed training; combining Docker with ClusterSpec extends that advantage to non-GPU learning clusters. This Docker-based technique shows a measured improvement over native TensorFlow and differs from other distribution methods such as Intel's YARN integration, Yahoo's Spark integration, TensorFrames, and TensorFlow on Kubernetes.
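As a minimal sketch of the ClusterSpec approach the summary refers to (the job names, ports, and task counts below are illustrative assumptions, not taken from the paper): each Docker container on the multicore host would run one task of a cluster defined by a mapping like this, using TensorFlow's 1.x distributed API.

```python
# Hypothetical cluster definition: one parameter-server task and two
# worker tasks, each intended to run in its own Docker container on the
# same multicore host. Ports and task counts are illustrative only.
cluster_def = {
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
}

# Inside each container, that task's server would be started roughly as:
#   import tensorflow as tf
#   cluster = tf.train.ClusterSpec(cluster_def)
#   server = tf.train.Server(cluster, job_name="worker", task_index=0)
# with job_name/task_index varying per container.

num_workers = len(cluster_def["worker"])
print(num_workers)  # → 2
```

Data parallelism then follows from assigning replicas of the model to the worker tasks while the parameter-server task holds shared variables.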

Comparing the top three runtimes of both distributed tests against the baseline test, the percent decrease was between 18% and 24% on a quad-core 6th-generation laptop and between 78% and 84% on a 24-core cluster.

[Github] Rough Draft of Research Paper