Recently, new distributed frameworks have emerged to address the scalability of algorithms for Big Data analysis using the MapReduce programming model, with Apache Hadoop and Apache Spark being the two most popular implementations. The main advantages of these distributed systems are their elasticity, reliability, and transparent scalability in a user-friendly way. They are intended to provide users with easy and automatic fault-tolerant workload distribution, without requiring them to deal with the specific details of the underlying hardware architecture of a cluster. These popular distributed computing frameworks and GPUs are not mutually exclusive technologies, although they aim at different scaling purposes [Cano 2017]. These technologies can complement each other and target complementary computing scopes such as ML and DL [Skymind 2017]; however, there are still many limitations and challenges.
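To make the MapReduce programming model concrete, the following is a minimal word-count sketch in PySpark; the input file name "input.txt" and the local master setting are illustrative assumptions, not part of the original text. The map stage emits (word, 1) pairs and the reduce stage aggregates counts per key, while Spark handles partitioning and fault tolerance transparently, as described above.

```python
from pyspark import SparkContext

# Minimal MapReduce sketch in Apache Spark (PySpark), assuming a local
# Spark installation; "input.txt" is a hypothetical input file.
sc = SparkContext("local[*]", "WordCount")

counts = (
    sc.textFile("input.txt")                # read lines as a distributed dataset
      .flatMap(lambda line: line.split())   # map: emit one record per word
      .map(lambda word: (word, 1))          # map: produce (word, 1) key-value pairs
      .reduceByKey(lambda a, b: a + b)      # reduce: sum the counts for each word
)

for word, count in counts.collect():
    print(word, count)

sc.stop()
```

Note that the user only specifies the per-record logic; the distribution of work across the cluster, data partitioning, and recovery from node failures are handled automatically by the framework, which is precisely the user-friendly transparency referred to above.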