Reinforcement Learning for systems: Federated Learning (FL) has emerged as the go-to machine learning paradigm for real-time, mission-critical predictive analytics, owing to its ability to run machine learning workloads on resource-constrained edge devices. For FL applications operating under stringent deadlines, the overall local training time must be minimized. This time consists of the retrieval delay, i.e., the delay in fetching data from the IoT devices to the FL clients, plus the time spent training the local models. Since the latter component is largely uniform across FL clients, minimizing the retrieval delay is the key to reducing the local training time. To that end, we formulate the Client Assignment Problem (CAP): the intelligent assignment of selected IoT devices to each FL client such that the client can retrieve its training data from these IoT devices with minimal retrieval delay. CAP must account for each FL client's relative distance from each IoT device, so that no client incurs an arbitrarily large retrieval delay by fetching data from a remotely placed IoT device. We prove that CAP is NP-Hard, so an optimal solution cannot be computed in polynomial time unless P = NP, and heuristic approaches typically sacrifice solution quality for tractability. To address the limitations of such heuristic approaches, we propose Deep Reinforcement Learning-based algorithms that produce near-optimal solutions to CAP. We demonstrate that our algorithms outperform the state of the art in reducing the local training time while remaining near-optimal.
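To make the CAP setting concrete, the following minimal Python sketch casts the assignment as a small episodic decision process: one IoT device is assigned per step, and the objective is taken to be minimizing the worst-case (maximum) per-client retrieval delay. The environment class, the delay matrix, the max-delay objective, and the greedy baseline policy are all illustrative assumptions, not the formulation or the DRL algorithm from the work described above.

```python
import random

# Hypothetical problem sizes and delay model (illustrative assumptions only).
NUM_IOT_DEVICES = 12
NUM_FL_CLIENTS = 3

# retrieval_delay[d][c]: assumed delay (e.g., proportional to distance)
# for FL client c to fetch data from IoT device d.
random.seed(0)
retrieval_delay = [[random.uniform(1.0, 10.0) for _ in range(NUM_FL_CLIENTS)]
                   for _ in range(NUM_IOT_DEVICES)]


class CAPEnv:
    """Toy episodic environment for the Client Assignment Problem.

    One device is assigned per step: the state is the index of the next
    unassigned device plus the per-client accumulated delays, the action
    is the FL client it is assigned to, and the reward is the negative
    increase in the maximum per-client retrieval delay. This max-delay
    objective is an assumption made for illustration.
    """

    def reset(self):
        self.device = 0
        self.client_delay = [0.0] * NUM_FL_CLIENTS
        return self._state()

    def _state(self):
        return (self.device, tuple(self.client_delay))

    def step(self, client):
        before = max(self.client_delay)
        self.client_delay[client] += retrieval_delay[self.device][client]
        after = max(self.client_delay)
        self.device += 1
        done = self.device == NUM_IOT_DEVICES
        reward = -(after - before)   # penalise growth of the bottleneck delay
        return self._state(), reward, done


def greedy_policy(state):
    """Baseline: assign the device to the client whose max delay grows least."""
    device, client_delay = state

    def delay_after(c):
        loads = list(client_delay)
        loads[c] += retrieval_delay[device][c]
        return max(loads)

    return min(range(NUM_FL_CLIENTS), key=delay_after)


if __name__ == "__main__":
    env = CAPEnv()
    state, done = env.reset(), False
    while not done:
        state, _, done = env.step(greedy_policy(state))
    print("max per-client retrieval delay:", round(max(env.client_delay), 2))
```

A learned policy (e.g., a DRL agent trained on the cumulative reward of this environment) would replace `greedy_policy` in the rollout loop; the greedy rule is included here only as a simple point of comparison.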