The software setup is central to integrating the federated learning experiment across the different hardware platforms. The NI USRP 2900 is managed through LabVIEW, a graphical programming environment for controlling hardware and communication protocols. LabVIEW handles the physical-layer operations, ensuring reliable data transmission and reception throughout the federated learning process.
The XBee S2C devices are configured and managed using XCTU, which handles network synchronization and management. XCTU coordinates communication between the XBee modules and the other network devices, supporting real-time data exchange within the federated learning framework.
Python generates and manages the machine learning model parameters and serves as the core language for training and updating the ML models. Its machine learning libraries enable an efficient implementation of the federated learning algorithm, covering the training and optimization of the models across the distributed client devices.
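The aggregation of client updates into the global model can be sketched as follows. This is a minimal illustration assuming a sample-count-weighted FedAvg rule over NumPy weight arrays; the exact aggregation code of the prototype is not shown here, and the function and variable names are placeholders.

import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine per-client layer weights into global weights (FedAvg-style),
    weighting each client by its number of local training samples."""
    total = float(sum(client_sizes))
    global_weights = []
    for layer in range(len(client_weights[0])):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum((n / total) * w[layer]
                        for w, n in zip(client_weights, client_sizes))
        global_weights.append(layer_avg)
    return global_weights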
These software components—LabVIEW, XCTU, and Python—work together in harmony, forming an integrated system that supports the real-time execution of federated learning across bandwidth-constrained wireless networks, ensuring the efficiency and effectiveness of the demonstration.
Fig. 1 (left) compares the IID and non-IID scenarios. In the IID case the model achieves an accuracy of 99%, while, as expected, the accuracy drops to 90% under severe data heterogeneity in the non-IID setting. Fig. 1 (right) shows the effect of transmission power on stability, accuracy, and convergence rate in the non-IID scenario: higher transmission power yields improved convergence, highlighting its influence on system performance.
Fig. 1: (Left) Accuracy in the IID and non-IID settings; (right) accuracy in the non-IID setting for different transmission powers.
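The class-partitioned non-IID split described above can be constructed as in the following minimal sketch. It assumes NumPy label arrays and a fixed class-to-client mapping; the prototype's actual data-partitioning code is not reproduced here.

import numpy as np

def split_non_iid(features, labels, class_map):
    """Partition a labelled dataset so that each client sees only a fixed
    subset of classes, producing a non-IID split across clients."""
    client_data = {}
    for client_id, classes in class_map.items():
        mask = np.isin(labels, classes)  # samples belonging to this client's classes
        client_data[client_id] = (features[mask], labels[mask])
    return client_data

# Hypothetical usage matching the demonstration: Client 1 holds classes 0-1
# and Client 2 holds classes 2-3, whereas an IID split would mix all classes.
# clients = split_non_iid(X, y, {1: (0, 1), 2: (2, 3)})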
To verify that the global model is well generalized, cross-accuracy is tested: since Client 1 is trained only on classes 0 and 1, it is evaluated on its ability to predict classes 2 and 3, while Client 2, trained on classes 2 and 3, is evaluated on classes 0 and 1. As shown in Fig. 2, the confusion matrix indicates a cross-accuracy of 87% for Client 1 and 90% for Client 2, confirming that the global ML model generalizes effectively across all classes.
Fig. 2: Confusion Matrix for Cross-accuracy.
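The cross-accuracy check can be sketched as follows; this is a minimal illustration assuming a scikit-learn-style classifier with a predict method, and the model and data names are placeholders rather than the prototype's actual code.

import numpy as np
from sklearn.metrics import confusion_matrix

def cross_accuracy(global_model, held_out_features, held_out_labels):
    """Evaluate the global model on classes a client never saw during local
    training and report the resulting accuracy and confusion matrix."""
    predictions = global_model.predict(held_out_features)
    cm = confusion_matrix(held_out_labels, predictions)
    accuracy = np.mean(predictions == held_out_labels)
    return accuracy, cm

# Hypothetical usage: Client 1 (trained on classes 0-1) is scored on samples
# from classes 2-3 using the aggregated global model.
# acc_1, cm_1 = cross_accuracy(global_model, X_classes_2_3, y_classes_2_3)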