Below are summary descriptions of past projects. If you would like me to share something that is not available here, please contact me at anis.u.rahman "at" jyu "dot" fi.
Smart cities are built on connected devices that generate large quantities of data every instant. This data can be stored at a nearby edge location for initial processing, but later sending it to backend data centers for storage and further analysis consumes considerable network bandwidth. In this paper, we propose a large-scale data migration framework using vehicles. The framework uses a neural network to identify suitable vehicles as data mules, namely those moving towards the data destination, potentially reducing the load on backend networks in terms of bandwidth usage and overall energy consumption. We compare the framework with data transfers over the traditional internet and with an approach without machine intelligence. The proposed framework performs well in terms of data loss, transfer time, energy, and CO2 emissions. Our experiments demonstrate that the approach achieves a 67% success rate with data transfers 193× faster than the average internet bandwidth of 21.28 Mbps. Moreover, the resulting CO2 emissions for 30 TB data transfers stood at 6.403 kg, significantly lower than the 1172.8 kg for the internet.
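The paper's neural-network mule selector is not reproduced here, but the selection criterion it learns — prefer vehicles moving towards the data destination — can be sketched with a simple heading-alignment score. All names, the 2-D coordinate convention (heading measured from the +x axis), and the threshold below are illustrative assumptions, not values from the paper:

```python
import math

def mule_score(pos, heading_deg, dest):
    """Score a candidate vehicle by how directly it is moving toward the
    data destination: 1.0 = heading straight at it, -1.0 = heading away."""
    bearing = math.atan2(dest[1] - pos[1], dest[0] - pos[0])
    heading = math.radians(heading_deg)
    return math.cos(heading - bearing)

def select_mules(vehicles, dest, threshold=0.7):
    """Keep vehicles whose travel direction aligns with the destination."""
    return [v_id for v_id, pos, hdg in vehicles
            if mule_score(pos, hdg, dest) >= threshold]

dest = (10.0, 0.0)
vehicles = [
    ("v1", (0.0, 0.0), 0.0),    # heading straight at the destination
    ("v2", (0.0, 0.0), 180.0),  # heading directly away
    ("v3", (0.0, 0.0), 30.0),   # 30 degrees off course
]
print(select_mules(vehicles, dest))  # ['v1', 'v3']
```

In the actual framework a trained model replaces this heuristic, but the heuristic makes the intuition concrete: a fixed alignment threshold trades data-delivery certainty against the number of usable mules.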
CrowdFix includes 434 videos with diverse crowd scenes, containing a total of 37,493 frames spanning 1,249 seconds. The diverse content covers different crowd activities under three distinct categories: Sparse, Dense Free Flowing, and Dense Congested. All videos are at 720p resolution and a 30 Hz frame rate. An EyeTribe eye tracker was used to monitor eye movements in our experiment. 26 participants (10 males and 16 females), aged 17 to 40, took part in the eye-tracking experiment. All participants were non-experts with normal or corrected-to-normal vision. During the experiment, the distance between subjects and the monitor was fixed at 60 cm. Before viewing the videos, each subject performed a 9-point calibration of the eye tracker. After calibration, the subjects were asked to free-view videos displayed in an MTV style. Finally, fixations of all 32 subjects on 538 videos were collected for our eye-tracking database.
Parallel discrete event simulation frameworks have been widely used to analyze the performance of traditional applications under different scenarios. The existing frameworks are designed to work in cluster and cloud-based computing environments. With current advances in the Internet of Things, there is a strong need to revamp such traditional frameworks and use smart connected devices as an underlying infrastructure for simulations. In this study, we propose a new simulation framework specifically designed to work with diverse heterogeneous devices. The framework allows these heterogeneous mobile devices to participate in a distributed simulation while managing network latency, using device profiles maintained by the simulation framework. Moreover, in the proposed framework, random and context-aware simulation task distributions have been explored to manage the devices' sporadic connectivity. Evaluation results using the well-known PHOLD benchmark demonstrate a gain in the overall efficiency of the proposed simulation system.
Since the inception of smart cities, vehicular networks have introduced new dimensions for delay-sensitive applications. The use of backend cloud data centers is no longer a viable solution due to the incurred latency. Thus, to support such applications, computing devices are placed at edge locations to reduce communication delay and improve the quality of service. However, in a congested environment, these locations become overloaded by a large number of computing requests, degrading overall system performance. In this paper, we present a computing framework that addresses this problem by introducing resource allocation and provisioning in the form of flying fog units. The lease period for the allocated resources is defined based on a preemptive resource provisioning model. The results demonstrate the effectiveness of the proposed framework compared to baseline approaches, with wait times reduced by 9% while improving system efficiency by 9%.
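The preemptive lease idea can be illustrated with a toy model: a flying fog unit holds a fixed number of resource slots, each granted as a lease, and an arriving high-priority request may preempt the lowest-priority active lease when the unit is full. The class, slot counts, and priority scheme below are illustrative assumptions, not the paper's actual provisioning model:

```python
class FogUnit:
    """Toy preemptive lease model: a flying fog unit with a fixed number
    of resource slots; higher-priority requests may preempt lower ones."""
    def __init__(self, slots):
        self.slots = slots
        self.leases = {}  # request_id -> priority

    def allocate(self, req_id, priority):
        if len(self.leases) < self.slots:
            self.leases[req_id] = priority
            return True
        # Unit full: preempt the lowest-priority lease if the new request outranks it.
        victim = min(self.leases, key=self.leases.get)
        if self.leases[victim] < priority:
            del self.leases[victim]
            self.leases[req_id] = priority
            return True
        return False  # rejected: nothing to preempt

unit = FogUnit(slots=2)
unit.allocate("a", priority=1)
unit.allocate("b", priority=2)
print(unit.allocate("c", priority=3))  # True: lease "a" is preempted
print(sorted(unit.leases))             # ['b', 'c']
```

A real system would also track lease expiry times and migrate preempted work, but the core trade-off is visible here: preemption keeps high-priority wait times low at the cost of occasionally restarting low-priority requests.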
The concept of smart farming has led to the use of technology to enhance agricultural productivity. With access to low-cost sensors and management systems, more farmers are adopting this technology to achieve sustainable growth. However, the literature offers no simulation platforms to help researchers and users understand sensor deployment, data collection, and processing. In this paper, we propose a framework designed to provide a complete farming ecosystem. The toolkit lets users simulate custom farming scenarios, specifically to identify sensor placement, coverage area, line-of-sight deployment, data gathering through relay mechanisms or airborne systems, mobility models for mobile nodes, energy models for on-ground sensors and airborne vehicles, and backend computing support using the fog computing paradigm. Furthermore, most existing works ignore network parameters, which can impact the overall performance of any deployed system. Therefore, the proposed framework also provides a benchmark in terms of transmission delay, packet delivery ratio, energy consumption, and system resource usage.
With recent advancements in communication among smart devices, vehicular fog computing introduces new dimensions for delay-sensitive applications. The traditional paradigm of installing fixed edge locations is no longer viable due to the latency incurred during decision making, especially in delay-sensitive applications. In this paper, we propose a vehicle-to-vehicle task offloading framework that allows vehicles to utilize computation resources available at nearby vehicles. The objective is to bring fog computing close to vehicles to achieve computational efficiency and improve quality of service. To overcome mobility issues, we implement context-aware opportunistic offloading schemes based on the speed, direction, and locality of vehicles. The schemes are compared to a random offloading mechanism in terms of efficiency, task completion, failure rate, workload distribution, and waiting time. The results demonstrate a significant reduction in failure rate of up to 10%, with more tasks completed on vehicles within direct communication range.
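A context-aware offloading decision of the kind described above can be sketched as a scoring function over candidate neighbors: prefer nearby vehicles moving in the same direction at a similar speed, so the V2V link is likely to survive until the task completes. The score weights, field names, and range below are illustrative assumptions, not the paper's scheme:

```python
def offload_target(me, neighbors, max_range=200.0):
    """Pick the best neighbor for task offloading using a simple context
    score combining direction, speed similarity, and proximity."""
    def score(n):
        dist = abs(n["pos"] - me["pos"])
        if dist > max_range:
            return -1.0  # out of direct communication range
        same_dir = 1.0 if n["dir"] == me["dir"] else 0.0
        speed_sim = 1.0 / (1.0 + abs(n["speed"] - me["speed"]))
        proximity = 1.0 - dist / max_range
        return same_dir * (0.5 * speed_sim + 0.5 * proximity)
    best = max(neighbors, key=score)
    return best["id"] if score(best) > 0 else None

me = {"pos": 0.0, "dir": "N", "speed": 60.0}
neighbors = [
    {"id": "n1", "pos": 50.0,  "dir": "N", "speed": 58.0},
    {"id": "n2", "pos": 20.0,  "dir": "S", "speed": 60.0},  # opposite direction
    {"id": "n3", "pos": 250.0, "dir": "N", "speed": 60.0},  # out of range
]
print(offload_target(me, neighbors))  # 'n1'
```

Contrast this with random offloading, which would pick n2 or n3 a third of the time each and fail when the link breaks mid-task; this is the failure-rate gap the context-aware schemes target.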
Streaming large amounts of data to cloud data centers can cause network congestion, resulting in higher network and energy consumption. The concept of fog computing was introduced to reduce the workload on backbone networks and support delay-sensitive Internet of Things (IoT) applications by placing computing, storage, and network services much closer to the source. For better understanding and optimal fog resource allocation, researchers have developed fog-based simulators. Unfortunately, most of these simulators lack core features like network delay, latency, packet error rate, and distributed fog node management. In this paper, we propose a fog framework termed xFogSim to support latency-sensitive applications at the fog layer, with a multi-objective optimization to trade off cost, availability, and performance across the fog federation. Furthermore, the framework provides locality-aware distributed broker node management that allows borrowing resources from nearby fog locations to meet service requirements. The framework is benchmarked on multiple performance measures. The results show that the framework is lightweight and configurable, handling a large number of user requests using dynamic resource provisioning across the fog federation.
The cloud is a multi-tenant paradigm providing resources as a service. With its easily available computing infrastructure, researchers are adopting the cloud for experimental purposes. However, using the platform efficiently for parallel and distributed simulations brings new challenges. One such challenge is that simulations comprise logical processes executing on distributed nodes, traditionally organized in a sequential pattern. This placement strategy leads to delays, as frequently communicating processes may be placed far from one another. In this paper, we propose a framework to facilitate the implementation and evaluation of process placement algorithms inside a three-tier cloud data center. Furthermore, we use the framework to test different process placement strategies based on classical clustering techniques, as well as our proposed efficient locality-aware placement algorithm. Our evaluation shows a performance gain of 14.5% for the algorithm compared with the sequential process placement used in practice.
Existing simulators are designed to simulate only a few thousand nodes due to the tight integration of their modules. With such limited scalability, researchers and developers are unable to simulate protocols and algorithms in detail; although cloud simulators provide a geographically distributed data center environment, they lack support for execution on distributed systems. In this paper, we propose a distributed simulation framework referred to as CloudSimScale. The framework is built on top of a heavily adapted CloudSim, with communication among modules managed using IEEE Std 1516 (High Level Architecture). The underlying modules can run on the same or different physical systems and still discover and communicate with one another. Thus, the proposed framework provides scalability across distributed systems and interoperability across modules and simulators.
Smart cities and the Internet of Things have enabled the integration of communicating devices for efficient decision-making. Notably, traffic congestion is one major problem faced by daily commuters in urban cities. In developed countries, specialized sensors are deployed to gather traffic information and predict traffic patterns, and traffic updates are shared with commuters via the Internet. Such solutions become impracticable where physical infrastructure and Internet connectivity are either non-existent or very limited. In the case of developing countries, no roadside units are available, and Internet connectivity is still an issue in remote areas. In this article, we propose an intelligent vehicular network framework for smart cities that enables route selection based on real-time data received from neighboring vehicles in an ad hoc fashion. We used Wi-Fi Direct–enabled Android-based smartphones as embedded devices in vehicles and implemented an intelligent transportation system over a vehicular ad hoc network. Data gathering and preprocessing were carried out on different routes between two metropolitan cities of a developing country. The framework was evaluated with different fixed and dynamic route-selection algorithms in terms of resource usage, transmission delay, packet loss, and overall travel time. Our results show reduced travel times of up to 33.3% when compared to a traditional fixed route-selection algorithm.
With the advancement in communication technologies, the Internet of Vehicles presents a new set of opportunities to efficiently manage transportation problems using vehicle-to-vehicle communication. However, high mobility in vehicular networks causes frequent changes in network topology, which leads to network instability. This frequently results in emergency messages failing to reach the target vehicles. To overcome this problem, we propose a data dissemination scheme for such messages in vehicular networks, based on clustering and position-based broadcast techniques. The vehicles are dynamically clustered to handle the broadcast storm problem, and a position-based technique is proposed to reduce communication delays, resulting in timely dissemination of emergency messages. The simulation results show that transmission delay, information coverage, and packet delivery ratio improved by up to 14%, 9.7%, and 5.5%, respectively. These results indicate that the proposed scheme is promising, as it outperforms existing techniques.
With advances in technology and the inception of smart vehicles and smart cities, every vehicle can communicate with other vehicles either directly or through ad hoc networks. Such platforms can therefore be used to disseminate time-critical information. However, in an ad hoc setting, information coverage can be restricted when no relay vehicle is available. Moreover, critical information must be delivered within a specific period of time, so timely message dissemination is extremely important. Existing data dissemination techniques in VANETs generate a large number of messages through full or partial broadcast. Broadcast-based schemes can cause congestion, as all recipients re-broadcast the message and vehicles receive multiple copies of the same message; re-broadcasting can also degrade the delivery ratio due to channel congestion. Traditional cluster-based approaches do not work efficiently either, since clustering adds delays by routing all communication through the cluster head. In this paper, we propose a data dissemination technique that uses a time barrier mechanism to reduce the message overhead that can clutter the network. The proposed solution is based on the concept of a super-node for timely dissemination of messages. To avoid unnecessary broadcasts, which can cause the broadcast storm problem, the time barrier technique lets only the farthest vehicle rebroadcast the message, covering more distance per hop. The message thus reaches the farthest node in less time, improving coverage and reducing delay. The proposed scheme is compared with traditional probabilistic approaches, showing reduced message overhead and transmission delay along with improved coverage and packet delivery ratio.
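The time barrier idea above is commonly realized as a distance-dependent backoff: each receiver waits for a time inversely related to its distance from the sender, so the farthest receiver's timer fires first and its rebroadcast suppresses the timers still pending at closer vehicles. The function names, range, and timing constants below are illustrative assumptions, not the paper's parameters:

```python
def backoff_delay(distance, comm_range=300.0, max_wait_ms=100.0):
    """Time-barrier backoff: vehicles farther from the sender wait less,
    so the farthest receiver rebroadcasts first."""
    d = min(distance, comm_range)
    return max_wait_ms * (1.0 - d / comm_range)

def rebroadcaster(receivers, comm_range=300.0):
    """Return the id of the receiver whose timer fires first.
    receivers: list of (vehicle_id, distance_from_sender) pairs."""
    return min(receivers, key=lambda r: backoff_delay(r[1], comm_range))[0]

receivers = [("v1", 80.0), ("v2", 240.0), ("v3", 150.0)]
print(rebroadcaster(receivers))           # 'v2' (farthest -> shortest wait)
print(round(backoff_delay(240.0), 3))     # 20.0 (milliseconds)
```

Because only one timer expires before the suppression message arrives, each hop costs a single rebroadcast instead of one per receiver, which is what keeps the channel clear of duplicate copies.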
Players in computer games continue to rely on assistance for navigation in the game environment, even after hours of gameplay. This behavior is in contrast to the real world where spatial knowledge of an unfamiliar environment develops with experience and reliance on navigational assistance declines. The slow development of spatial knowledge in virtual environments can be attributed to the use of turn-by-turn navigational aids. In the context of computer games, the most common form of these aids is a “mini-map.” The use of such aids in computer games is necessitated by the demands of immersion and entertainment and, hence, they cannot be entirely discarded. The need, then, is to design navigational aids that support, rather than inhibit, the development of spatial knowledge. The authors propose landmark-based verbal directions as an alternative to mini-maps and report the results of a randomized comparative study conducted to examine the impact of mini-maps and their proposed aid on the development of spatial knowledge in a virtual urban environment. The results confirm the superiority of their verbal aid in terms of spatial knowledge, while mini-maps perform better with respect to navigational efficiency. The authors hope that this study provides a first step toward defining design parameters that govern the tradeoff between navigational efficiency and spatial learning.
In this work, we propose a graph-based superpixel segmentation technique to perform spatiotemporal oversegmentation of videos. The generated superpixels are post-processed by applying a straightforward threshold-based foreground separation model. These superpixels are then used in a conditional random field, for which a potential function is defined and solved using energy minimization techniques to produce a final segmentation. Experiments on two datasets containing over 24 videos demonstrate that our method produces results competitive with or better than state-of-the-art algorithms on the video object segmentation task.
Multi-object video segmentation and multi-object tracking are similar in that both determine the locations and maintain the identities of the objects of interest (targets) in each frame of the video. Our approach takes advantage of this fact and uses the strengths of one task to improve the accuracy of the other. In our framework, the multi-object tracking and segmentation modules initially produce results on our dataset independently. The tracking module enforces higher-order smoothness constraints on the object trajectories and uses Lagrangian relaxation to obtain an iterative solution method. The segmentation module forms superpixels through clustering, trains a linear SVM on Lab color to separate foreground from background, and assigns ID labels based on color and optical flow. The results of these two modules are then jointly processed and updated. The locations of the tracking bounding boxes are refined with the help of the segmentation results, so that they are more precisely centered on the targets. The tracking module is more accurate in ID assignment, and hence its results are used to correct ID labeling errors in the segmentation module. Both modules identify and add any target detections they initially missed using the results of the other component. Hence, this joint processing increases the accuracy of both the tracking and the segmentation results, as can be seen from our experimental results. Our approach is comparable to state-of-the-art tracking and segmentation techniques.
Faces play an important role in guiding visual attention, and thus the inclusion of face detection into a classical visual attention model can improve eye movement predictions. In this study, we propose a visual saliency model to predict eye movements during free viewing of videos. The model is inspired by the biology of the visual system and breaks down each frame of a video database into three saliency maps, each earmarked for a particular visual feature. (a) A 'static' saliency map emphasizes regions that differ from their context in terms of luminance, orientation, and spatial frequency. (b) A 'dynamic' saliency map emphasizes moving regions with values proportional to motion amplitude. (c) A 'face' saliency map emphasizes areas where a face is detected, with a value proportional to the confidence of the detection. In parallel, a behavioral experiment was carried out to record eye movements of participants viewing the videos. These eye movements were compared with the model's saliency maps to quantify their efficiency. We also examined the influence of center bias on the saliency maps and incorporated it into the model in a suitable way. Finally, we propose an efficient fusion method for all these saliency maps. Consequently, the fused master saliency map developed in this research is a good predictor of participants' eye positions.
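The fusion step can be sketched as a weighted combination of the three per-feature maps after per-map normalization, so that no single feature dominates by scale alone. The weights and toy maps below are illustrative assumptions, not the fusion parameters from the study:

```python
import numpy as np

def fuse(static_map, dynamic_map, face_map, weights=(0.3, 0.3, 0.4)):
    """Fuse per-feature saliency maps into one master map. Each map is
    peak-normalized first, then combined with feature weights."""
    maps = [static_map, dynamic_map, face_map]
    norm = [m / m.max() if m.max() > 0 else m for m in maps]
    master = sum(w * m for w, m in zip(weights, norm))
    return master / master.max()  # rescale master map to [0, 1]

h, w = 4, 4
static = np.zeros((h, w)); static[0, 0] = 2.0    # luminance/orientation contrast
dynamic = np.zeros((h, w)); dynamic[1, 1] = 5.0  # strong motion
face = np.zeros((h, w)); face[2, 2] = 0.9        # a detected face

master = fuse(static, dynamic, face)
peak = tuple(int(i) for i in np.unravel_index(master.argmax(), master.shape))
print(peak)  # (2, 2): the face location wins under these weights
```

A real fusion would also apply the center-bias term mentioned above (e.g. a centered 2-D Gaussian multiplied into the master map) and could learn the weights from the recorded fixations rather than fixing them by hand.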
Human vision has been studied extensively in past years, and several models have been proposed to simulate it on computers. Some of these models concern visual saliency, which is potentially very interesting in many applications such as robotics, image analysis, compression, and video indexing. Unfortunately, these models are compute-intensive while facing tight real-time requirements. Among the existing models, we chose a spatiotemporal one combining static and dynamic information. In this paper, we propose a very efficient multi-GPU implementation of this model that reaches real-time performance. We present the algorithms of the model as well as several parallel optimizations on the GPU, together with precision and execution-time results. The real-time execution of this multi-path model on multiple GPUs makes it a powerful tool to facilitate many vision-related applications.
Humans seamlessly perceive a massive amount of information while observing a scene. Although humans recognize real-world scenes easily and accurately, computers do not, owing to the variability, ambiguity, and diverse illumination and scale conditions of scene images. Scene classification is a fundamental problem that provides contextual information to guide other processes, such as browsing, content-based image retrieval, and object recognition. A baseline model based on the traditional bag-of-words model is built to better evaluate the proposed solution. We propose a model based on the idea of fine-to-coarse category mappings, whose information is combined with a fusion of feature descriptors into a single feature representation. This additional information enhances performance by exploiting the hierarchical relationship among scene categories. The effectiveness of the proposed approach is validated using different evaluation metrics. The proposed model performs considerably better than the given baseline as well as several state-of-the-art methods, while ensuring an appropriate balance between runtime and accuracy.
Images have always affected their viewers at an emotional level by portraying so much in a single frame, and these emotions play a part in human decision making. Machines, too, can be made emotionally intelligent through 'Affective Computing', giving them the ability to involve emotions in decision making. The emotional aspect of machine learning has been used in areas such as e-health and e-learning. In this paper, the emotional aspect of machines is used to perform geo-tagging of images. The proposed solution takes a hybrid approach to affective image classification in which Elements-of-Art based emotional features (EAEF) and Principles-of-Art based emotional features (PAEF) are combined. First, experiments are performed on these two sets of features individually. Then, the two sets are combined into a hybrid feature vector and the same experiments are repeated on it. The comparison of results indicates that the hybrid approach gives better accuracy than either individual approach. The images used in this work were downloaded from the Yahoo Flickr Creative Commons 100 Million (YFCC100M) dataset, which contains coordinates for millions of images that are free to use.