11th International Conference on Ad hoc, Sensor & Ubiquitous Computing (ASUC 2020)


July 11-12, 2020, Toronto, Canada

Accepted Papers


Sorting system for plastic garbage based on artificial intelligence

Janusz Bobulski and Mariusz Kubanek, Department of Computer Science, Czestochowa University of Technology, Poland

ABSTRACT

An important element of the complex recycling process that is an integral part of municipal waste management is the sorting of materials that can be re-used. Manual sorting of garbage is a tedious and expensive process, which is why scientists create and study automated sorting techniques to improve the overall efficiency of the recycling process. An important aspect here is the preliminary division of waste into various groups, from which detailed segregation of materials will take place. One of the most important contemporary environmental problems is the recycling and utilization of plastic waste. The main problem considered in this article is the design of an automatic waste segregation system. A deep convolutional neural network will be used to classify images.
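
The paper's network code is not given in the abstract. As a rough, hypothetical illustration of the core operation such a classifier applies to waste images, the following pure-Python sketch computes a single valid-mode 2-D convolution over a grayscale patch; the patch values and kernel are invented for the example:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in most
    deep learning frameworks) of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        output.append(row)
    return output

# Tiny example: a 3x3 "image" convolved with a 2x2 diagonal kernel.
patch = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d(patch, kernel))  # [[6, 8], [12, 14]]
```

A real deep CNN stacks many such filtered maps with nonlinearities and pooling; this sketch only shows the sliding-window arithmetic underneath.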

KEYWORDS

Convolutional Neural Network, Deep Learning, image processing, waste management, environmental protection, recycling


Digital Image Forensics using Hexadecimal Image Analysis

Gina Fossati, Anmol Agarwal, Ebru Celikel Cankaya, Department of Computer Science, University of Texas at Dallas, Richardson, TX, USA

ABSTRACT

Digital forensics is gaining increasing momentum today thanks to rapid developments in data editing technologies. We propose and implement a novel image forensics technique that incorporates hexadecimal image analysis to detect forgery in still images. The simple and effective algorithm we develop yields promising results with zero false positives. Moreover, it is comparable to other known image forgery detection algorithms w.r.t. runtime performance.
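
The authors' algorithm is not reproduced in the abstract, but the core idea of hexadecimal image analysis, inspecting raw file bytes rather than decoded pixels, can be sketched minimally; the helper names below are our own, not the paper's:

```python
def byte_diff_offsets(original: bytes, suspect: bytes):
    """Return the byte offsets at which two files differ.

    Clusters of differing offsets inside an image's data segment are a
    possible (not conclusive) hint of local manipulation."""
    n = min(len(original), len(suspect))
    diffs = [i for i in range(n) if original[i] != suspect[i]]
    # Trailing bytes present in only one file also count as differences.
    diffs.extend(range(n, max(len(original), len(suspect))))
    return diffs

def hex_view(data: bytes, width: int = 8) -> str:
    """Simple hex dump, one row per `width` bytes, for manual inspection."""
    rows = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        rows.append(f"{off:06x}  " + " ".join(f"{b:02x}" for b in chunk))
    return "\n".join(rows)

print(byte_diff_offsets(b"\x89PNG\x00\x01", b"\x89PNG\x00\x02"))  # [5]
```

In practice the comparison would run over the two image files' byte streams and the differing regions would then be examined in the hex view.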

KEYWORDS

Forgery detection, image manipulation


Intelligent Recommender System Based on Sentiment Analysis

Mohamed Yacine Gheraibia1, Meryem Ammi2, Sohag Kabir3, Belkacem Chikhaoui1 and Richard Hotte1, 1LICEF Research Institute, University of Teluq, Montreal, Canada, 2Naif Arab University for Security Sciences, Riyadh, KSA and 3Department of Computer Science and Technology, University of Bradford, UK

ABSTRACT

Recommender systems (RS) are widely used in web applications such as e-commerce, marketing campaigns and online publicity. The RS's main objective is to provide recommendations to users about products and content relevant to their interests. Using different algorithms and techniques, an RS filters the available items to select those most relevant to each user. The provided recommendations are important both for users and for an organization's business; thus, making recommendations fall within the users' interests is a challenging task. In this paper, we propose a new approach for enhancing the recommendations provided by an RS, combining rating scores with sentiment analysis of users' reviews to generate new rating scores. The new rating scores are used to recommend items to users through principal component analysis (PCA) and K-means. We demonstrate through extensive experiments the effectiveness of the proposed approach in enhancing the quality of recommendations.
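
The abstract does not specify how the rating and sentiment signals are combined. One plausible, hypothetical reading is a weighted blend after mapping the sentiment score onto the rating scale; the function name, weight `alpha` and scales below are our assumptions, not the paper's formula:

```python
def blended_rating(star_rating, sentiment_score, alpha=0.5, scale=5.0):
    """Blend an explicit star rating (1..scale) with a review-sentiment
    score in [-1, 1], mapped onto the same 1..scale range.

    alpha = 1.0 trusts only the stars; alpha = 0.0 only the review text."""
    sentiment_as_rating = 1.0 + (sentiment_score + 1.0) / 2.0 * (scale - 1.0)
    return alpha * star_rating + (1.0 - alpha) * sentiment_as_rating

# A glowing review lifts a middling star rating, and vice versa.
print(blended_rating(3.0, 1.0))   # 4.0  (sentiment +1 maps to 5.0 stars)
print(blended_rating(5.0, -1.0))  # 3.0  (sentiment -1 maps to 1.0 star)
```

The resulting rating matrix would then feed PCA for dimensionality reduction and K-means for grouping users, as the abstract describes.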

KEYWORDS

Recommender Systems, Sentiment Analysis, Machine Learning, Information technology, Collaborative Filtering


Message-driven Regression Test Automation Framework for Client-server Applications

Emine Dumlu Demircioglu and Oya Kalipsiz, Department of Computer Engineering, Yildiz Technical University, Istanbul, Turkey

ABSTRACT

Software testing is not only a major activity for catching bugs in software products during the software development life cycle but is also quite challenging and currently gaining importance. Testing can be done either manually or automatically. The increase in software size and shortened development times have posed enormous challenges for manual testing, while advances in automated testing technology have encouraged development within the software testing industry. Repetitive manual testing for functional correctness is especially difficult to apply in large-scale business client-server applications such as banking applications, financial applications, trading systems or large-scale real-time applications. Testing such large-scale business software systems requires many different test scenarios, and these cannot be performed manually with precision within a short period of time because of human error. Hence, manual testing is less reliable, highly error-prone and time consuming. Therefore, especially for repetitive manual tests, it is vital to automate testing from test case generation through execution to defect reporting. We therefore decided to automate the manual testing process for client-server applications, dividing our study into three parts: automatic test case generation, automatic execution of the generated test cases, and defect reporting. In this paper, using network packets captured from client-server communication, we propose a message-driven regression test case generation framework, in which test cases are extracted automatically through reverse engineering, to automate the regression test case generation process, mainly targeting applications with a client-server architecture.
This study is motivated by the lack of a regression test automation framework in a specific domain: client-server applications such as banking applications and stock exchange trading systems. To validate the effectiveness of the proposed framework, we applied it to a real-world financial trading system.

KEYWORDS

Software Testing, Test Automation Framework, Test Case Generation


Comparison of GNSS Patch Versus GPS L1 Patch Antenna Performance Characteristics

Gholam Aghashirin1, Hoda S. Abdel-Aty-Zohdy1, Mohamed A. Zohdy1, Darrell Schmidt2 and Adam Timmons3, 1Department of Electrical and Computer Engineering, Oakland University, Rochester, USA, 2Department of Mathematics and Statistics, Oakland University, Rochester, USA and 3Department of Mechanical Engineering, McMaster University, Hamilton, Canada

ABSTRACT

The antenna module is a vital component of automated driving systems and their specific intended functions, and is needed for dGPS, HD maps and map correction services, and radio and navigation systems. We develop an antenna model for a GPS-only patch operating at 1.57542 GHz and an automotive GNSS patch antenna resonating at 1.5925 GHz. This work presents the design and modelling of the GPS patch versus the GNSS antenna, and determines their passive gain, with targeted applications in the automotive domain. Simulations are undertaken to evaluate the performance of the proposed GNSS antenna; all simulation studies are conducted in FEKO rather than through mathematical modelling. The two antennas are also compared from the size standpoint. The goal of this paper is to test, measure and evaluate the GPS antenna against the GNSS antenna, with the main emphasis on how to obtain the same or an equivalent amount of total passive gain in each.

KEYWORDS

Differential Global Position System (dGPS), Global Navigation Satellite System (GNSS), Globalnaya Navigazionnaya Sputnikovaya Sistema (GLONASS), Advanced Driver Assistance Systems (ADAS), Automated Driving (AD).


Deep Network Model Building and Optimization Using a Genetic Algorithm Based on High-Dimensional Features

Pin Liu, School of Information Engineering, China University of Geosciences, Beijing, China

ABSTRACT

Bohai Bay has semi-enclosed geography. Monitoring water quality in Bohai Bay is of great significance to ocean development and the construction of marine ecological civilization in China. Among water quality models, the chlorophyll-a concentration model is the most representative. However, current models are not well adapted to the special conditions of Bohai Bay. This study uses the band 1-4 radiation values of remote sensing images and the corresponding measured chlorophyll-a concentration data. First, a basic model is constructed using a BP neural network; secondly, second-order factors are added to the input data; finally, a genetic algorithm is used to optimize the neural network model, yielding a GA-BP model for chlorophyll-a concentration in Bohai Bay. Experimental results show that this optimization effectively reduces the output error and gives a better fit.
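
The paper's GA operates on BP network weights, which the abstract does not detail. As an illustrative stand-in, the following minimal real-coded genetic algorithm (truncation selection, midpoint crossover, Gaussian mutation; all parameter values are our assumptions) minimizes a scalar error function:

```python
import random

def genetic_minimize(loss, lo, hi, pop_size=30, generations=60,
                     mutation_sigma=0.1, seed=0):
    """Minimal real-coded GA. Illustrative only: the paper's GA searches
    over BP neural network weight vectors, not a single scalar."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[:pop_size // 2]                 # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                     # midpoint crossover
            child += rng.gauss(0.0, mutation_sigma)   # Gaussian mutation
            children.append(min(hi, max(lo, child)))  # clamp to bounds
        pop = parents + children
    return min(pop, key=loss)

# Toy error surface with minimum at x = 2.
best = genetic_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
print(best)
```

In the GA-BP setting, `loss` would be the network's output error on the training data and each individual a full weight vector.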

KEYWORDS

Bohai bay, Chlorophyll a, The BP neural network, Genetic algorithm (GA).


Fatigue Detection in EEG Signals Using Entropies

Muhammad Azam, Derek Jacoby, and Yvonne Coady, Department of Computer Science, University of Victoria, Victoria, Canada

ABSTRACT

An electroencephalogram (EEG) records electrical activity at different locations in the brain. It is used to identify abnormalities and support the diagnosis of different disease conditions. The accessibility of low-cost EEG devices has seen the analysis of this data become more common in other research domains. In this work, we assess the performance of approximate entropy, sample entropy, and Rényi entropy as features in the classification of fatigue from EEG data captured by a MUSE 2 headset. We test 5 classifiers: Naive Bayes, Radial Basis Function Network, Support Vector Machine, K-Nearest Neighbor, and Best First Decision Tree. We achieved the highest accuracy, 77.5%, using the Support Vector Machine classifier, and present possible enhancements to improve this.
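
The paper's exact feature extraction is not shown. A common formulation of one of its three features, sample entropy, can be sketched in pure Python (parameter defaults and the template-counting convention below are ours; published variants differ slightly in how many templates they compare):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within Chebyshev distance r and A does the same for length m+1,
    excluding self-matches. Low values indicate a regular signal."""
    n = len(series)

    def match_count(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = match_count(m)
    a = match_count(m + 1)
    if a == 0 or b == 0:
        return float("inf")   # too few matches to estimate
    return -math.log(a / b)

# A perfectly regular alternating signal has low sample entropy.
print(sample_entropy([0, 1] * 8))
```

On real EEG, each channel's windowed signal would yield one such scalar per entropy type, and these scalars form the classifier's feature vector.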

KEYWORDS

EEG, Electroencephalogram, Approximate entropy, Sample entropy, Rényi entropy, Fatigue detection, Automatic classification, MUSE 2


Follow Then Forage Exploration: Improving A3C

James B. Holliday and T.H. Ngan Le, Department of Computer Science & Computer Engineering, University of Arkansas, Fayetteville, Arkansas, USA

ABSTRACT

Combining both value iteration and policy gradients, Asynchronous Advantage Actor Critic (A3C) by Google DeepMind has successfully optimized deep neural network controllers in multi-agent settings. In this work we propose a novel exploration strategy we call "Follow Then Forage Exploration" (FFE), which aims to train A3C more effectively. Unlike the original A3C, where agents use only entropy as a means of improving exploration, our proposed FFE allows agents to break away from A3C's normal action selection, which we call "following", and to "forage", that is, explore randomly. The central idea behind FFE is that forcing random exploration at the right time during a training episode can improve training performance. To compare the performance of our proposed FFE, we used the A3C implementation in OpenAI's Universe Starter Agent as the baseline. The experimental results show that FFE converges faster.
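
The abstract does not give the mechanics of switching between "following" and "foraging". A minimal, hypothetical reading is an epsilon-style switch at action-selection time; the fixed `forage_prob` below is our simplification of whatever schedule the paper uses during an episode:

```python
import random

def select_action(policy_probs, forage_prob, rng):
    """FFE-style selection: with probability `forage_prob` the agent
    "forages" (uniformly random action); otherwise it "follows" the policy
    by sampling from its action distribution."""
    n = len(policy_probs)
    if rng.random() < forage_prob:
        return rng.randrange(n)            # forage: explore randomly
    u, acc = rng.random(), 0.0             # follow: sample the policy
    for action, p in enumerate(policy_probs):
        acc += p
        if u < acc:
            return action
    return n - 1

rng = random.Random(0)
# With forage_prob = 0 and a deterministic policy, the agent always follows.
actions = [select_action([0.0, 0.0, 1.0], forage_prob=0.0, rng=rng)
           for _ in range(5)]
print(actions)  # [2, 2, 2, 2, 2]
```

In a full A3C loop, `policy_probs` would be the actor network's softmax output at each timestep.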

KEYWORDS

Reinforcement Learning, Multi Agents, Exploration


Public Authorities as Defendants: Using Bayesian Networks to determine the Likelihood of Success for Negligence claims in the wake of Oakden

Scott McLachlan1,2, Evangelia Kyrimi1, Norman E Fenton1, 1Risk and Information Management (RIM), Queen Mary University of London, UK and 2Health informatics and Knowledge Engineering Research (HiKER) Group

ABSTRACT

Several countries are currently investigating issues of neglect, poor quality care and abuse in the aged care sector. In most cases it is the State that licenses and monitors aged care providers, which frequently introduces a serious conflict of interest because the State also operates many of the facilities where our most vulnerable people are cared for. Where issues are raised with the standard of care being provided, the State is seen by many as a deep-pockets defendant and becomes the target of high-value lawsuits. This paper draws on cases and circumstances from one jurisdiction based on the English legal tradition, Australia, and proposes a Bayesian solution capable of determining the probability of success for citizen plaintiffs who bring negligence claims against a public authority defendant. Use of a Bayesian network trained on case audit data shows that even when the plaintiff's case meets all requirements for successful negligence litigation, success is far from assured: in only around one-fifth of these cases does the plaintiff succeed against a public authority defendant.


A Knowledge Desire Intention (KDI) Framework for Automated Decision Making: The Justified True Belief Theory (JTB) Approach

Ileladewa, Adeoye Abiodun1, Cheng Wai Khuen2, 1Department of Computer Science, School of Applied Sciences, Federal Polytechnic Ede, Osun State, Nigeria and 2Department of Computer Science, Faculty of Information and Communication Technology (FICT), University Tunku Abdul Rahman (UTAR), Kampar, Malaysia

ABSTRACT

Decision making is inevitable in human life, as almost everything we do daily involves choosing among the options before us. Having the right information that leads to good decisions is essential, since whether a person becomes successful depends on the quality of the information available and on taking the right decision at the right time. Various decision support systems and recommender systems are in use in society, yet how effectively they assist us in making satisfactory, real-time, goal-oriented decisions remains a serious concern. At the group level, too, good decisions are non-negotiable, as every community now aspires to be a smart city that makes residents' lives easier, healthier and more productive. Various technologies exist for tracking a user's preferences and decisions. One of the most popular models of human reasoning and decision making is the Belief-Desire-Intention (BDI) model, around which most social network agents are also built, but it is not without shortcomings. Problems such as data redundancy and junk data stem from the lack of good or adequate data filtering mechanisms in most well-known BDI-related techniques, which degrades the output and prevents outcomes that meet users' needs. Hence the quest for a desire-oriented decision making framework with a good, dynamic data filtering mechanism. This paper therefore proposes a personalized Knowledge-Desire-Intention (KDI) framework, which adopts the Justified True Belief (JTB) theory of knowledge. The framework was evaluated on the online behaviour of 206 active users of UniCAT and of the Dabao app over a period of one year, with standard ROC metrics for decision support systems used for benchmarking.
Our experimental results show that the proposed personalized KDI framework outperformed the other techniques considered in this work, the Coherence and Two-Way techniques.

KEYWORDS

Smart City, Belief-Desire-Intention (BDI) model, KDI Framework, Justified True Belief Theory


Automated Classification of Banana Leaf Diseases Using an Optimized Capsule Network Model

Bolanle F. Oladejo and Oladejo Olajide Ademola, Department of Computer Science, University of Ibadan, Ibadan, Nigeria

ABSTRACT

Plant disease detection and classification have been researched successfully using Convolutional Neural Networks (CNNs); however, owing to the intrinsic limitations of the max pooling layer, a CNN fails to capture the pose, view and orientation of images. It also requires a large volume of training data and fails to learn the spatial relationships among the features of an object. The Capsule Network (CapsNet) is a novel deep learning model proposed to overcome these shortcomings of CNNs. We developed an optimized Capsule Network model for a classification problem using banana leaf diseases as a case study. The dataset comprises two disease classes, Bacterial Wilt and Black Sigatoka, together with healthy leaves. The developed model adequately classified banana bacterial wilt, black sigatoka and healthy leaves with a test accuracy of 95%, outperforming three implemented CNN architectures (a CNN trained from scratch, LeNet5 and ResNet50) with respect to rotation invariance.

KEYWORDS

Capsule Network, CNN, Activation function, Deep Learning, Precision Agriculture


Analysis of Embedded Designs in Mechatronic Systems

Kwofie Arnold, Department of Computer Science, University for Development Studies, Tamale, Ghana

ABSTRACT

As technology advances in almost every field, mechatronic systems are no exception: embedded systems are incorporated as part of their design. This gives mechatronic systems high computational intelligence and high task performance. However, the story is not always the same, as engineers are often unable to implement efficient embedded designs for mechatronic systems. Embedded mechatronic systems depend on many factors, including vibration, electrical and electromagnetic effects, mechanical design, and the intelligence of the software component. In addition, reducing the associated cost, size and complexity for process innovation becomes highly significant. This project analyses efficient design techniques for embedded systems, including design robustness, intelligent embedded system software, power consumption and memory optimization. Embedded systems have become ubiquitous, and as a result optimizing the design and performance of the programs that run on them remains a significant challenge for the computer systems research community.

KEYWORDS

Mechatronics, Embedded systems, Algorithm, Circuit.


An Adaptive Utilization of Convolutional Matrix Methods on Neuron Cell Segmentation with an Application Interface to Aid the Understanding of how Memory Recall Works

Neeraj Rattehalli1, Ishan Jain2, 1Computer Science, Menlo-Atherton High School, Atherton, California and 2Computer Science, Mission San Jose High School, Fremont, California

ABSTRACT

Current methods of image analysis and segmentation of hippocampal neuron bodies retain excess and unwanted information such as unnecessary noise. In order to clearly analyze each neural stain (DAPI, Cy5, TRITC, FITC) and start the segmentation process, it is pertinent to denoise the data first and create masked regions that accurately capture the ROI in these hippocampal regions. Unlike traditional edge detection algorithms such as the Canny method [1] available in the OpenCV library [2], we employ a more targeted approach based on pixel colour intensities. Using R, G, and B value thresholds, our algorithm checks whether a pixel is a boundary point by comparing it with its neighbouring pixels. Combined with a seamless GUI for cropping the highlighted ROI, the algorithm efficiently creates general outlines of neuron bodies. With user-adjustable thresholding values, the outlining and denoising produce clean data ready for analysis with object detection algorithms such as FRCNN and YOLOv3 [3].
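
The per-channel thresholds themselves are not given in the abstract. A simplified, single-channel sketch of the neighbour-comparison idea (the 4-connected neighbourhood and the threshold value are our assumptions; the paper compares R, G and B separately) might look like:

```python
def boundary_mask(gray, threshold=40):
    """Mark a pixel as a boundary point when its intensity differs from any
    4-connected neighbour by more than `threshold`."""
    h, w = len(gray), len(gray[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        abs(gray[y][x] - gray[ny][nx]) > threshold:
                    mask[y][x] = True
                    break
    return mask

# A bright 2x2 "cell body" on a dark background: pixels on either side of
# the intensity step fire; the far corners do not.
img = [[0,   0,   0,   0],
       [0, 200, 200,   0],
       [0, 200, 200,   0],
       [0,   0,   0,   0]]
mask = boundary_mask(img)
print(mask[0][0], mask[0][1], mask[1][1])  # False True True
```

A full RGB version would run the same comparison per channel and OR the results before the GUI cropping step.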

KEYWORDS

Convolutional Matrix, Computer Vision, Machine Learning, Deep Learning, Automation Interface.


Facial Expression Recognition using Combined Pre-trained Convolutional Models

Raid Saabni1, 2 and Alon Schclar1, 1School of Computer Science, The Academic College of Tel-Aviv Yaffo, Tel-Aviv, Israel, 2Triangle R&D Center, Kafr Qarea, Israel

ABSTRACT

Automatic Facial Expression Recognition (AFER) has been an active research area for the past three decades, and it remains active owing to its wide range of potential applications in fields such as human-computer interfaces, human emotion analysis, image retrieval, user profiling, medical care, video games and neuromarketing. People can vary significantly in the way they show their expressions, even for the same person and the same expression, which makes AFER a challenging problem. Images also vary in brightness, background and pose, and these variations are amplified across different persons with variations in face shape, ethnicity and other factors. Recent research in the field reports impressive results using Convolutional Neural Networks (CNNs, ConvNets), which have proved to be a common and promising choice for many computer vision tasks, including AFER. Motivated by this fact, we adopt slightly modified versions of three known ConvNet architectures to build an automated facial expression recognition system. We trained these models and combined them in parallel, followed by four additional layers, into one large ConvNet. The resulting system is trained to detect the universal facial expressions of seven basic emotions: Neutral, Happy, Angry, Disgust, Sad, Fear and Surprise on the FER2013 benchmark, plus the Contempt emotion on the FER2013+ benchmark. The presented approach improves on the results of the underlying architectures by 4% and 2% on the FER2013 and FER2013+ data sets, respectively. Enabling a second round of training for the pre-trained models increases the accuracy of some of the base models by close to 3% while also improving the accuracy of the whole system.

KEYWORDS

Automatic Facial Expression Recognition, Convolutional Neural Networks, Machine Learning, Boosting.


A Dynamic Approach for Managing Heterogeneous Wireless Networks in Smart Environments

Yara Mahfood Haddad and Hesham Ali, Department of Computer Science, University of Nebraska at Omaha, Omaha, NE 68182, USA

ABSTRACT

A Wireless Sensor Network (WSN) is a collection of sensors connected through a wireless infrastructure. Recently, WSNs have been evolving rapidly and are likely to consist of heterogeneous sensors embedded in many objects performing various types of tasks. The emergence of the Internet of Things (IoT) represents a typical example of evolved WSNs, in which sensors are embedded in a variety of 'things' in a variety of environments. In this work, we propose a new approach for managing heterogeneous WSNs designed to accommodate the variabilities associated with different environments. The proposed approach is implemented using genetic algorithms to achieve the flexibility needed to optimize different types of objective functions, such as quality of coverage, redundancy and energy-awareness. We report the results of employing the proposed approach under different scenarios with different sets of assumptions and priorities for typical application domains. For assessment purposes, we compare our algorithm with two greedy algorithms used to manage WSNs in different applications. The proposed algorithm performs better than the other methods and exhibits the ability to adjust to the different needs of each scenario.

KEYWORDS

Wireless Sensor Networks (WSNs), IoT, Genetic Algorithm, Homogeneous, Heterogeneous


An Enhanced Lucene Based System for Efficient Document/Information Retrieval

Alaidine Ben Ayed, Ismaïl Biskri and Jean-Guy Meunier, Université du Québec à Montréal (UQAM), Canada

ABSTRACT

In this paper we implement a document retrieval system using the Lucene tool and conduct experiments to compare the efficiency of two different weighting schemes: the well-known TF-IDF and BM25. We then expand queries using a comparable corpus (Wikipedia) and word embeddings. The results obtained show that the latter method (word embeddings) is a good way to achieve higher precision rates and retrieve more accurate documents.
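
The BM25 scheme the paper compares against TF-IDF can be illustrated compactly; this sketch over a tiny in-memory corpus mirrors the standard Okapi formula, though Lucene's built-in `BM25Similarity` computes IDF and length normalization with its own refinements:

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Okapi BM25 score of one document (a token list) for a query, over a
    corpus given as a list of token lists."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        tf = doc_terms.count(term)
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

docs = [["lucene", "index", "search"],
        ["word", "embeddings", "expand", "queries"],
        ["tf", "idf", "weighting"]]
q = ["lucene", "search"]
scores = [bm25_score(q, d, docs) for d in docs]
print(scores.index(max(scores)))  # 0: only doc 0 contains the query terms
```

Query expansion, as the paper does with Wikipedia and word embeddings, would simply enlarge `query_terms` with related words before scoring.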

KEYWORDS

Internet and Web Applications, Data and knowledge Representation, Document Retrieval.


Wi-Fi Indoor Localization using Fingerprinting and Lateration

EL ABKARI Safae, EL MHAMDI Jamal, Department of Electrical Engineering, Ecole Normale Supérieure de l’Enseignement Technique, Mohamed V University, Rabat, Morocco

ABSTRACT

Indoor positioning has come under the spotlight as one of the upcoming applications due to its use in a variety of services. Wi-Fi based localization in indoor environments offers significant advantages: it utilizes existing wireless infrastructure and delivers good performance at low cost. The objective of this research is to provide a compromise between feasibility and accuracy for practical applications. We use a filter to minimize Wi-Fi received signal strength (RSS) fluctuations, and we combine two Wi-Fi approaches to locate a mobile user. The first, commonly known as fingerprinting, matches pre-recorded received signal strength (RSS) values from nearby access points (APs) against the data transmitted by the user in real time. The second is trilateration, a distance-based approach that uses three known access points to determine positions. The combination of the two methods provides enhanced accuracy and wide indoor coverage.
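
The trilateration step can be sketched concretely: with three APs at known positions and range estimates, the circle equations linearize to a 2x2 system. The sketch below assumes exact, noise-free distances; real RSS ranging would feed noisy estimates into a least-squares variant:

```python
import math

def trilaterate(anchors, distances):
    """Position from three anchors and ranges: subtract the first circle
    equation from the other two and solve the 2x2 system (Cramer's rule)."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

aps = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_pos = (1.0, 1.0)
dists = [math.hypot(true_pos[0] - x, true_pos[1] - y) for x, y in aps]
print(trilaterate(aps, dists))  # approximately (1.0, 1.0)
```

In the combined scheme, fingerprinting would narrow the search area and the distances would come from an RSS path-loss model rather than `math.hypot`.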

KEYWORDS

Indoor positioning, Wi-Fi, fingerprinting, trilateration, Received Signal Strength.


Wireless Sensor Networks Simulators and Testbeds

Souhila Silmi1,2, Zouina Doukha1, Rebiha Kemcha2,3, Samira Moussaoui1, 1Department of Computer Science, USTHB University, RIMAA Lab., B P N°32 El Alia, 16000 Bab Ezzouar, Algiers, Algeria, 2Department of Computer Science, Normal School Superior, B P N°92 16308 Vieux-Kouba, Algiers, Algeria and 3Department of Computer Science, University of Boumerdes, LIMOSE Laboratory, Boumerdes, Algeria

ABSTRACT

Wireless sensor networks (WSNs) have emerged as one of the most promising technologies of the current era. They have been studied for years, but challenges remain for researchers, since opportunities to integrate new technologies keep being added to this field. One challenging task is WSN deployment, which is done either on real testbed platforms or with simulation tools, since real deployment can be costly and time consuming. In this paper, we review the implementation and evaluation process in WSNs. We then describe relevant testbeds and simulation tools and their features. Lastly, we conduct an experimental study using these testbeds and simulators to highlight their pros and cons, implementing a localization protocol as a use case. This paper opens the door for future work on achieving better implementations in terms of reliability, accuracy and time consumed.

KEYWORDS

Wireless Sensor Network, Testbeds, Simulation Tools, Localization Protocol.


Multiple Layers of Fuzzy Logic to Quantify Vulnerabilities in IoT

Mohammad Shojaeshafiei1, Letha Etzkorn1 and Michael Anderson2, 1Department of Computer Science, The University of Alabama in Huntsville, Huntsville, USA and 2Department of Civil and Environmental Engineering, The University of Alabama in Huntsville, Huntsville, USA

ABSTRACT

Quantifying the vulnerabilities of network systems has been a highly controversial issue in the fields of network security and IoT. Much research has been conducted for this purpose; however, it involves many ambiguities and uncertainties. In this paper, we investigate the quantification of vulnerability in the Department of Transportation (DOT) as our proof of concept. We initiate the analysis of security requirements using Security Quality Requirements Engineering (SQUARE) for security requirements elicitation. Then we apply published security standards such as NIST SP 800 and ISO 27001 to map our security factors and subfactors. Finally, we propose a Multi-layered Fuzzy Logic (MFL) approach based on the Goal Question Metric (GQM) paradigm to quantify network security and IoT (mobile device) vulnerability in the DOT.
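
The abstract does not list the fuzzy sets or rules. As a toy illustration of the kind of first-layer rule a fuzzy vulnerability quantifier evaluates, the inputs, ranges and the single rule below are entirely invented; the paper derives its actual factors from SQUARE, NIST SP 800 and ISO 27001:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ramp(x, lo, hi):
    """Membership in a "high" fuzzy set: 0 below lo, 1 above hi, linear between."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def fuzzy_vulnerability(patch_delay_days, open_ports):
    """Hypothetical rule: IF patch delay is high AND open ports are high
    THEN vulnerability is high, with Mamdani-style AND (min)."""
    delay_high = ramp(patch_delay_days, 10.0, 60.0)
    ports_high = ramp(open_ports, 5.0, 50.0)
    return min(delay_high, ports_high)

print(fuzzy_vulnerability(35, 27.5))  # 0.5
```

A multi-layered design, as in the paper, would feed outputs like this one as inputs to the next layer's rule base until a single vulnerability score emerges.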

KEYWORDS

Computer Network, Network Security, Mobile Devices, Fuzzy Logic, Vulnerability, Cyber Security.


A Confirmed Security Framework for Virtual Machine Image

Raid Khalid Hussein1, Hany F. Atlam1,2, Ahmed Alenezi1,3 and Vladimiro Sassone1, 1Department of Electronics and Computer Science, University of Southampton, Southampton, UK., 2Dept. of Computer Science, Faculty of Computing and Information Technology, Northern Border University, 1321, Saudi Arabia. and 3Computer Science and Engineering Dept., Faculty of Electronic Engineering, Menoufia University, 32952, Egypt

ABSTRACT

The notion of cloud computing has emerged from academic research in various fields, e.g. web services, utility computing, virtualisation, and distributed computing. Virtualisation is one of the most vital concepts in the creation of cloud computing itself. Although virtualisation brings security-related risks, those risks may well not be specific to the cloud. The primary disadvantage of employing cloud computing pertains to security: within cloud environments, substantial importance is attached to ensuring that the virtual machine image is safe and secure. A recent study proposed a security framework which can be employed to safeguard the virtual machine image in cloud computing. This framework comprises factors which describe the security needs for protecting the virtual machine image from security threats, formulated after consulting the relevant literature as well as industry standards. The security framework was verified through questionnaires distributed to practitioners and interviews conducted with experts; in this way, the security requirements were shown to offer security protection for the virtual machine image.

KEYWORDS

Cloud Security, Security Requirements, Virtualization and Virtual Machine Image.


A New Intelligent Power Factor Corrector for Converter Applications

Hussain Attia, Department of Electrical, Electronics and Communications Engineering, School of Engineering, American University of Ras al khaimah, Ras Al Khaimah, UAE

ABSTRACT

This paper presents a new design of a unity power factor corrector for DC-DC converter applications based on an artificial neural network (ANN) algorithm. The controller first calculates the system power factor by measuring the phase shift between the grid voltage and the drawn current. Secondly, the controller feeds the absolute value of the grid voltage and the measured phase shift to the designed ANN, which predicts the duty cycle of the pulse width modulation (PWM) drive pulses; these PWM pulses drive the boost DC-DC converter to force the drawn current to be fully in phase with the grid voltage and to improve the total harmonic distortion (THD) of the drawn current. MATLAB/Simulink software is adopted to simulate the presented design. Analysis of the simulation results indicates the high performance of the proposed controller in terms of power factor correction and improvement of the drawn current's THD.
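
The first controller step, computing the power factor from the measured phase shift, reduces to a one-line relation; the sketch below is an illustration of that step only (a 50 Hz grid is our assumption), not of the paper's ANN or converter model:

```python
import math

def power_factor(phase_shift_seconds, grid_freq_hz=50.0):
    """Displacement power factor from the measured time lag between the
    grid voltage and current zero crossings: pf = cos(2*pi*f*dt)."""
    return math.cos(2.0 * math.pi * grid_freq_hz * phase_shift_seconds)

print(power_factor(0.0))               # 1.0: current in phase, unity PF
print(power_factor(1 / 300))           # 60 electrical degrees of lag -> 0.5
```

The ANN in the paper then maps this phase shift, together with the rectified grid voltage, to the PWM duty cycle that pulls the current back into phase.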

KEYWORDS

Power factor corrector, artificial neural network, Boost DC-DC converter, Total Harmonic Distortion, MATLAB/Simulink.


A Flow Simulation in the Foaming Process

Karel Frana1, Jörg Stiller2 and Iva Nová1, 1Technical University of Liberec, Studentska 2, 461 17 Liberec, Czech Republic and 2Technische Universität Dresden, Institut für Strömungsmechanik, 01062 Dresden, Germany

ABSTRACT

This paper deals with unsteady three-dimensional numerical calculations of a two-phase flow problem representing gas bubble formation in a liquid in a container of a specific size. For the calculations, the Volume of Fluid approach is adopted to resolve the shape of the bubbles and their dynamics. The liquid phase is a water-ethanol mixture and the gas phase is considered to be air. The problem is treated as isothermal. The study is still limited to the lower flow rates, at which bubbles are created and rise separately without any interaction, merging, etc. However, this problem still requires a finer mesh, especially in the domain in which the bubbles are formed. The current results show that the air bubbles take the form of an ellipsoid and, after they reach the liquid surface, move towards the side walls along the liquid level. This is interesting from the point of view of the foaming process and for further investigation of bubble behaviour at this phase interface. The flow is calculated in parallel using a compressible multi-phase flow solver.

KEYWORDS

Multi-phase flow simulations, Computational mesh, Parallel calculations & Flow visualisation


Nonlinear Aircraft Systems Identification using Parametric Approaches and Surrogate Models

Benyamin Ebrahimi and Fahimeh Barzamini, Research Center of Space Systems Design Institute (SSDI), Faculty of Aerospace Engineering, K. N. Toosi University of Technology, Tehran, Iran

ABSTRACT

In this paper, a nonlinear aircraft system is identified using parametric approaches and surrogate models. For this, the selected aircraft, here the Boeing 747, is simulated, and by applying appropriate inputs and stimulating the system modes the outputs are extracted. To estimate the system parameters, the aircraft dynamics are simulated and the identification methods are then applied. The main purpose of the aircraft system identification is to estimate the aerodynamic force and torque coefficients using parametric identification approaches and surrogate models. For the parametric approaches, the least squares error and recursive least squares methods are applied; for the surrogate models, artificial neural networks and an adaptive neuro-fuzzy inference system are used. In order to evaluate the accuracy of identification, the estimated aerodynamic force and torque coefficients are compared with the results from the simulation. The estimated and simulated curves almost match, which indicates a low output difference (error value). The results from both the parametric approaches and the surrogate models indicate appropriate accuracy in identifying the aerodynamic coefficients of the nonlinear aircraft system.

KEYWORDS

System Identification, 6DOF Flight Parameters Estimation, Least Squares Error, Neural Networks, ANFIS.


A Stacked Ensemble Approach to the Security of Information System

OLASEHINDE Olayemi1, Alese B. K.2 and OLAYEMI Olufunke C.3, 1Department of Computer Science, Federal Polytechnic, Ile Oluji, Ondo State, Nigeria, 2Department of Cyber Security, Federal University of Technology, Akure, Ondo State, Nigeria and 3Department of Computer Science, Joseph Ayo Babalola University, Ikeji, Osun State, Nigeria

ABSTRACT

Intrusion detection plays an important role in protecting information systems against the growing activities of cyber attackers who seek to compromise the availability, confidentiality and integrity of information systems. An intrusion detection system (IDS) analyses network traffic to detect and report any attempt to compromise computer systems and their resources; a stacked ensemble builds synergy among two or more IDSs in order to obtain more accurate intrusion detection. This research focuses on the application of a stacked ensemble approach to intrusion detection systems (IDS). Three filter-based feature selection methods, comprising consistency-based, Information Gain-based and correlation-based methods, were used to identify the relevant features of the network traffic that classify it as either normal or one of the attack categories. Three supervised base-level machine learning algorithms, comprising the K Nearest Neighbour, Naïve Bayes and Decision Tree algorithms, were used to build the base predictive models with all the features and with the reduced selected features. The Information Gain-based method identified the strongest and most efficient features of the network traffic, and the Decision Tree models gave the highest classification accuracy on evaluation with the testing dataset. The predictions of the base-level models were used to train three meta-level learning algorithms, namely Meta Decision Tree (MDT), Multi-Response Linear Regression (MLR) and Multiple Model Trees (MMT), to build the stacked ensemble models. The ensemble systems were implemented in the Python programming language and evaluated on a laptop computer with a Core i5 processor, 6GB RAM and a 500GB HDD. The stacked ensemble records accuracy improvements of 3.0% and 5.11% over the best and least predictions of the base-level models respectively, and reductions of 0.89% and 3.29% over the least and highest false alarm rates of the base models respectively.
The evaluation of this work shows a great improvement over the works reviewed in the literature.
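The stacking scheme described in this abstract can be outlined with scikit-learn. This is an illustrative sketch, not the authors' implementation: the base learners mirror the paper's KNN / Naïve Bayes / Decision Tree trio, a logistic-regression meta-learner stands in for the paper's meta-level models (MDT, MLR, MMT), and a synthetic dataset stands in for the network traffic data.

```python
# Illustrative stacked-ensemble sketch; synthetic data replaces network traffic.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base-level models: KNN, Naive Bayes and Decision Tree, as in the paper.
base = [
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(random_state=0)),
]
# Meta-level learner trained on the base models' (cross-validated) predictions.
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"stacked accuracy: {acc:.3f}")
```

By default `StackingClassifier` generates the meta-learner's training inputs with internal cross-validation, which avoids leaking the base models' training fit into the meta model.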

KEYWORDS

Information Security, Intrusion, Detection accuracy, Base models, Ensemble, Detection Improvement.


Evaluating and Validating Cluster Results

Anupriya Vysala and Joseph Gomes, Department of Computer Science, Bowie State University, USA

ABSTRACT

Clustering is a technique to partition data according to their characteristics: data that are similar in nature belong to the same cluster [4]. There are two types of evaluation methods for assessing clustering quality. One is external evaluation, where the true labels of the data sets are known in advance; the other is internal evaluation, in which the evaluation is done with the data set itself, without true labels. In this paper, both external evaluation and internal evaluation are performed on the cluster results of the Iris dataset. For external evaluation, the Homogeneity, Completeness and V-measure scores are calculated for the dataset. For the internal performance measure, the Silhouette Index and the Sum of Squared Errors are used. These internal performance measures, along with the dendrogram (a graphical tool from hierarchical clustering), are first used to validate the number of clusters. Finally, as a statistical tool, we use the frequency distribution method to compare and provide a visual representation of the distribution of observations within a cluster result and the original data.
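The external and internal evaluation measures named in this abstract are available in scikit-learn; the sketch below runs them on the Iris dataset, with KMeans (k=3) as an illustrative clustering step (the paper also uses hierarchical agglomerative clustering).

```python
# External vs. internal cluster evaluation on Iris, as an illustration.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import (completeness_score, homogeneity_score,
                             silhouette_score, v_measure_score)

X, y = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# External evaluation: compares cluster labels against the known truth labels.
hom = homogeneity_score(y, labels)
com = completeness_score(y, labels)
vm = v_measure_score(y, labels)
# Internal evaluation: uses only the data and the cluster assignment.
sil = silhouette_score(X, labels)
print(f"homogeneity={hom:.2f} completeness={com:.2f} "
      f"v-measure={vm:.2f} silhouette={sil:.2f}")
```

V-measure is the harmonic mean of homogeneity and completeness, so it penalizes a clustering that scores well on only one of the two.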

KEYWORDS

Hierarchical Agglomerative Clustering, K-means Clustering, Internal Evaluation, External Evaluation, Silhouette.


Emotion Dynamics

Sherry Yuan, Lionel Liang, Peter Potaptchik and Megan Boler, Computer Science, University of Toronto, Toronto, ON, Canada

ABSTRACT

A major aim of this project is to integrate nuanced theoretical analysis of emotion with quantitative measurements of the latter so as to develop a multifaceted and interdisciplinary understanding of the role of emotion in politics, specifically around the time of the 2019 Canadian federal election. Elsewhere in this report we have discussed the concept of deep stories. Here we hope to take a more micro-level approach, which looks at the ways sentiments appear at the Tweet-level. So far this has involved using sentiment analysis on Twitter threads, where we have been interested in measuring the strength of sentiments (positive and negative) as conversations develop in the replies to popular tweets.

KEYWORDS

Sentiment Analysis, Canadian Politics, Emotion, Election, Deep Stories, Vader


Graph Edit Distance as A Metric to Evaluate the Semantic Similarity Between Two Terms

Claudia Rosas Raya, Hugo Jimenez-Hernandez and Ana Marcela Herrera-Navarro, Autonomous University of Queretaro, Queretaro, Mexico

ABSTRACT

In the field of Natural Language Processing, semantic evaluation is still very controversial. There is no standard procedure to assert how close one word is to another in terms of semantics. Ontologies are abstract tools in knowledge-based systems that provide structure, allowing interoperability between systems through shared information. A particular application of an ontology is the use of hypernyms, since they carry broader meanings than the words being analyzed. This work proposes a method based on hypernyms and Graph Edit Distance (GED), in an attempt to find a node in the WordNet hierarchy able to encapsulate the meanings of two different words and thus to define the cost of reaching said similarity. The GED is used to weight and build the similarity space in which two concepts in the ontology are matched. The proposal is tested on scientific documents by looking for terms that are close in meaning to others and defining the semantic relationships among them. The results show the proposal is useful in closed scenarios such as scientific documents, providing a semantic tool to search and query documents by meaning instead of simple string matching.

KEYWORDS

Conceptual graphs, graph edit distance, text similarity, text similarity measures, semantic evaluation.


A Framework for Capturing and Analyzing Unstructured and Semi Structured Data for A Knowledge Management System

Gerald Onwujekwe1, Kweku-Muata Osei-Bryson1 and Nnatubemugo Ngwum2, 1Department of Information Systems, Virginia Commonwealth University, Richmond, VA, USA and 2Department of Computer and Information Sciences, Towson University, Maryland, USA

ABSTRACT

Mainstream knowledge management researchers generally agree that knowledge extracted from unstructured and semi-structured data has become imperative for organizational strategic decision making. In this research, we develop a framework that captures and analyses unstructured data using machine learning techniques and integrates the knowledge and insight gained from the data into traditional knowledge management systems. Unlike most frameworks published in the literature, which focus on a specific type of unstructured data, our framework cuts across the varieties of unstructured data, ranging from textual data from social network sites, online forums, discussion boards and reviews to audio, image and video data. We highlight some preprocessing and processing techniques for these data and also highlight some standard outputs. We evaluate the framework by developing a textual-data application programming interface (API) using Python and Beautiful Soup, and we perform sentiment analysis on student review data collected through the API.
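As a toy illustration of the kind of sentiment analysis such a framework applies to collected review text, the sketch below scores sentences with a tiny hand-made lexicon. The word lists are invented for this example and are no substitute for a trained model or a full lexicon; the function name `sentiment` is hypothetical, not part of the paper's API.

```python
# Toy lexicon-based sentiment scorer; the word sets are illustrative only.
POSITIVE = {"good", "great", "excellent", "helpful", "clear"}
NEGATIVE = {"bad", "poor", "confusing", "boring", "unhelpful"}

def sentiment(text: str) -> float:
    """Return a score in [-1, 1]: the balance of positive vs. negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0  # no sentiment-bearing words found
    return (pos - neg) / (pos + neg)

print(sentiment("Great course, very clear and helpful!"))   # -> 1.0
print(sentiment("Boring lectures and confusing slides."))   # -> -1.0
```

Real systems replace the word sets with a curated lexicon (e.g. VADER's) or a supervised classifier trained on labelled reviews.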

KEYWORDS

Unstructured data, knowledge management system, framework, sentiment analysis.


Data-driven Techniques for Music Genre Recognition

Santiago Rentería and Jesus Leopoldo Llano, Tecnológico de Monterrey, Mexico

ABSTRACT

After the digital revolution, it is not strange to see data science taking an interest in music. The sheer amount of available content opens a plethora of possibilities for studying music and its social impact from a data-analytic perspective. This paper studies the relationship between song features and their corresponding genre, to provide data-mining tools for music recommendation and sub-genre identification. For the first task, we compared different classification models, including Random Forests, fully-connected neural networks and Logistic Regression. For the latter, we carried out cluster analysis and dimensionality reduction for data visualisation. Overall, Random Forest models performed better in genre classification than fully-connected networks, but they suffered from overfitting. Moreover, the highest accuracy obtained was too low (64%) to be of use for genre recognition applications. Nevertheless, we think our results show the limitations of hand-crafted features and point towards more sophisticated deep learning techniques.

KEYWORDS

Music Information Retrieval, Data Mining, Automated Music Recommendation, Classification


Data Analytics to Predict Employee Attrition

Evelyn Zuvirie, Francisco J. Cantú-Ortiz and Héctor G. Ceballos, School of Engineering and Science, Tecnológico de Monterrey, Monterrey, NL, México

ABSTRACT

Employee attrition is an important problem for organizations because it can negatively affect the processes of the company. A proposed solution is the creation of models to predict employee attrition. In this context, data analytics is a process capable of extracting knowledge to solve problems and support data-driven decision making. We apply CRISP-DM to implement the data mining process for the employee attrition problem, in order to predict attrition and create strategies to prevent it. The dataset used for the experiments is the IBM HR Analytics Employee Attrition & Performance dataset; some of its attributes are significantly correlated with employee attrition. We propose a Multiple Linear Regression model to predict monthly income; for classification, we propose Bagging and SVM models to predict attrition. We found that the proposed models have high performance.

KEYWORDS

Employee Attrition, Data Analytics, Data Mining, CRISP-DM


Progressive Approach of Organisation Knowledge Management

Makhala Mpho Motebele1 and Okuthe P. Kogeda2, 1Department of Quality and Operations Management, Faculty of Engineering and Built Environment, University of Johannesburg, South Africa, 2Department of Computer Science & Informatics, Faculty of Natural and Agricultural Sciences, University of the Free State, Bloemfontein, South Africa

ABSTRACT

Organisations have evolved, and so has the way knowledge can be managed for their benefit. Knowledge is always a valuable asset that influences how an organisation deals with challenges and opportunities for its success. In this paper, a progressive strategic approach is proposed. It ensures that the knowledge value chain is harnessed to manage change as a process. The Evenwicht, Samenhang and Heterogeneity (ESH) framework is used in this work. The proposed approach would ensure that knowledge management in an organisation is progressive and evolves with the changes taking place in the organisation.

KEYWORDS

Progressive approach, knowledge management, change process, ESH framework.


A Systematic Review on Natural Language Processing For Knowledge Management In Healthcare

Ganga Prasad Basyal1 and Bhaskar P Rimal2, 1Department of Business and Information Systems, Dakota State University, SD, USA and 2The Beacom College of Computer and Cyber Science, Dakota State University, SD, USA

ABSTRACT

Driven by the visions of data science, recent years have seen a paradigm shift in Natural Language Processing (NLP). NLP has set milestones in text processing and has proved to be the preferred choice for researchers in the healthcare domain. The objective of this paper is to identify the potential of NLP, especially how NLP is used to support the knowledge management process in the healthcare domain, making data a critical and trusted component in improving health outcomes. This paper provides a comprehensive survey of state-of-the-art NLP research, with a particular focus on how knowledge is created, captured, shared, and applied in the healthcare domain. Our findings, first, identify the NLP techniques that support the knowledge extraction and knowledge capture processes in healthcare. Second, we propose a conceptual model for the knowledge extraction process through NLP. Finally, we discuss a set of issues, challenges, and directions for future research.

KEYWORDS

Knowledge Management, Natural Language Processing, Knowledge Extraction, Knowledge Capture.


Prediction of Cancer Microarray and DNA Methylation Data using Non-negative Matrix Factorization

Parth Patel1, Kalpdrum Passi1, Chakresh Kumar Jain2, 1Department of Mathematics and Computer Science, Laurentian University, Sudbury, Ontario, Canada, 2Department of Biotechnology, Jaypee Institute of Information Technology, Noida, India

ABSTRACT

Over the past few years, there has been a considerable spread of microarray technology in many biological settings, particularly those pertaining to cancer diseases like leukemia, prostate cancer, colon cancer, etc. The primary bottleneck in the proper understanding of such datasets lies in their dimensionality, so for an efficient and effective means of studying them, a reduction in their dimension to a large extent is deemed necessary. This study is a bid to suggest different algorithms and approaches for reducing the dimensionality of such microarray datasets. It exploits the matrix-like structure of microarray data and uses a popular technique called Non-Negative Matrix Factorization (NMF) to reduce the dimensionality, primarily in the field of biological data. Classification accuracies are then compared across these algorithms. The technique achieves an accuracy of 98%.
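The NMF-then-classify pipeline described in this abstract can be sketched with scikit-learn. This is an illustrative outline, not the study's pipeline: the digits dataset stands in for a microarray matrix (both are non-negative), and the choice of 16 components and a logistic-regression classifier is arbitrary.

```python
# NMF dimensionality reduction followed by classification, as an illustration.
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # non-negative 64-dim samples
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Factor X ~= W @ H; the rows of W are the reduced 16-dim representations.
nmf = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W_tr = nmf.fit_transform(X_tr)
W_te = nmf.transform(X_te)                   # project test data onto the same H

clf = LogisticRegression(max_iter=1000).fit(W_tr, y_tr)
acc = clf.score(W_te, y_te)
print(f"accuracy in reduced space: {acc:.3f}")
```

Fitting the factorization only on the training split and reusing its basis `H` for the test split avoids leaking test information into the reduced representation.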

KEYWORDS

Microarray datasets, Feature Extraction, Feature Selection, Principal Component Analysis, Non-negative Matrix Factorization, Machine learning.


What Comes After Shift-left? Shift-open with A Subsystem Firmware Design Framework

Rob Grant, AMD Inc, Markham, Ontario, Canada

ABSTRACT

This paper describes a development framework for pre-silicon verification of firmware within an SoC subsystem. The development framework integrates subsystem firmware into the RTL verification infrastructure of the subsystem to allow pre-silicon RTL/firmware co-simulations. The framework has been used to “shift-left” firmware development of subsystems developed in-house. This paper proposes that these techniques could also be applied to 3rd-party subsystem firmware to verify the proper integration of the subsystem. The ability to verify the integration of both RTL and firmware from a subsystem provider enables a “shift open” in SoC design, allowing an increase in both the use and complexity of 3rd-party SoC subsystems containing firmware.

KEYWORDS

firmware co-simulation, shift-left, SoC verification


DocPro: A Framework for Building Document Processing Systems

Ming-Jen Huang, Chun-Fang Huang, Chiching Wei, Foxit Software Inc., Albrae Street, Fremont, USA

ABSTRACT

With the recent advances in deep neural networks, we observe new applications of natural language processing (NLP) and computer vision (CV) technologies. Especially when applying them to document processing, NLP and CV tasks are usually treated individually in research work and open source libraries. However, designing a real-world document processing system needs to weave NLP and CV tasks and the information they generate together. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. This paper introduces a framework to fulfill this need. The framework includes a representation model definition for holding the generated information and specifications defining the coordination between the NLP and CV tasks.

KEYWORDS

Document Processing, Framework, Formal definition, Machine Learning


Object Detection in Traffic Scenarios - A Comparison of Traditional and Deep Learning Approaches

Gopi Krishna Erabati, Nuno Gonçalves and Hélder Araújo, Institute of Systems and Robotics, University of Coimbra, Portugal

ABSTRACT

In the area of computer vision, research on object detection algorithms has grown rapidly, as detection is the fundamental step for automation, specifically for self-driving vehicles. This work presents a comparison of traditional and deep learning approaches for the task of object detection in traffic scenarios. A handcrafted feature descriptor, the Histogram of Oriented Gradients (HOG), with a linear Support Vector Machine (SVM) classifier, is compared with deep learning approaches such as the Single Shot Detector (SSD) and You Only Look Once (YOLO), in terms of mean Average Precision (mAP) and processing speed. The SSD algorithm is implemented with different backbone architectures, namely VGG16, MobileNetV2 and ResNeXt50, and similarly the YOLO algorithm with MobileNetV1 and ResNet50, to compare the performance of the approaches. Training and inference are performed on the PASCAL VOC 2007 and 2012 training data and the PASCAL VOC 2007 test data, respectively. We consider five classes relevant to traffic scenarios, namely bicycle, bus, car, motorbike and person, for the calculation of mAP. Both qualitative and quantitative results are presented for comparison. For the task of object detection, the deep learning approaches outperform the traditional approach in both accuracy and speed. This is achieved at the cost of requiring a large amount of data, high computation power and time to train a deep learning model.

KEYWORDS

Object Detection, Deep Learning, SVM, SSD & YOLO


Extracting Sentence Vectors from LSTM for Sentence Suggestion

Akhil Ambekar, Cheng Chao Lu and Chih Lai, Graduate Program in Software, University of St Thomas, St Paul, Minnesota, U.S.A

ABSTRACT

In this paper, we propose a sentence suggestion system that, based on an input sentence, will suggest the top k semantically relevant post-sentences. In our system, we first train an LSTM (Long Short-Term Memory) neural network to classify the topics of training sentences. After the training, we extract the internal states of the LSTM as a sentence vector to capture the semantics of each training sentence. Each pair of pre-sentence and post-sentence is stored with its corresponding sentence vectors. When given a new input (pre-)sentence, our system suggests the post-sentences in four steps: (1) Classifying the topic of the input sentence, (2) Extracting the sentence vector of the input sentence from the hidden states of the LSTM, (3) Searching a candidate sentence from our dataset that has its sentence vector closest to the input sentence vector, and (4) Suggesting the top k post-sentences that have their sentence vectors closest to the post-sentence vector paired with the candidate sentence vector. We evaluate both the quality of sentence vectors and the sentences suggested by our system based on their topic accuracies and semantic relevancies with respect to the input sentences.
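Steps (3) and (4) of the system above reduce to nearest-neighbour search over stored sentence vectors by cosine similarity, which can be sketched with NumPy. In the sketch, random vectors stand in for the LSTM-derived sentence vectors, and the helper `top_k_post` is a hypothetical name introduced here for illustration.

```python
# Cosine nearest-neighbour retrieval over paired (pre, post) sentence vectors.
import numpy as np

rng = np.random.default_rng(0)
pre_vecs = rng.normal(size=(100, 64))    # stand-ins for pre-sentence vectors
post_vecs = rng.normal(size=(100, 64))   # vectors of the paired post-sentences

def top_k_post(query_vec, k=3):
    """Find the stored pre-sentence closest to the query (step 3), then return
    indices of the k post-sentences closest to its paired post-sentence (step 4)."""
    def cosine(M, v):
        return (M @ v) / (np.linalg.norm(M, axis=1) * np.linalg.norm(v))
    candidate = int(np.argmax(cosine(pre_vecs, query_vec)))
    sims = cosine(post_vecs, post_vecs[candidate])
    return list(np.argsort(sims)[::-1][:k])

hits = top_k_post(pre_vecs[7])
print(hits)  # index 7's own post-sentence ranks first
```

For larger datasets, the linear scan would typically be replaced by an approximate nearest-neighbour index, but the ranking logic is the same.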


Hierarchical Transformer for Multilingual Machine Translation

Albina Khusainova1, Adil Khan1, Adín Ramírez Rivera2 and Vitaly Romanov1, 1Innopolis University, Innopolis, Russia, 2University of Campinas, Campinas, Brazil

ABSTRACT

The choice of parameter sharing strategy in multilingual machine translation models determines how optimally the parameter space is used and hence directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, we suggest a new general approach to parameter sharing in multilingual machine translation (MT). The main idea is to use these expert language hierarchies as a basis for the multilingual architecture: the closer two languages are, the more parameters they share. This intuitive approach finds support in observations on the role of language affinity in recent MT literature. In this work, we test this idea and propose a proof of concept. We consider the simplest case with two source languages and one target language and show the potential of this approach. Namely, we demonstrate that the hierarchical architecture outperforms bilingual models and a multilingual model with full parameter sharing, and we show how the translation quality of the hierarchical model depends on the linguistic distance between the source languages.

KEYWORDS

NMT, Multilingual machine translation, Low-resource languages, Hierarchical model, Transformer


Using Holographically Compressed Embeddings in Question Answering

Salvador E. Barbosa, Department of Computer Science, Middle Tennessee State University, Murfreesboro, TN, USA

ABSTRACT

Word vector representations are central to deep learning natural language processing models. Many forms of these vectors, known as embeddings, exist, including word2vec and GloVe. Embeddings are trained on large corpora and learn each word's usage in context, capturing the semantic relationships between words. However, the semantics from such training are at the level of distinct words (known as word types), and can be ambiguous when, for example, a word type can be either a noun or a verb. In question answering, parts of speech and named entity types are important, but encoding these attributes in neural models expands the size of the input. This research employs holographic compression of pre-trained embeddings to represent a token, its part of speech, and its named entity type in the same dimension as representing only the token. The implementation, in a modified question answering recurrent deep learning network, shows that semantic relationships are preserved and yields strong performance.
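The core operation of holographic reduced representations can be sketched in NumPy: circular convolution binds a token vector with an attribute vector without increasing the dimension, and circular correlation approximately recovers the token given the attribute. The vectors below are random stand-ins for illustration; real systems would bind pre-trained embeddings, as the paper does.

```python
# Holographic binding/unbinding via FFT (circular convolution/correlation).
import numpy as np

rng = np.random.default_rng(1)
d = 512
token = rng.normal(0, 1 / np.sqrt(d), d)    # stand-in for a word embedding
pos_tag = rng.normal(0, 1 / np.sqrt(d), d)  # stand-in for a POS-tag vector

def bind(a, b):
    """Circular convolution via FFT: result stays in dimension d."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    """Circular correlation: approximate inverse of bind."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(b))))

bound = bind(token, pos_tag)         # same dimension as token alone
recovered = unbind(bound, pos_tag)

sim = recovered @ token / (np.linalg.norm(recovered) * np.linalg.norm(token))
print(f"cosine(recovered, token) = {sim:.2f}")  # well above chance (~0)
```

Recovery is noisy rather than exact, but in high dimensions the recovered vector is far closer to the original token than to unrelated vectors, which is what makes same-dimension compression of multiple attributes workable.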

KEYWORDS

Question Answering, Vector Embeddings, Holographic Reduced Representations, DrQA, SQuAD


A Semantic Question Answering in the Domain of Smart Factories

Orçun Oruç, Technische Universität Dresden, Software Technology Group, Nöthnitzer Str. 46, 01187 Dresden, Germany

ABSTRACT

Industrial manufacturing has become more interconnected among smart devices such as Internet of Things edge devices, tablets, manufacturing equipment, and smartphones. Smart factories have emerged and evolved with digital technologies and data analytics in manufacturing systems over the past few years. Smart factories produce complex data that enables digital manufacturing, smart supply chain management and enhanced assembly line control. Nowadays, smart factories produce a large amount of data that needs to be comprehensible to human operators and experts for decision making. However, linked data is still hard for human operators to understand and interpret; thus we need a system that translates linked data into natural language, or that summarizes the volume of linked data by eliminating undesirable results in the linked data repository. In this study, we propose a semantic question answering system in a restricted smart factory domain, attached to various data sources. In the end, we perform a qualitative and quantitative evaluation of the semantic question answering system, as well as discuss the findings and conclude the main points regarding our research questions.

KEYWORDS

Semantic Web, Web 3.0, Information Retrieval, Natural Language Processing, Industry 4.0.


A Descriptive Study on Emerging Methods for Data Security in Cloud Computing

Srinidhi Kulkarni, Rishabh Kumar Tripathi, Prof. Muktikanta Sahu, Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar, India

ABSTRACT

In this paper we present a literature study of the security algorithms that have been proposed to secure cloud computing platforms. The paper presents the potential threats and security issues of cloud computing platforms and the research carried out in these fields. Cryptography-based security algorithms such as RSA, DES, AES, ECC and Blowfish are discussed; the works relating to these algorithms were also studied and their results are presented. Some novel approaches in which machine learning frameworks are used to enforce the security of the cloud are also mentioned and discussed in detail. A comparative study of the security algorithms, based on their performance with respect to various impact factors of a system, is also presented, drawing on past research. The discussion in this paper is a generalized one, applicable to any service and any type of deployment of a cloud computing system. The paper aims to contribute to the domain knowledge of security and the different ways to enhance it.

KEYWORDS

Cloud computing, Security threats and breaches, Cryptography, Security Algorithms, Machine Learning.