International Conference on Big Data & Health Informatics (BDHI 2020)

November 28 ~ 29, 2020, Dubai, UAE

Accepted Papers

Anomaly Detection in Time Series Based on Deep Learning

Rim Romdhane1, Zeineb Ghrib1 and Rakia Jaziri2, 1Devoteam Research and Innovation, France, 2Paris 8 University, France


ABSTRACT

Anomaly detection is the process of identifying unexpected items or events in data sets that differ from the norm. It is an active research field which attracts the attention of many business and research actors. Typically, anomalous data can be connected to some kind of problem or rare event such as bank fraud, medical problems, cloud monitoring, or network intrusion detection. Effective anomaly detection for complex and high-dimensional time series data remains a challenging task. In this work, we propose an approach based on an LSTM autoencoder, trained on normal records to learn efficient representations of normal sequences, combined with a supervised classifier to detect abnormal data. Experimental results show that the encoding step based on the pretrained LSTM encoder yields an efficient representation of the data from which abnormal records can be accurately detected. In fact, the encoded representation significantly reduces the correlations between normal and abnormal records and provides an efficient latent data representation that consistently separates the two classes. The proposed approach was compared with state-of-the-art approaches [19], [16], [20], [27] and outperforms them by significantly reducing the classification error.
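
A minimal sketch of the two-stage idea in Python (Keras + scikit-learn); window length, feature count, latent size, and the random data are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: LSTM autoencoder trained on normal windows only, then a supervised
# classifier on the frozen encoder's latent representation.
# Shapes, sizes and the random data are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.linear_model import LogisticRegression

T, F, LATENT = 30, 5, 16                  # window length, features, latent size

inputs = keras.Input(shape=(T, F))
latent = layers.LSTM(LATENT)(inputs)      # encoder: sequence -> latent vector
decoded = layers.TimeDistributed(layers.Dense(F))(
    layers.LSTM(LATENT, return_sequences=True)(layers.RepeatVector(T)(latent)))
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

X_normal = np.random.rand(1000, T, F)     # placeholder normal training windows
autoencoder.fit(X_normal, X_normal, epochs=10, batch_size=64, verbose=0)

# Classify in the latent space learned from normal behaviour.
encoder = keras.Model(inputs, latent)
X_lab = np.random.rand(200, T, F)         # placeholder labeled windows
y_lab = np.random.randint(0, 2, 200)      # 0 = normal, 1 = anomaly
clf = LogisticRegression().fit(encoder.predict(X_lab), y_lab)
```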


KEYWORDS

Anomaly detection, LSTM Autoencoder, Supervised Classification, Latent Representation.



Local Branching Strategy-based Method for the Knapsack Problem with Setup

Samah Boukhari1, Isma Dahmani2 and Mhand Hifi3, 1LaROMaD, USTHB, BP 32 El Alia, 16111 Algiers, Algeria, 2AMCD-RO, USTHB, BP 32, El Alia, 16111 Bab Ezzouar, Algiers, Algeria, 3EPROAD EA4669, UPJV, 7 rue du Moulin Neuf, 80000 Amiens, France


ABSTRACT

In this paper, we propose to solve the knapsack problem with setups by combining a mixed linear relaxation with local branching. The mixed linear relaxation can be viewed as the driving problem and is solved with a special black-box solver, while local branching tries to enhance the solutions provided by adding a series of valid / invalid constraints. The performance of the proposed method is evaluated on benchmark instances from the literature and on new large-scale instances. Its results are compared to those reached by the CPLEX solver and by the best methods available in the literature. New results have been reached.
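
A minimal sketch of a single local branching step in Python with PuLP, on a toy knapsack-with-setups instance; the item data, incumbent solution, and radius K are illustrative assumptions:

```python
# Sketch of one local branching iteration on a toy knapsack with setups.
# Item/setup data, the incumbent and the radius K are illustrative assumptions.
import pulp

profits = {0: 10, 1: 7, 2: 8}
weights = {0: 4, 1: 3, 2: 5}
setup_of = {0: 0, 1: 0, 2: 1}             # item -> family requiring a setup
setup_cost = {0: 2, 1: 3}
setup_weight = {0: 1, 1: 2}
CAP, K = 9, 2                             # knapsack capacity, branching radius

prob = pulp.LpProblem("KPS", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", profits, cat="Binary")
y = pulp.LpVariable.dicts("y", setup_cost, cat="Binary")
prob += pulp.lpSum(profits[i] * x[i] for i in profits) \
      - pulp.lpSum(setup_cost[f] * y[f] for f in setup_cost)
prob += pulp.lpSum(weights[i] * x[i] for i in profits) \
      + pulp.lpSum(setup_weight[f] * y[f] for f in setup_cost) <= CAP
for i, f in setup_of.items():
    prob += x[i] <= y[f]                  # an item needs its family's setup

incumbent = {0: 1, 1: 1, 2: 0}            # e.g. from the relaxation-driven phase
# Local branching cut: stay within Hamming distance K of the incumbent.
prob += pulp.lpSum((1 - x[i]) if incumbent[i] else x[i] for i in profits) <= K
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: int(x[i].value()) for i in profits}, pulp.value(prob.objective))
```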


KEYWORDS

Knapsack, Setups, Local Branching, Relaxation.



Real-Time Image and Video Human Detection

Boudaoud Lakhdar El Amine, Computer Science Laboratory of Oran (LIO), University of Oran 1 Ahmed Ben Bella


ABSTRACT

The problem of human detection is to automatically locate people in an image or video sequence; it has been actively researched over the past decade. This paper aims to provide a comprehensive survey of recent developments and challenges in human detection, which is used in diverse application areas including abnormal event detection, human gait characterization, congestion analysis, person identification, gender classification, and fall detection for elderly people. Unlike previous surveys, this survey is organized along the thread of human object descriptors.
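
As a concrete taste of one classical detector family covered by such surveys, a minimal OpenCV sketch using the stock HOG + linear SVM people detector (the image paths are hypothetical):

```python
# Minimal sketch: HOG + linear SVM pedestrian detection with OpenCV,
# one of the classical approaches surveys of this kind cover.
# The input and output paths are hypothetical.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")           # hypothetical input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (px, py, w, h) in boxes:              # draw one rectangle per detection
    cv2.rectangle(frame, (px, py), (px + w, py + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```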


KEYWORDS

Human detection, background subtraction, tracking, classification, Face detection, Viola Jones, eye detection, OpenCV, frontal faces.



Intelligent Case Resolution Advisor: An Enterprise AI Solution for Efficient Customer Service Case Resolution

Anshuma Chandak and Anwitha Paruchuri, Applied Intelligence Labs, Accenture, San Jose, CA


ABSTRACT

As businesses become more global and with the rapid development of information technology, there is an increased need to automate customer service management using AI. The Intelligent Case Resolution Advisor reduces case resolution time by routing each issue to the right support engineer and providing high-quality solutions. It also helps train new engineers at half the cost and time.


KEYWORDS

Customer Service Management, Natural Language Processing, Unsupervised Clustering, Classification, Business Processes



Glaucoma Screening using Simple Fusion Features

Panaree Chaipayom1, Assoc. Prof. Somying Thainimit2, Dr. Duangrat Gansawat3 and Prof. Hirohiko Kaneko4, 1Department of Electrical Engineering, Kasetsart University, Bangkok, Thailand, 2Department of Electrical Engineering, Kasetsart University, Bangkok, Thailand, 3National Electronics and Computer Technology Center, Pathum Thani, Thailand and 4Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan


ABSTRACT

Glaucoma is the second most common cause of blindness. It is caused by high intraocular pressure within the eye, resulting in injury to the optic nerve. Fundus images are widely used in the diagnosis of glaucoma, so image processing technology is used in various systems that aid in screening. The use of a glaucoma screening system can help reduce costs and the workload of healthcare professionals. This study proposes a method for identifying glaucoma from fundus images using a fusion of three features that capture wavelet and texture characteristics: the Discrete Wavelet Transform (DWT), Principal Component Analysis (PCA), and Local Binary Patterns (LBP). A support vector machine (SVM) trained on the fused DWT, PCA, and LBP features achieves a high accuracy of 95% under tenfold cross-validation on the HRF database.
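
A minimal sketch of such a feature-fusion pipeline in Python (PyWavelets, scikit-image, scikit-learn); the images, labels, subband statistics, and PCA dimensionality are illustrative assumptions rather than the paper's exact recipe:

```python
# Sketch: DWT subband statistics + LBP histogram fused per image, PCA applied
# to the fused vector, SVM scored with tenfold cross-validation.
# Images, labels and feature choices are illustrative assumptions.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fused_features(img_u8):                       # img_u8: 2-D uint8 fundus ROI
    cA, (cH, cV, cD) = pywt.dwt2(img_u8, "db1")   # one-level wavelet decomposition
    dwt_feat = [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_feat, _ = np.histogram(lbp, bins=10, density=True)
    return np.concatenate([dwt_feat, lbp_feat])

images = (np.random.rand(45, 64, 64) * 255).astype(np.uint8)  # placeholder ROIs
labels = np.random.randint(0, 2, 45)                          # placeholder labels
X = PCA(n_components=8).fit_transform(
    np.array([fused_features(im) for im in images]))
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=10).mean())
```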


KEYWORDS

Glaucoma, Fundus Image, Data Mining, Feature Extraction, Feature Ranking, Classification.



Concatenation Technique in Convolutional Neural Networks for COVID-19 Detection Based on X-ray Images

Yakoop Razzaz Hamoud Qasim, Habeb Abdulkhaleq Mohammed Hassan AL-Sameai and Abdulelah Abdulkhaleq Mohammed Hassan, Department of Mechatronics and Robotics Engineering, Taiz University, Yemen


ABSTRACT

In this paper we present a convolutional neural network consisting of NASNet and MobileNet in parallel (concatenation) to classify three classes, COVID-19, normal, and pneumonia, using a dataset of 1083 X-ray images divided into 361 images per class. VGG16 and ResNet152-v2 models were also prepared and trained on the same dataset to compare their performance with that of the proposed model. After training the networks and verifying their performance, we obtained an overall accuracy of 96.91% for the proposed model, 92.59% for the VGG16 model, and 94.14% for ResNet152-v2. For the COVID-19 class, the proposed model achieved accuracy, sensitivity, specificity, and precision of 99.69%, 99.07%, 100%, and 100%, respectively. These results were better than those of the other models. We conclude that neural networks built from models in parallel are effective when the data available for training are small and the features of the different classes are similar.
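
A minimal Keras sketch of the parallel (concatenated) architecture; the input size and training configuration are assumptions:

```python
# Sketch: NASNetMobile and MobileNet share one input; their pooled features
# are concatenated and a softmax head separates the three classes.
# Input size and the training setup are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(224, 224, 3))
nas = keras.applications.NASNetMobile(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3))
mob = keras.applications.MobileNet(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
a = layers.GlobalAveragePooling2D()(nas(inp))
b = layers.GlobalAveragePooling2D()(mob(inp))
out = layers.Dense(3, activation="softmax")(layers.Concatenate()([a, b]))
model = keras.Model(inp, out)   # classes: COVID-19 / normal / pneumonia
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```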


KEYWORDS

Deep Learning, Concatenation Technique, Convolutional Neural Networks, COVID-19, Transfer Learning.



Finding Music Formal Concepts Consistent with Acoustic Similarity

Yoshiaki OKUBO, Faculty of Information Science and Technology, Hokkaido University, N-14 W-9, Sapporo 060-0814, Japan


ABSTRACT

In this paper, we present a method of finding conceptual clusters of music objects based on Formal Concept Analysis. A formal concept (FC) is defined as a pair of an extent and an intent, which are a set of objects and the set of terminological attributes commonly associated with those objects, respectively. Thus, an FC can be regarded as a conceptual cluster of similar objects whose similarity can be clearly stated in terms of the intent. We especially discuss FCs in the case of music objects, called music FCs. Since a music FC is based solely on terminological information, extracted FCs are not always satisfactory from an acoustic point of view. In order to improve their quality, we additionally require our FCs to be consistent with acoustic similarity. We design an efficient algorithm for extracting desirable music FCs. Our experimental results on The MagnaTagATune Dataset show the usefulness of the proposed method.
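
A naive Python sketch of formal concept extraction by closure, on a toy music context (objects and attributes are illustrative); the paper's acoustic consistency requirement would further prune the resulting concepts:

```python
# Naive sketch of formal concept enumeration: for every attribute subset we
# compute the extent (objects sharing all of them) and its closed intent.
# The toy music context below is an illustrative assumption.
from itertools import combinations

context = {                       # object -> terminological attributes
    "songA": {"rock", "guitar"},
    "songB": {"rock", "guitar", "loud"},
    "songC": {"jazz", "piano"},
}
attrs = set().union(*context.values())

def extent(intent_set):           # objects having every attribute in the intent
    return {o for o, a in context.items() if intent_set <= a}

def intent(ext):                  # attributes shared by all objects in the extent
    return set.intersection(*(context[o] for o in ext)) if ext else set(attrs)

concepts = set()
for r in range(len(attrs) + 1):
    for cand in combinations(sorted(attrs), r):
        e = extent(set(cand))
        concepts.add((frozenset(e), frozenset(intent(e))))
for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), sorted(i))   # each pair (extent, intent) is a formal concept
```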


KEYWORDS

formal concept analysis, music formal concepts, music objects, terminological similarity, acoustic similarity.



Improving the Previous Result for the Prediction of Stenosis in Coronary Arteries using Random Forest with SMOTE Filter and Swarm Search

Akansha Singh and Ashish Payal, Department of Computer Science and Engineering, GGSIP University, Dwarka, Delhi, India


ABSTRACT

An early prediction of a noxious disease like coronary artery disease (CAD) is crucial since it causes millions of deaths a year. Even after abundant research, we are still not able to settle on a final feature set for the early prediction of CAD. This study highlights two main factors that have helped greatly in achieving excellent results: the dataset imbalance problem and metaphor (swarm) searches for feature selection. The dataset imbalance problem was resolved using the Synthetic Minority Oversampling Technique (SMOTE). Along with the swarm searches, several other feature selection methods were compared in combination with several supervised classifiers. After comparing all the methods, the study shows that the swarm searches and the random forest classifier surpass the other methods. The accuracies obtained for the left anterior descending (LAD), left circumflex (LCX), and right coronary artery (RCA) are 97.2%, 95.97%, and 94.48%, respectively.
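
A minimal sketch of the imbalance-handling and classification step in Python (imbalanced-learn + scikit-learn); the data are placeholders, and the swarm feature search is not reproduced here:

```python
# Sketch: SMOTE inside an imblearn pipeline so oversampling happens only on
# the training folds, followed by a random forest. Data are placeholders and
# the swarm-based feature selection step is not reproduced.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(303, 20)            # placeholder CAD feature matrix
y = np.random.binomial(1, 0.25, 303)   # imbalanced stenosis labels (placeholder)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),  # synthesize minority samples per fold
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=10).mean())
```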


KEYWORDS

CAD, LAD, LCX, RCA, SMOTE, CFS, Metaphor Search, Random Forest, Naïve Bayes.



Multi-sensor Calibration Method based on Master-slave Coupling for 3D Reconstruction

Minhtuan Ha1,2, Dieuthuy Pham1,2, Yucheng Li1 and Changyan Xiao1, 1College of Electrical and Information Engineering, Hunan University, Changsha 410208, China, 2Faculty of Electrical Engineering, Saodo University, Haiduong 170000, Vietnam


ABSTRACT

Multi-sensor systems are known as an effective solution to problems in 3D reconstruction such as a small field of view and self-occlusion. However, the dispersed distribution of the imaging system's views still makes a global calibration of the whole system challenging. In this paper, a complete high-precision calibration scheme is presented. Firstly, the problem of extracting the projector's control points is solved using mark reconstruction. Then, a multi-sensor calibration method based on master-slave coupling is proposed, and a global calibration objective function is established for the system following the idea of the bundle adjustment method. The calibration parameters of the system are obtained through optimization, which avoids complex point-cloud registration operations. The experimental results show that the average center-of-gravity error of the control points with the proposed method is 5.729 µm, which is much better than conventional methods.
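
A deliberately toy sketch of the bundle-adjustment-style refinement in Python (SciPy): residuals from all sensors are stacked into one vector and minimized jointly. The one-offset-per-sensor model below is an illustrative assumption, not the paper's camera model:

```python
# Sketch of the global refinement idea: stack the reprojection residuals of
# all sensors into one vector and let a bundle-adjustment style solver
# optimize the shared parameters. The residual model here is deliberately toy.
import numpy as np
from scipy.optimize import least_squares

pts2d = np.random.rand(50, 2)                 # placeholder control-point targets
obs = [pts2d + 0.01 * np.random.randn(50, 2) for _ in range(3)]  # 3 sensors

def residuals(params):
    res = []
    for s in range(3):                        # one 2-D offset per sensor (toy)
        offset = params[2 * s: 2 * s + 2]
        res.append((pts2d + offset - obs[s]).ravel())
    return np.concatenate(res)                # all sensors optimized jointly

sol = least_squares(residuals, x0=np.zeros(6), method="trf")
print(sol.x.reshape(3, 2))                    # refined per-sensor offsets
```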


KEYWORDS

Multi-sensor calibration, Master-slave coupling, 3D reconstruction & Point cloud optimization.



A Grid-point Detection Method based on U-net for a Structured Light System

Dieuthuy Pham1,2, Minhtuan Ha1,2 and Changyan Xiao1, 1College of Electrical and Information Engineering, Hunan University, Changsha 410208, China, 2Faculty of Electrical Engineering, Saodo University, Haiduong 170000, Vietnam


ABSTRACT

Accurate detection of the feature points of the projected pattern plays an extremely important role in one-shot 3D reconstruction systems, especially those using a grid pattern. To solve this problem, this paper proposes a grid-point detection method based on U-net. A specific dataset is designed that includes images captured with a two-shot imaging method and images acquired with a one-shot imaging method. The images in the first group, once labeled, serve as the ground-truth images; together with the images captured at the same poses with the one-shot method, they are cut into small patches of 64x64 pixels and fed to the training set. The remaining images in the second group form the test set. The experimental results show that our method achieves better detection performance, with higher accuracy, than previous methods.
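
A minimal Keras sketch of a U-net sized for 64x64 patches; depth and filter counts are illustrative assumptions:

```python
# Minimal U-net sketch for 64x64 grid-point patches: two down blocks, a
# bottleneck, two up blocks with skip connections, and a sigmoid mask output.
# Depth and filter counts are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(t, filters):
    t = layers.Conv2D(filters, 3, padding="same", activation="relu")(t)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(t)

inp = keras.Input(shape=(64, 64, 1))
c1 = conv_block(inp, 16); p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 32);  p2 = layers.MaxPooling2D()(c2)
bott = conv_block(p2, 64)
u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(bott)
c3 = conv_block(layers.Concatenate()([u2, c2]), 32)
u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.Concatenate()([u1, c1]), 16)
out = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # grid-point probability map
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```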


KEYWORDS

Feature point detection, U-net architecture, Structured light system & Grid pattern.



Multi Scale Temporal Graph Networks for Skeleton-based Action Recognition

Tingwei Li1 and Ruiwen Zhang2, 1Department of Automation, Tsinghua University, Beijing, China, 2Department of Computer Science, Tsinghua University, Beijing, China


ABSTRACT

Graph convolutional networks (GCNs) can effectively capture the features of related nodes and improve the performance of a model, and increasing attention is being paid to employing GCNs in skeleton-based action recognition. However, existing GCN-based methods have two problems. First, the consistency of temporal and spatial features is ignored. To obtain spatiotemporal features simultaneously, we design a generic representation of skeleton sequences for action recognition and propose a novel model called Temporal Graph Networks (TGN). Second, the adjacency matrix of the graph describing the relations between joints usually depends only on the physical connections between joints. To describe these relations appropriately, we propose a multi-scale graph strategy, adopting a full-scale graph, a part-scale graph, and a core-scale graph to capture the local features of each joint and the contour features of important joints. Experiments were carried out on two large datasets, and the results show that TGN with our graph strategy outperforms state-of-the-art methods.
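
A NumPy sketch of the standard graph-convolution propagation rule applied once per scale, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W); the joint count and the core-scale graph below are toy assumptions:

```python
# Sketch: one graph-convolution step over a toy multi-scale skeleton.
# The same propagation rule is applied with a different adjacency per scale.
# Joint counts and the core-scale graph are illustrative assumptions.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

n_joints, feat_in, feat_out = 5, 3, 8
A_full = np.array([[0, 1, 0, 0, 0],                # full-scale: physical bones
                   [1, 0, 1, 0, 0],
                   [0, 1, 0, 1, 1],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0]], dtype=float)
A_core = np.ones((n_joints, n_joints)) - np.eye(n_joints)  # toy core-scale graph

H = np.random.rand(n_joints, feat_in)              # per-joint input features
W = np.random.rand(feat_in, feat_out)
H_full, H_core = gcn_layer(A_full, H, W), gcn_layer(A_core, H, W)
print(np.concatenate([H_full, H_core], axis=1).shape)  # fused multi-scale features
```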


KEYWORDS

Skeleton-based action recognition, Graph convolutional network, Multi-scale graphs.



Artist, Style and Year Classification using Face Recognition and Clustering with Convolutional Neural Networks

Doruk Pancaroglu, Department of Computer Engineering, Hacettepe University, Ankara, Turkey


ABSTRACT

Artist, year, and style classification of fine-art paintings is generally achieved using standard image classification methods, image segmentation, or, more recently, convolutional neural networks (CNNs). This work aims to use newly developed face recognition methods such as FaceNet, which use CNNs, to cluster fine-art paintings by the faces extracted from them, which are found abundantly. A dataset consisting of over 80,000 paintings from over 1000 artists is chosen, and three separate face recognition and clustering tasks are performed. The produced clusters are analyzed via the file names of the paintings and named by their majority artist, year range, and style. The clusters are further analyzed and their performance metrics are calculated. The study shows promising results, as the artists, years, and styles are clustered with accuracies of 58.8, 63.7, and 81.3 percent, while the clusters have average purities of 63.1, 72.4, and 85.9 percent.
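
A minimal Python sketch of the clustering-and-purity evaluation (scikit-learn); the embeddings and labels are placeholders standing in for FaceNet outputs and file-name metadata:

```python
# Sketch of the purity metric used to score clusters: each cluster is named
# after its majority label, and purity is the fraction of members matching
# that label. Embeddings and labels below are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.cluster import contingency_matrix

embeddings = np.random.rand(300, 128)        # placeholder face embeddings
true_labels = np.random.randint(0, 10, 300)  # placeholder artist ids

pred = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)
cm = contingency_matrix(true_labels, pred)   # rows: labels, cols: clusters
purity = cm.max(axis=0).sum() / cm.sum()     # majority-label mass per cluster
print(f"purity: {purity:.3f}")
```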


KEYWORDS

Face Recognition, Clustering, Convolutional Neural Networks, Art Identification.



Smart Farming

Oliver L. Iliev1, Ahmad Zakeri2, Bojan Despodov1, Navya Venkateshaiah2, Simona Ivkovska1, Kyaw Min Naing2 and Aleksandar Stojkovski1, 1Institute of Applied Sciences, American University of Europe - FON, Av. Kiro Gligorov bb, 1000 Skopje, Republic of Macedonia, 2School of Engineering, University of Wolverhampton, Wulfruna St., WV1 1LY, Wolverhampton, United Kingdom


ABSTRACT

Contemporary agriculture is facing numerous challenges, including a continually increasing demand for quality food, shortages of labour and arable land, reduced irrigation water, increased soil contamination, and loss of yields due to plant diseases and pests. In such circumstances, to maintain its efficiency the agricultural sector needs to resort to the latest networking and artificial intelligence (AI) techniques in order to optimize resources and sustainably produce quality, ecologically healthy food. An integrated tool based on Internet of Things (IoT) and Artificial Neural Network (ANN) technologies will enable the industry to collect, process, and transmit data, and to make autonomous decisions and take actions based on incorporated informal knowledge obtained through fuzzy logic, without the need for human interaction. The IoT offers basic communications infrastructure (used to connect smart devices, from sensors, vehicles, and Unmanned Aerial Vehicles (UAVs) to user-friendly mobile devices, over the Internet) and a range of services, such as local or remote information retrieval, intelligent information analysis, pattern recognition, AI-based autonomous decision making, and agricultural automation. There is no doubt that such integrated technology will revolutionize the agricultural industry, which is probably one of the most inefficient sectors today. In this paper we present our current project status and further developments.


KEYWORDS

Artificial Intelligence, Artificial Neural networks, Fuzzy Logic, Internet of Things, Unmanned Aerial Vehicles.



Left to Right-Right Most Parsing Algorithm with Lookahead

Jamil Ahmed, AvantureBytes, Canada


ABSTRACT

The Left to Right-Right Most (LR) parsing algorithm is a widely used algorithm for syntax analysis. It is driven by a parsing table, which is extracted from the grammar and specifies the actions to be taken during parsing. The algorithm requires that the parsing table have no action conflicts for the same input symbol. This requirement imposes a condition on the class of grammars over which LR algorithms work. However, there are grammars whose parsing tables hold action conflicts. In such cases, the algorithm needs the capability of scanning (looking ahead at) input symbols beyond the current one. In this paper, a ‘Left to Right’-‘Right Most’ parsing algorithm with lookahead capability is introduced; this look-ahead capability is the major contribution of the paper. The practicality of the proposed algorithm is substantiated by a parser implementation for the Context Free Grammar (CFG) of the previously proposed programming language “State Controlled Object Oriented Programming” (SCOOP), whose grammar has 125 productions and 192 item sets. The algorithm parses SCOOP even though the grammar requires looking ahead at input symbols due to action conflicts in its parsing table. The proposed LR parsing algorithm with lookahead capability can be viewed as an optimization of the ‘Simple Left to Right’-‘Right Most’ (SLR) parsing algorithm.
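
A minimal Python sketch of the table-driven LR loop on a toy grammar E -> E + n | n, whose hand-built SLR table is conflict-free; the paper's contribution is consulting additional lookahead symbols when a table cell holds a conflict, which this sketch does not implement:

```python
# Sketch of the table-driven LR loop on the toy grammar E -> E + n | n.
# The hand-built SLR table below has no conflicts; the paper extends this
# driver to consult extra lookahead symbols when a cell holds a conflict.
GRAMMAR = {1: ("E", 3), 2: ("E", 1)}                 # rule -> (lhs, rhs length)
ACTION = {
    (0, "n"): ("s", 1),
    (1, "+"): ("r", 2), (1, "$"): ("r", 2),          # reduce E -> n
    (2, "+"): ("s", 3), (2, "$"): ("acc", None),
    (3, "n"): ("s", 4),
    (4, "+"): ("r", 1), (4, "$"): ("r", 1),          # reduce E -> E + n
}
GOTO = {(0, "E"): 2}

def lr_parse(tokens):
    stack, toks = [0], list(tokens) + ["$"]
    while True:
        act = ACTION.get((stack[-1], toks[0]))
        if act is None:
            return False                              # syntax error
        kind, arg = act
        if kind == "s":                               # shift
            stack.append(arg); toks.pop(0)
        elif kind == "r":                             # reduce by rule `arg`
            lhs, n = GRAMMAR[arg]
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], lhs)])
        else:
            return True                               # accept

print(lr_parse(["n", "+", "n"]))   # True
print(lr_parse(["+", "n"]))        # False
```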


KEYWORDS

Left to Right-Right Most (LR) Parsing, Syntax Analysis, Bottom-Up parsing algorithm.



Concept Disambiguation in Wikification using Multiple Overlapping Contexts

Mozhgan Saeidi and Evangelos Milios, Department of Computer Science, Dalhousie University, Halifax, Canada


ABSTRACT

Wikification is a method to automatically enrich a text with links to Wikipedia as an encyclopedic knowledge base. An existing approach to speed up coherence-based disambiguation is to divide a large document into chunks, identify a ‘key-entity’ in each chunk, and, for each entity, pick the sense most similar to the chosen meaning of the key-entity. The partitioning of the input into disjoint chunks means that the most appropriate key-entity for disambiguating a given mention may lie in an adjacent chunk, which negatively affects the accuracy of this method. In this paper, we demonstrate that using overlapping windows instead of disjoint chunks increases the accuracy of the Wikifier while increasing its computational cost only slightly. A careful inspection of the word senses chosen by our method revealed that it corrects most of the disambiguation errors the baseline method makes due to partitioning the input into disjoint chunks.
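
A minimal Python sketch contrasting disjoint chunking with overlapping windows over a mention sequence; window size and stride are illustrative:

```python
# Sketch: disjoint chunks vs. overlapping windows over a mention sequence.
# With overlap, a key-entity near a chunk border can still inform the
# disambiguation of mentions in the neighboring window.
def disjoint_chunks(mentions, size):
    return [mentions[i:i + size] for i in range(0, len(mentions), size)]

def overlapping_windows(mentions, size, stride):
    return [mentions[i:i + size]
            for i in range(0, max(len(mentions) - size, 0) + 1, stride)]

mentions = list("ABCDEFGH")                 # placeholder mention sequence
print(disjoint_chunks(mentions, 4))         # [['A'..'D'], ['E'..'H']]
print(overlapping_windows(mentions, 4, 2))  # windows share half their mentions
```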


KEYWORDS

Wikification, Word Sense Disambiguation, Keyword Extraction, Wikipedia, Semantic Annotation.



A Graph-based Semantical Extractive Text Summarization

Mina Samizadeh1 and Behrouz Minaei-Bidgoli2, 1Department of Computer and Information Sciences, University of Delaware, Newark, Delaware, USA, 2Department of Computer Engineering, Iran University of Science and Technology, Tehran, Iran


ABSTRACT

In recent years, there has been an explosion in the amount of information produced by various sources on different topics. To understand this massive amount of information and knowledge, we need to condense the important information in the form of summaries. Hence, there is an intense and growing interest among the research community in developing new approaches to automatically summarize text so that it effectively conveys the main idea of the topic. An optimized text summarization system generates a summary, a short text that includes the important information of the document. Many researchers have been trying to improve techniques for generating machine-made summaries that are similar to human-made ones. As an unsupervised learning method, the TextRank algorithm (an extension of the PageRank algorithm, the base algorithm of the Google search engine for searching and ranking pages) performs well on large-scale text mining, especially for text summarization and keyword extraction [1]. It automatically extracts important sentences from the original text, but it neglects the semantic similarity between sentences and words, which has a significant effect on the results. To overcome this problem, we propose a new method that adds semantics to the algorithm by training a doc2vec model and acquiring a vector for each sentence in the training set. By calculating the cosine similarity between sentences in a document, we weight the relationships between sentences, providing edge weights for a graph whose nodes are the sentences of the input text. We then apply the TextRank algorithm to return the most important sentences, on the assumption that more important sentences score higher in their relationships with other sentences and contain more of the useful information in the input document.
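
A minimal Python sketch of this semantic TextRank variant (gensim + networkx); the corpus, vector size, and training epochs are illustrative assumptions:

```python
# Sketch: doc2vec sentence vectors -> cosine-weighted sentence graph ->
# PageRank (TextRank) scores. Corpus and hyperparameters are illustrative.
import networkx as nx
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

sentences = ["the cat sat on the mat",
             "a cat rested on a mat",
             "stock prices fell sharply today"]
tagged = [TaggedDocument(s.split(), [i]) for i, s in enumerate(sentences)]
model = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=50)

vecs = np.array([model.dv[i] for i in range(len(sentences))])
norm = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
sim = np.clip(norm @ norm.T, 0.0, None)   # non-negative cosine edge weights
np.fill_diagonal(sim, 0.0)

scores = nx.pagerank(nx.from_numpy_array(sim))  # weighted TextRank
print(sentences[max(scores, key=scores.get)])   # top-ranked sentence
```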


KEYWORDS

Text summarization, text clustering, word2vec model, doc2vec model, TextRank algorithm.



Predicting Failures of Molteno and Baerveldt Glaucoma Drainage Devices Using Machine Learning Models


Paul Morrison1, Maxwell Dixon2, Arsham Sheybani2, Bahareh Rahmani1, 1Fontbonne University, Mathematics and Computer Science Department, St. Louis, MO, 2Washington University, Department of Ophthalmology and Visual Sciences, St. Louis, MO


ABSTRACT

The purpose of this retrospective study is to measure machine learning models' ability to predict glaucoma drainage device (GDD) failure based on demographic information and preoperative measurements. The medical records of sixty-two patients were used. Potential predictors included the patient's race, age, sex, preoperative intraocular pressure (IOP), preoperative visual acuity, number of IOP-lowering medications, and number and type of previous ophthalmic surgeries. Failure was defined as final IOP greater than 18 mm Hg, reduction in IOP less than 20% from baseline, or need for reoperation unrelated to normal implant maintenance. Five classifiers were compared: logistic regression, artificial neural network, random forest, decision tree, and support vector machine. Recursive feature elimination was used to shrink the number of predictors and grid search was used to choose hyperparameters. To prevent leakage, nested cross-validation was used throughout. Overall, the best classifier was logistic regression.
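
A minimal scikit-learn sketch of the leakage-safe nested cross-validation described above; the data, feature counts, and hyperparameter grid are placeholder assumptions:

```python
# Sketch: recursive feature elimination and the hyperparameter grid live
# inside the inner loop, while the outer loop only scores, preventing
# leakage. Data and the grid are placeholder assumptions.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

X = np.random.rand(62, 9)               # placeholder preoperative predictors
y = np.random.randint(0, 2, 62)         # placeholder GDD failure labels

pipe = Pipeline([
    ("rfe", RFE(LogisticRegression(max_iter=1000))),
    ("clf", LogisticRegression(max_iter=1000)),
])
grid = {"rfe__n_features_to_select": [3, 5, 7], "clf__C": [0.1, 1.0, 10.0]}
inner = GridSearchCV(pipe, grid, cv=5)          # inner loop: model selection
outer = cross_val_score(inner, X, y, cv=5)      # outer loop: unbiased estimate
print(outer.mean())
```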



A Predictive Model for Kidney Transplant Graft Survival using Machine Learning


Eric S. Pahl1, W. Nick Street2, Hans J. Johnson3 and Alan I. Reed4, 1Health Informatics, University of Iowa, Iowa, USA, 2Management Sciences, University of Iowa, Iowa, USA, 3Electrical and Computer Engineering, University of Iowa, Iowa, USA, 4Organ Transplant Centre, University of Iowa, Iowa, USA


ABSTRACT

Kidney transplantation is the best treatment for end-stage renal failure patients. The predominant method used for kidney quality assessment is the Cox regression-based kidney donor risk index. A machine learning method may provide improved prediction of transplant outcomes and help decision-making. A popular tree-based machine learning method, random forest, was trained and evaluated with the same data originally used to develop the risk index (70,242 observations from 1995-2005). The random forest successfully predicted 2,148 more transplants than the risk index, at an equal type II error rate of 10%. Predictions were analyzed against follow-up survival outcomes up to 240 months after transplant using Kaplan-Meier analysis, confirming that the random forest performed significantly better than the risk index (p < 0.05). The random forest predicted significantly more successful and longer-surviving transplants than the risk index. Random forests and other machine learning models may improve transplant decisions.
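
A minimal Python sketch of the follow-up comparison with lifelines (a Kaplan-Meier fit plus a log-rank test); all durations and groupings below are synthetic placeholders, not the study data:

```python
# Sketch: fit Kaplan-Meier curves for grafts each model accepted and compare
# the groups with a log-rank test. All numbers are synthetic placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
months_rf = rng.exponential(140, 500).clip(max=240)   # placeholder survival times
months_kdri = rng.exponential(120, 500).clip(max=240)
event_rf = rng.integers(0, 2, 500)                    # 1 = graft failure observed
event_kdri = rng.integers(0, 2, 500)

kmf = KaplanMeierFitter()
kmf.fit(months_rf, event_observed=event_rf, label="random forest picks")
print(kmf.median_survival_time_)

res = logrank_test(months_rf, months_kdri, event_rf, event_kdri)
print(res.p_value)                                    # significance of the gap
```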


KEYWORDS

Kidney Transplant, Decision Support, Random Forest, Health Informatics, Clinical Decision Making, Machine Learning & Survival Analysis.



Trust Computational Analysis of an Internet of Things

M. Rajendra Prasad1 and D. Krishna Reddy2, 1Department of Electronics and Communication Engineering, Vidya Jyothi Institute of Technology, Hyderabad, 2Department of Electronics and Communication Engineering, Chaitanya Bharathi Institute of Technology, Hyderabad


ABSTRACT

The immense growth of wireless applications running on dedicated digital systems, known as the Internet of Things (IoT) in the communication engineering domain, enables telecom application development engineers to analyze various issues, and it is necessary to resolve these issues at the early stages of the product development life cycle. A novel Trust Computational Methodology (TCM) is developed to evaluate and analyze the trust performance of hardware and software on Unix/Linux/POSIX operating systems targeted for ARM-based IoT applications. It is also used to explore various security issues related to the hardware and firmware of an IoT device. This paper describes the evaluation of TCM to improve the network performance of packet transmission and reception of an IoT device by converting untrusted components into security-aware components. Extensive testing of the proposed approach was carried out, and the performance of the IoT device was measured and analyzed against Default Computational Methodology (DCM) systems using different network parameters. The obtained results prove its effectiveness compared to existing IoT testing methodologies. TCM not only increases the system-level performance of the IoT device but also helps meet the developer's telecom application constraints. An active Denial of Service (DoS) attack is tested and discussed under both TCM and DCM in the results.


KEYWORDS

Internet of Things (IoT), Time Slot Packets, Trust Computational Methodology, Performance Analysis, Denial of Service Attack.