7th International Conference on Bioinformatics & Biosciences (BIOS 2021)

December 18 ~ 19, 2021, Dubai, UAE

Accepted Papers

Predictive Modelling of Covid-19 Stimulus Funds Paid for Nursing Home Quality Incentive Program

Omar Al-Azzam and Paul Court, Department of Computer Science and Information Technology, St. Cloud State University, St. Cloud, MN 56301, USA

ABSTRACT

Painstaking measures should be taken to determine how federal dollars are spent. Proper justification for the allocation of funds, rooted in logic and fairness, leads to trust and transparency. The COVID-19 pandemic has warranted a rapid response by government agencies to provide vital aid to those in need. Decisions made should be evaluated in hindsight to see whether they indeed achieved their objectives. In this paper, the data collected in the final four months of 2020 to determine funding for nursing home facilities via the Quality Incentive Program are analysed using data mining techniques. The objective is to determine the relationships among the numeric variables and the formulae given. The dataset was assembled by the Health Resources and Services Administration. Results are given for the reader's insight and interpretation. The data collection and analytical process brings new questions to light, which merit further analysis.

KEYWORDS

Predictive modelling, Cross validation, Linear Regression.
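The cross-validated linear-regression workflow the abstract describes can be sketched as follows. This is a minimal illustration on synthetic data: the HRSA dataset is not reproduced here, so the variable names (facility size, payout) and values are invented stand-ins, not the paper's data.

```python
import numpy as np

def kfold_linear_regression(X, y, k=5, seed=0):
    """k-fold cross-validated ordinary least squares.

    Returns the mean squared error on each held-out fold.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Append an intercept column and solve by least squares.
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errors.append(float(np.mean((Xte @ beta - y[test]) ** 2)))
    return errors

# Toy example: hypothetical payouts roughly proportional to facility size.
rng = np.random.default_rng(1)
size = rng.uniform(50, 200, 100)            # hypothetical facility sizes
funds = 3.0 * size + rng.normal(0, 5, 100)  # hypothetical payouts
print(kfold_linear_regression(size.reshape(-1, 1), funds))
```

Each fold's held-out error estimates how well the fitted formula generalizes, which is the point of cross-validation in the abstract.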


HVAC Automation through Deep Reinforcement Learning

Kyle J. Cantrell and Carlos W. Morato, PhD, Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, USA

ABSTRACT

We present a framework for developing, simulating, assessing, and deploying a Deep Q-Learning agent capable of dropping in in place of existing classical HVAC controllers. The necessary aspects of integrating into modern building automation networks are discussed, along with other Deep Reinforcement Learning approaches for HVAC. Benchmarks between several Deep Q-Networks and traditional classical control algorithms are demonstrated. Finally, a framework based on OpenAI's Gym and ASHRAE's BACnet is detailed to demonstrate how to deploy Deep Q-Learning methods in the field.

KEYWORDS

Deep learning, deep reinforcement learning, deep q-learning, building automation, BACnet.
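A tabular simplification of the Q-learning idea behind the abstract can be sketched as follows: an agent learns when to switch a heater on or off to hold a temperature setpoint. The room model, constants, and discretization are illustrative assumptions, not the paper's simulation environment; a Deep Q-Network would replace the Q table with a neural network.

```python
import numpy as np

SETPOINT = 21
TEMPS = np.arange(15, 28)        # discretized room temperatures, in degrees C
Q = np.zeros((len(TEMPS), 2))    # actions: 0 = heater off, 1 = heater on
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

def step(t_idx, action):
    """Toy room dynamics: heating warms by one bin, losses cool by one."""
    t_idx = min(t_idx + 1, len(TEMPS) - 1) if action else max(t_idx - 1, 0)
    return t_idx, -abs(int(TEMPS[t_idx]) - SETPOINT)  # penalize setpoint error

s = 0
for _ in range(20000):           # off-policy: explore with random actions
    a = int(rng.integers(2))
    s2, r = step(s, a)
    # Standard Q-learning update.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

# The learned greedy policy heats below the setpoint and idles above it.
print(int(np.argmax(Q[TEMPS.searchsorted(18)])))   # heater on below setpoint
print(int(np.argmax(Q[TEMPS.searchsorted(26)])))   # heater off above setpoint
```

The same update rule drives a DQN; the network merely approximates the Q table when the state space (temperatures, occupancy, weather) is too large to enumerate.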


Anchor’s Density Minimization for Localization in WSN

Nour Zaarour, Nadir Hakem and Nahi Kandil, Engineering School, UQAT-LRTCS, Rouyn-Noranda, Canada

ABSTRACT

In wireless sensor networks (WSN), high-accuracy localization is crucial both for WSN management and for numerous other location-based applications. Only a subset of nodes in a WSN is deployed as anchor nodes, with their locations known a priori, to localize unknown sensor nodes. The accuracy of the estimated positions depends on the number of anchor nodes. Increasing the number or ratio of anchors will undoubtedly increase localization accuracy; however, it severely constrains the flexibility of WSN deployment while raising cost and energy consumption. This paper aims to drastically reduce the number or ratio of anchors in a WSN deployment while ensuring a good trade-off with localization accuracy. Hence, we present an approach to decrease the number of anchor nodes without compromising localization accuracy. Assuming a random string WSN topology, results in terms of anchor rates and localization accuracy are presented and show a significant reduction in anchor deployment rates, from 32% to 2%.

KEYWORDS

Wireless sensor network (WSN), anchors, received signal strength (RSS), localization, path-loss exponent (PLE), connectivity.


Improving Performance of Heart Disease Prediction through Feature Dependency Extraction

Usman Abdullahi Musa, Muhammad Sirajo Aliyu PhD, Abdurrazak Umar Abdullahi, Duda Sani Abdullahi, Saadatu Gimba, Federal University Dutse, Nigeria

ABSTRACT

Cardiovascular disease is a significant public health concern responsible for many deaths annually. It also causes a significant amount of morbidity and impairment in humans. The growth of health care data through the use of electronic health record (EHR) systems makes it possible to analyse the data and forecast diverse scenarios for numerous fields. Accurate prediction of disease through machine learning algorithms is needed because many contributing factors are beyond what the human mind can process. Numerous machine learning algorithms such as Random Forest, Logistic Regression, ANN, K-Nearest Neighbor, and SVM have been applied to the Cleveland heart dataset; however, modeling with Bayesian Networks (BN) remains limited. In this study, the widely used 14 features of the Cleveland heart data, collected from the UCI repository, are modeled using BN. Experimental results show that feature reduction techniques effectively improve the prediction performance of the classifier. The aim of the study is to examine how feature reduction can increase performance and to extract the feature dependencies that affect the performance of the classifier.

KEYWORDS

Machine Learning, Bayesian Network (BN), Naïve Bayes, Logistic Regression, KNN, Heart Disease, Prediction.


Appending Security Theories to Projects in Upper-Division CS Courses

Vahab Pournaghshband1 and Hassan Pournaghshband2, 1Computer Science Department, University of San Francisco, 2Software Engineering & Game Development Department, Kennesaw State University

ABSTRACT

Software systems have been under continued attack by malicious entities, and in some cases the consequences have been catastrophic. To tackle this pervasive problem, the academic world has significantly increased its offering of computer security-related courses during the past decade. In fact, offering these courses has become a standard part of the curriculum for many computing disciplines. While many proposals suggest adding this appealing topic to non-security CS courses, many faculty do not entirely support the idea, for a convincing reason: they rightfully claim that each of these courses is already packed with concepts and materials developed for that course, leaving little room for other topics. In this study, we show how exposing students to security concepts can be incorporated into upper-division CS courses without increasing the effort normally required of students or the instructor. We show how to develop a project of this nature that can be appended to an already existing course project. We have successfully employed our proposed approach in two of our core CS courses and present them in this paper as case studies.

KEYWORDS

Computer Science Education, Computer Security, Security Mindset.


Emotion in Virtual Reality

Darlene Barker, Haim Levkowitz, Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, 01854

ABSTRACT

To make a greater impact on social interaction within virtual reality (VR), we need to consider the role of emotions in our interpersonal communications and how we can reproduce it within VR. In this paper we propose using emotion cues based on voice, facial expressions, and touch to create the emotional closeness and nonverbal intimacy needed in nonphysical interpersonal communication. Virtual and long-distance communications lack the physical contact of in-person interaction and the nonverbal cues that enhance what a conversation conveys. To enhance communications, haptic devices and tactile sensations can help deliver touch between parties, and machine learning can provide emotion recognition based on data collected from the other sensory devices, all working towards better long-distance communications. In this paper we present a direction for further research on how to enhance long-distance communications within VR through emotion recognition and touch, to achieve a close-to-real interaction.

KEYWORDS

Virtual reality, touch, appearance, virtual character, long-distance communication, interpersonal touch, human-machine interfaces.


Efficient Segmentation and Feature Selection using Harris Hawks Optimization Technique for Brain Glioma Detection in MRI Images

Champakamala S, Karunakara K, Dept. of Information Science and Engineering, Sri Siddhartha Institute of Technology, Siddhartha Academy of Higher Education, Tumakuru 572107, Karnataka, India

ABSTRACT

Nowadays, MRI (Magnetic Resonance Imaging) is extensively used for the detection of tumors (gliomas) and in the diagnosis of different types of tissue abnormalities. Automatic segmentation and classification procedures for medical images are very important for early treatment planning and clinical assessment of brain tumors. Advancements in the field of computerized medical imaging play a major role in systematic research and support doctors in offering essential treatments to patients with quick decision making. This work focuses on efficient segmentation and classification using deep learning (DL) models, motivated by the need to diagnose tumor growth and plan treatment. The following techniques are integrated to detect brain tumors: pre-processing, segmentation, feature extraction, feature selection, and classification. Initially, pre-processing is done to improve image quality. Segmentation is then performed using neutrosophic set expert maximum fuzzy sure entropy (NSEMFSE) with the OTSU method. The next step is feature extraction, which makes use of GLCM (Gray Level Co-occurrence Matrix), SIFT (Scale-Invariant Feature Transform) descriptor, and BoW (Bag of Words) techniques. The Harris hawks optimization (HHO) algorithm is used for feature selection. Finally, the brain tumor is classified as benign or malignant using a VWSE (variable-wise weighted stacked auto-encoder), and malignant tumors are further classified as low, medium, or high grade using the social ski-driver (SSD) optimization algorithm. Simulations are performed in Python. The dataset used is BRATS 2020. The performance of the proposed method is measured in terms of accuracy, precision, recall, and F1 score.

KEYWORDS

Brain tumor, Deep Neural Networks, Segmentation, Medical Imaging, BRATS 2020.
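Of the feature extractors listed in the abstract, the GLCM is compact enough to sketch. The minimal version below, assuming one pixel offset and two Haralick-style statistics on an invented toy image, hints at what the extraction stage computes; it is not the paper's implementation.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, plus two
    Haralick-style features (contrast, homogeneity).

    `img` is a 2-D integer array already quantized to `levels` gray levels.
    """
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1  # count pixel pairs
    glcm /= glcm.sum()                                 # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = float(np.sum(glcm * (i - j) ** 2))
    homogeneity = float(np.sum(glcm / (1.0 + (i - j) ** 2)))
    return glcm, contrast, homogeneity

# Toy 4x4 "image" quantized to 8 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
_, contrast, homogeneity = glcm_features(img)
print(round(contrast, 3), round(homogeneity, 3))
```

In practice several offsets and more statistics (energy, correlation, entropy) are accumulated into a feature vector that the HHO selection stage then prunes.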


Risk Analysis in the Preparation of a Business Continuity Plan (BCP) in IT Services: A Case Study of Universitas Indonesia

Akmal Gafar Putra and Betty Purwandari, Master of Information Technology Program, Universitas Indonesia, Depok, Indonesia

ABSTRACT

Based on the Horizon Scan Report 2021 by BSI, the top six threats to organizations today are pandemics, health incidents, safety incidents, IT and telecommunications outages, cyber-attacks, and extreme weather. Universitas Indonesia (UI), as a modern, comprehensive, and open campus, strives to become a leading research university globally. As the IT service manager at UI, the Directorate of Information Systems and Technology (DSTI) has the task of strengthening service management by implementing risk management and security management in line with relevant laws and policies. The main problem for DSTI as the IT service provider at UI is that there are no documents related to risk management and information security management, resulting in IT service failures. This year, there have been four data center failures due to power and UPS problems. DSTI wants to improve IT services at UI by implementing risk management and a Business Continuity Management System (BCMS). This study aims to conduct a risk analysis to design a Business Continuity Plan (BCP) for IT services at Universitas Indonesia. The research was conducted both qualitatively and quantitatively. The qualitative OCTAVE method was used to compile a list of risks to critical assets in IT services at UI. A quantitative approach was then used to rank the risk list, using a questionnaire and FMEA calculations to obtain a risk priority number. This study separates the risks of general assets and information system assets. For general critical assets, two risks are found at a very high level, one at a high level, eight at a low level, and 12 at a very high level; for information system assets, 12 assets are found with very high risk, three medium, and one low.

KEYWORDS

Risk Analysis, OCTAVE, FMEA, ISO 22301:2019, Business Continuity Plan.
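The FMEA ranking step the abstract describes reduces to a risk priority number, RPN = severity × occurrence × detection, each conventionally scored 1-10. The sketch below uses invented assets and scores purely for illustration, not the study's data.

```python
# Illustrative FMEA risk register: S = severity, O = occurrence, D = detection.
risks = [
    {"asset": "data center UPS", "S": 9, "O": 6, "D": 3},
    {"asset": "core switch",     "S": 8, "O": 3, "D": 4},
    {"asset": "backup jobs",     "S": 6, "O": 5, "D": 7},
]
for r in risks:
    r["RPN"] = r["S"] * r["O"] * r["D"]   # risk priority number

# Rank from highest to lowest priority for BCP attention.
for r in sorted(risks, key=lambda r: r["RPN"], reverse=True):
    print(f'{r["asset"]:>15}: RPN = {r["RPN"]}')
```

A high-RPN item (likely, severe, and hard to detect) is what the BCP's recovery procedures are written against first.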


Public E-Service Adoption in India: Analyzing Critical Institutional Factors

Syed Taha Owais1 and Tanvir Ahmad2, 1National Informatics Centre, New Delhi, India, 2Faculty of Engineering, Jamia Millia Islamia, New Delhi, India

ABSTRACT

The façade of public e-service rests on many disparate pillars, and the onus of its stability lies on the shoulders of public administrators. But this is not a one-man job and requires institutional intervention. Researchers view the problem of adoption through the lens of citizens, but in practice a public e-service cannot be sustained unless it is adopted by public administrators. This has led to a gap between the conceptualization of adoption and its implementation. This paper analyzes institutional factors from the practitioner's perspective. Institutional determinants are identified through a bibliographic study, broad research questions are framed, and the determinants are then ranked through an online survey and analyzed in order of relative importance index. The large number of domain registrations, the higher number of online transactions, and the improvement of India's international ranking confirm the moderating effect of institutional factors on the adoption of public e-services in India. The findings of this research have applied and managerial implications for public administrators.

KEYWORDS

Public e-service, adoption, institutional factors, digital India initiatives, Aadhaar.
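The relative importance index used for the ranking step is conventionally RII = ΣW / (A × N), where W is each respondent's rating on a 1..A scale and N is the number of respondents. The factors and responses below are illustrative stand-ins, not the paper's survey data.

```python
def relative_importance_index(ratings, max_weight=5):
    """RII = sum(W) / (A * N) for ratings on a 1..A Likert scale."""
    return sum(ratings) / (max_weight * len(ratings))

# Illustrative survey responses for three hypothetical institutional factors.
survey = {
    "leadership commitment": [5, 4, 5, 3, 4],
    "legal framework":       [4, 4, 3, 3, 4],
    "funding continuity":    [5, 5, 4, 4, 5],
}
ranked = sorted(survey, key=lambda f: relative_importance_index(survey[f]),
                reverse=True)
for f in ranked:
    print(f, round(relative_importance_index(survey[f]), 2))
```

Sorting factors by RII yields the "order of relative importance" in which the paper then analyzes them.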


Proposal of Combined LSTM+GAN Approach for Video Anomaly Detection

Mahmudul Huq, Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany

ABSTRACT

Video surveillance has recently been widely applied due to heightened safety concerns. Considering the inefficiency and cost of traditional human surveillance, automated anomaly detection systems are gaining increasing interest from academia and industry. Speedy abnormal event detection meets the growing demand to process an enormous number of surveillance videos. Based on the inherent redundancy of video structures, we propose an efficient learning framework combining the Long Short-Term Memory (LSTM) architecture and the Generative Adversarial Network (GAN) architecture.

KEYWORDS

Anomaly Detection, Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Generative Adversarial Network (GAN).


Lungs Disease Identification in CXR Images Using Deep Learning

Ramu Mullangi, Tejasri Javvadi and Bhavani Pappu, RGUKT Srikakulam, India

ABSTRACT

COVID-19 and Tuberculosis (TB) are infectious diseases that primarily affect the lungs and share symptoms such as cold, fever, and difficulty breathing. Both diseases can be fatal. Pneumonia, COVID-19, and TB are spread by airborne droplets. Confirming any of these diseases using traditional methods takes considerable time, but both COVID-19 and TB can be identified from chest X-ray images. Moreover, the RT-PCR test for COVID-19 gives some false positives, which is very dangerous. Since a chest X-ray can be obtained quickly and effectively, this method can be used where low latency is required. In this work, we built two deep learning classification models using two different convolutional architectures. The first model classifies images as COVID-19, TB, or Others, where Others may be pneumonia or normal; the second model then classifies pneumonia and normal images.

KEYWORDS

COVID-19, Tuberculosis, Pneumonia, RT-PCR, Chest X-Ray images, convolution, deep learning, classification.
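The two-stage routing described above can be sketched as a simple cascade; the two classifier functions here are invented stubs keyed on a made-up "opacity" score, standing in for the paper's trained CNNs.

```python
def classify_cxr(image, model1, model2):
    """Stage 1 labels a chest X-ray as covid19/tb/others; images labelled
    'others' are routed to stage 2, which separates pneumonia from normal."""
    label = model1(image)              # 'covid19' | 'tb' | 'others'
    if label == "others":
        return model2(image)           # 'pneumonia' | 'normal'
    return label

# Stub models standing in for the trained networks (illustrative only).
m1 = lambda img: "others" if img["opacity"] < 0.5 else "tb"
m2 = lambda img: "pneumonia" if img["opacity"] > 0.2 else "normal"
print(classify_cxr({"opacity": 0.3}, m1, m2))  # -> pneumonia
print(classify_cxr({"opacity": 0.7}, m1, m2))  # -> tb
```

Splitting the four-way decision into two models lets each network specialize, which is the design choice the abstract motivates.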


Research on the Model of Interaction and Coupling Relationship between Standard System Construction and Process Management

Wang Chaofan1 and Chen Xinyue2, 1School of Economics and Management, North China Electric Power University, Beijing, China, 2School of Quality and Standardization, Qingdao University, Shandong, China

ABSTRACT

The fundamental motivation for enterprises to build a standard system is to meet the subjective need for the unique uses of standards, such as benchmarking and criteria, from input to output in the baseline relationships of all their business processes. Assessing the credibility and value of the standard system requires collaborative and mature processes to mediate cognition. First, this paper clarifies the philosophical relationship between standard system construction and process management using general system structure theory. Second, it systematically summarizes the interaction mechanism between the two from the perspective of methodology. Finally, it designs a conceptual model with the "human regulation" composite system as the core, coupling standard system construction with the outer edge of process management, in order to provide a new integration idea for enterprise standardization management and process management to jointly realize optimal value utility.

KEYWORDS

Standardization Discipline, Construction of Standard System, Process Management, Coupling Model.


Comparative Analysis of Automatic Digital Modulation Classification using Deep Learning

Kamala Sudharani1, Javvadi Tejasri2 and Kommana Leelavathi3, 1Assistant Professor, Department of ECE, RGUKT Srikakulam, Andhra Pradesh, India, 2,3Department of ECE, RGUKT Srikakulam, Andhra Pradesh, India

ABSTRACT

Automatic Modulation Classification (AMC) is a key technology of non-cooperative communication systems, which has played a prominent role in military, security, and civilian telecommunication applications for decades. Traditional approaches such as likelihood-based and feature-based algorithms have been widely studied for AMC. These algorithms are limited to a set of modulations and SNR levels and require signal and channel parameters in advance. The purpose of this research is to use deep learning algorithms for AMC. Deep learning (DL) is an elegant classification technique with outstanding success in many application domains, and DL-based AMC methods with outstanding performance have recently been proposed. In this paper, we utilize a CNN for Automatic Digital Modulation Classification (ADMC) and compare our network with DNN, ResNet, and Inception models. The CNN model shows comparable classification accuracy without the need for manual feature selection. Model accuracy is compared across SNR values ranging from -20 dB to 18 dB in steps of 2 dB. Finally, we address the issue of training the CNN to differentiate between QAM16 and QAM64.

KEYWORDS

Automatic digital modulation classification, Deep learning, Convolutional neural network, ResNet, Inception, Deep neural network.


Dempster-Shafer and Multi-Focus Image Fusion using Local Distance

Ias Sri Wahyuni1 and Rachid Sabre2, 1University of Burgundy, Gunadarma University, 2Laboratory Biogéosciences CNRS, University of Burgundy/Agrosup Dijon, France

ABSTRACT

The aim of multi-focus image fusion is to integrate images with different objects in focus so as to obtain a single image with all objects in focus. In this paper, we present a novel multi-focus image fusion method using Dempster-Shafer Theory based on local variability (DST-LV). The method takes into consideration the information in the region surrounding each pixel. Indeed, at each pixel, the method exploits the local variability calculated from the quadratic differences between the value of pixel I(x, y) and the values of all pixels belonging to its neighbourhood. Local variability is used to determine the mass function. In this work, two classes in Dempster-Shafer Theory are considered: the blurred part and the focused part. We show that our method gives significant results.

KEYWORDS

Multi-focus images, Dempster-Shafer Theory, local distance.
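The local-variability computation at the heart of the method can be sketched as below. The per-pixel "take the more variable source" fusion is a deliberate simplification: high local variability is used directly as evidence for the "in focus" class, whereas the paper builds full mass functions and combines them with Dempster's rule.

```python
import numpy as np

def local_variability(img, radius=1):
    """Sum of quadratic differences between each pixel and its neighbours."""
    img = img.astype(float)
    padded = np.pad(img, radius, mode="edge")
    var = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[radius + dy : radius + dy + img.shape[0],
                             radius + dx : radius + dx + img.shape[1]]
            var += (img - shifted) ** 2
    return var

def fuse(img_a, img_b):
    """Pick, per pixel, the source whose neighbourhood varies more."""
    va, vb = local_variability(img_a), local_variability(img_b)
    return np.where(va >= vb, img_a, img_b)

# Toy demo: image A is "sharp" (textured) on the left half, B on the right.
texture = np.tile([0.0, 10.0], (8, 2))           # stand-in for in-focus detail
flat = np.full((8, 4), 5.0)                      # stand-in for blur
a = np.hstack([texture, flat])
b = np.hstack([flat, texture])
fused = fuse(a, b)
print(np.array_equal(fused[:, :4], a[:, :4]),
      np.array_equal(fused[:, 4:], b[:, 4:]))    # -> True True
```

Blurring suppresses neighbourhood differences, so the sharper source wins at each pixel; the DST machinery adds a principled way to handle ambiguous pixels instead of this hard choice.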


A Novel Attention-Based Network for Fast Salient Object Detection

Bin Zhang, Xiaojing Zhang and Ming Ma, College of Computer Science and Engineering, Inner Mongolia University, Hohhot, China

ABSTRACT

In current salient object detection networks, the most popular method is the U-shaped structure. However, the massive number of parameters leads to high consumption of computing and storage resources. In this paper, we propose a new deep convolutional network architecture with three contributions: (1) using smaller convolutional neural networks (CNNs) to compress the model in our improved salient object features compression and reinforcement extraction module (ISFCREM), reducing the number of model parameters; (2) introducing a channel attention mechanism to weigh different channels, improving the ability of feature representation; and (3) applying a new optimizer that accumulates long-term gradient information during training to adaptively tune the learning rate. The results demonstrate that the proposed method can compress the model to nearly one third of its original size without losing accuracy, while converging faster and more smoothly on six widely used salient object detection datasets, compared with state-of-the-art models.

KEYWORDS

Salient Object Detection, Optimization Strategy, Deep Learning, Model Compression, Vision Attention.


Combining Evidences from Auditory, Instantaneous Frequency and Random Forest for Anti-Noise Speech Recognition

Kun Liao, China Power Complete Equipment Co., Ltd.

ABSTRACT

Environmental noise can threaten the stable operation of current speech recognition systems, so it is essential to develop robust systems able to identify speech under low signal-to-noise ratios. Owing to the shortcomings of acoustic feature parameters in speech signals and the limitations of existing acoustic features in characterizing the integrity of speech information, this paper proposes a speech recognition method combining spectral subtraction, auditory feature extraction, energy features, and a random forest classifier. The method first extracts novel auditory features based on cochlear filter cepstral coefficients (CFCC) and instantaneous frequency (IF), i.e., CFCCIF. Spectral subtraction is then introduced at the front end of feature extraction, and the extracted features are called enhanced auditory features (EAF). An energy feature, the Teager energy operator (TEO), is also extracted; the combination of the two is known as a fusion feature. Linear discriminant analysis (LDA) is then applied for feature selection and optimization of the fusion feature. Finally, a random forest (RF) is used as the classifier in a speaker-independent, isolated-word, small-vocabulary speech recognition system. On a Korean isolated-word database, the proposed EAF features, after fusion with Teager energy features, show strong robustness in noisy conditions. As our experiments show, the optimized features achieve a high recognition rate and excellent anti-noise performance in a speech recognition task.

KEYWORDS

Cochlear filter cepstral coefficients, Teager energy features, Linear discriminant analysis, Random forest, Speech recognition.
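The spectral-subtraction front end mentioned above can be sketched in a few lines. This is the generic textbook form for a single frame, assuming a noise magnitude spectrum estimated from a noise-only segment; the paper's exact front end is not reproduced here.

```python
import numpy as np

def spectral_subtraction(frame, noise_mag, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from one frame,
    flooring negative magnitudes, and resynthesize with the noisy phase."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)  # spectral floor
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

# Toy demo: a sine buried in white noise; the noise magnitude is estimated
# from a separate noise-only segment, as a front end would do in silence.
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
tone = np.sin(2 * np.pi * 40 * t / n)
noise = 0.5 * rng.standard_normal(n)
noise_mag = np.abs(np.fft.rfft(0.5 * rng.standard_normal(n)))
enhanced = spectral_subtraction(tone + noise, noise_mag)
print(float(np.std(noise)), float(np.std(enhanced - tone)))  # error shrinks
```

Cleaning the spectrum before CFCC extraction is what turns the auditory features into the "enhanced" EAF variant the abstract describes.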

Contact Us

bios_conf@yahoo.com