7th International Conference on Computer Science, Engineering and Information Technology (CSEIT 2020)

September 26 ~ 27, 2020, Copenhagen, Denmark

Accepted Papers

Learning for E-Learning

Carsten Lecon1 and Marc Hermann2, 1Department of Computer Science, Media Computer Science, Aalen University, Germany and 2Department of Computer Science, User Experience, Aalen University, Germany

ABSTRACT

In response to the heterogeneity of students' prior knowledge at the beginning of their studies, we present a solution from which undergraduate as well as advanced students (hopefully) will benefit: ‘AdLeR’ (Additive Learning Resources), a tool for the rapid generation of small e-learning courses. Undergraduates can catch up on missing knowledge through our mini courses (self-regulated). Advanced students are involved in the development of the tool or in the creation of learning material suited for self-regulated learning. When implementing the tool, the students have to deal with various aspects of computer science domains, which consolidates their knowledge and competences.

KEYWORDS

E-Learning, self-regulated learning, learning by teaching, XML, learning path, search functionality.


Magnetic Resonance Image Classification of Major Depressive Disorder Based on Deep Learning

Yu Wang, Changyang Fu, Chongchong Yu, Weijun Su, School of Computer and Information Engineering, Beijing Technology and Business University, Beijing, China

ABSTRACT

Major depressive disorder is one of the diseases with the highest rates of disability and morbidity. However, there are no effective biological characteristics or methods to help doctors diagnose depression accurately and quickly. In the field of medical image processing, analysing neuroimaging data such as structural magnetic resonance images to extract pathological patterns, such as depression, is known to be a challenging research problem. The success of deep learning in computer vision has pushed this type of research further. In this paper, we extend state-of-the-art deep learning models from computer vision to 3D form to better extract representative features from 3D data, and apply them to the diagnosis of depression for the first time. On this basis, we design a novel transfer learning method, in which the networks are initialized with pre-trained weights from a similar larger dataset and then fine-tuned, to solve the problems caused by insufficient data. The experimental results show that the proposed networks achieve excellent results and that transfer learning further improves the classification of depression versus healthy control subjects compared with frontier methods, which fully verifies the effectiveness and superiority of our approach.
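A minimal PyTorch sketch (not the authors' code; the checkpoint path and all settings are illustrative) of the transfer-learning recipe described above: a small 3D CNN whose weights are initialized from a pre-trained checkpoint and then fine-tuned end to end.

```python
# Hypothetical sketch of 3D-CNN transfer learning for binary sMRI classification.
import torch
import torch.nn as nn

class CNN3D(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                       # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

model = CNN3D()
# In practice the weights come from pre-training on a larger, similar dataset;
# here we just save and reload a checkpoint to keep the sketch self-contained.
torch.save(model.state_dict(), "pretrained_3dcnn.pt")
model.load_state_dict(torch.load("pretrained_3dcnn.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
loss_fn = nn.CrossEntropyLoss()
```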

KEYWORDS

Depression, Structural Magnetic Resonance Image, Deep Learning, Transfer Learning, Classification.


A Novel Mobile ECG Sensor with Wireless Power Transmission for Untact Health Monitoring

Jin-Chul Heo, Eun-Bin Park, Chan-Il Kim, Hee-Joon Park and Jong-Ha Lee, Department of Biomedical Engineering, School of Medicine, Keimyung University, Daegu, Korea

ABSTRACT

For electromagnetic-induction wireless power transmission using an elliptical receiving coil, we investigated changes in magnetic field distribution and power transmission efficiency caused by changes in the positions of the transmitting and receiving coils. Simulation results obtained with a high-frequency structure simulator were compared with actual measurements, showing that even when the alignment between the transmitting and receiving coils changes to some extent, the simulated transmission efficiency remains relatively stable. Transmission efficiency was maximal when the center of the receiving coil was perfectly aligned with the center of the transmitting coil; the reduction in efficiency was small while the receiving coil stayed within ±10 mm of the transmitting coil's center, but efficiency dropped sharply beyond 10 mm. Accordingly, even when perfect alignment is not maintained, the performance of the wireless power transmission system is not significantly reduced as long as the misalignment remains small. These results suggest a standardized application method for wireless power transmission in implantable sensors.

KEYWORDS

ECG, Implantable sensors, Simulation, Power transmission efficiency, Wireless power transmission.


Factors Influencing Traders’ Continuous Usage Intention to E-transaction Cards in Wholesale Markets of Agriproducts: An Empirical Case Study in China

Xuechao Sui1 and Xianhui Geng2, 1School of Economics, Hefei University of Technology, Hefei, P.R. China, 2College of Economics and Management, Nanjing Agricultural University, Nanjing, P.R. China

ABSTRACT

Generalizing e-transaction services in wholesale markets of agriproducts is seen by the Chinese government as a way to enhance its information services capabilities. These services could also facilitate traders' transactions. We find, however, that traders are reluctant to use e-transaction cards although they have physical access to them. This paper identifies factors influencing traders' continuous usage intention toward e-transaction cards in wholesale markets of agriproducts in China. Data were collected from 204 respondents through a self-administered survey completed by traders and analyzed by structural equation modeling (SEM). The results indicate that perceived ease of use (PEOU), perceived usefulness (PU) and perceived privacy security directly influence traders' continuous usage intention. Moreover, PEOU and perceived transaction security indirectly influence continuous usage intention through PU. This research provides practical guidelines for decision makers to increase traders' intention to use e-transaction cards.

KEYWORDS

Wholesale markets of agriproducts, E-transaction, Continuous usage intention, China.


Evaluating the impact of different types of crossover and selection methods on the convergence of 0/1 Knapsack using Genetic Algorithm

Waleed Bin Owais, Iyad W. J. Alkhazendar and Dr. Mohammad Saleh, Department of Computer Science and Engineering, Qatar University, Doha, Qatar

ABSTRACT

The Genetic Algorithm is an evolutionary metaheuristic introduced to overcome the failure of gradient-based methods on optimization and search problems. The purpose of this paper is to evaluate the impact of crossover and selection methods on the convergence of the Genetic Algorithm for the 0/1 knapsack problem. Keeping the number of generations and the initial population fixed, different crossover methods, such as one-point and two-point crossover, were evaluated and compared with each other. In addition, the impact of different selection methods, namely rank selection, roulette wheel and tournament selection, was evaluated and compared. Our results indicate that, for the 0/1 knapsack instances we considered, the combination of one-point crossover with tournament selection achieves the highest convergence rate and is thereby the most efficient in solving 0/1 knapsack.
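A minimal sketch of the best-performing combination the abstract reports, one-point crossover with tournament selection on 0/1 knapsack; the instance, population size and mutation rate are illustrative, not the paper's setup.

```python
import random

values  = [60, 100, 120, 80, 30]        # illustrative knapsack instance
weights = [10, 20, 30, 25, 5]
CAP, N, POP, GENS = 60, 5, 20, 50

def fitness(ind):
    w = sum(wi for wi, bit in zip(weights, ind) if bit)
    v = sum(vi for vi, bit in zip(values, ind) if bit)
    return v if w <= CAP else 0             # infeasible solutions score 0

def tournament(pop, k=3):                   # tournament selection
    return max(random.sample(pop, k), key=fitness)

def one_point(p1, p2):                      # one-point crossover
    c = random.randrange(1, N)
    return p1[:c] + p2[c:]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop = [one_point(tournament(pop), tournament(pop)) for _ in range(POP)]
    for ind in pop:                         # bit-flip mutation
        if random.random() < 0.1:
            i = random.randrange(N)
            ind[i] ^= 1

best = max(pop, key=fitness)
print(best, fitness(best))
```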

KEYWORDS

Genetic, Crossover, Selection, Knapsack, Roulette, Tournament, Rank, Single Point, Two Point, Convergence


VIRTFUN: Function Offload Methodology to Virtualized Environment

Carlos A Petry, University of Campinas, Brazil

ABSTRACT

The use of virtual machines (VMs) has become popular, with substantial growth in both personal and commercial use, especially supported by the progress of hardware and software virtualization technologies. There are several reasons for this adoption, such as cost, customization, scalability and flexibility. Distinct application domains, such as scientific, financial and industrial, spanning from embedded to cloud systems, take advantage of such machines to meet their computational processing demands. However, there are setbacks: hardware handling, resource use, performance and management. This growth demands effective support from the underlying virtualization infrastructure, which directly affects the capacity of the hosts in the datacenters and cloud environments that support them. Native host processing evidently performs better than VMs, especially when using accelerator devices, where the common solution is to assign each device to a specific VM instead of sharing it among multiple VMs. Beyond performance issues inside the host, we need to consider VM performance when using accelerator devices. In this context, it is necessary to provide efficient mechanisms to manage and run VMs that can take advantage of high-performance devices, like FPGAs, or even of software resources on the host. To address this challenge, this paper proposes VirtFun, a methodology to improve the communication performance of applications running on VMs. To this end, we developed a framework able to offload pieces of an application's code (vFunction) to the host by means of secure data sharing between the application and the device. The results achieved in our experiments demonstrate significant acceleration capacity for the guest application's vFunction. The speedup reached 340% compared to conventional network execution, with a maximum slowdown of 2.8% in the worst case and near 0% in the best case relative to native execution.

KEYWORDS

Virtualization, performance, virtual machine, shared-memory.


COSM: Controlled Over-sampling Method. A Methodological Proposal to Overcome the Class Imbalance Problem in Data Mining

Gaetano Zazzaro, Software Development, Information Management and HPC Lab, CIRA (Italian Aerospace Research Centre), Capua (CE), Italy

ABSTRACT

The class imbalance problem is widespread in Data Mining and can reduce the overall performance of a classification model. Many techniques have been proposed to overcome it, thanks to which a model able to handle rare events can be trained. The methodology presented in this paper, called Controlled Over-Sampling Method (COSM), includes a controller model that rejects new synthetic elements for which there is no certainty of belonging to the minority class. It combines the common machine learning holdout method with an oversampling algorithm, for example the classic SMOTE algorithm. The proposal explained and designed here offers a guideline for the application of oversampling algorithms, as well as a brief overview of techniques for overcoming the class imbalance problem in Data Mining.
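A minimal sketch of the controlled-oversampling idea, assuming SMOTE from imbalanced-learn as the base oversampler and a probability-thresholding classifier as the controller; the rejection rule and data are illustrative, not the paper's exact criterion.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)  # holdout

controller = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# imbalanced-learn appends the synthetic samples after the original ones.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
n_orig = len(X_tr)
X_syn, y_syn = X_res[n_orig:], y_res[n_orig:]

# Controller: keep a synthetic sample only if it is confidently minority-class.
keep = controller.predict_proba(X_syn)[:, 1] >= 0.5
X_final = np.vstack([X_tr, X_syn[keep]])
y_final = np.concatenate([y_tr, y_syn[keep]])
print(f"kept {keep.sum()} of {len(X_syn)} synthetic samples")
```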

KEYWORDS

Class imbalance problem, Data Mining, Holdout Method, Oversampling, Rare Class Mining, Undersampling


A Process for Complete Autonomous Software Display Validation And Testing (Using A Car-cluster)

Malobika Roy Choudhury, Innovation and Technology, SAP Labs India Pvt. Ltd., Bengaluru, Karnataka, India

ABSTRACT

Every product industry goes through a process of product validation before release. Validation can be effortless or laborious depending on the process. In this paper, we define a process that makes the task independent of constant monitoring. This method will not only make the work of test engineers easier but also help the company meet stringent release deadlines with ease. Our method explores how to perform complete visual validation of a display screen using deep learning and image processing. As an example, we apply the method to a car-cluster display screen. Our method breaks down the components of the screen, validates each component against its design, and outputs a result predicting whether the displayed content is correct or incorrect. We use YOLO models, machine learning (positional value approximation), CNNs, and a few image-processing techniques to predict the accuracy of each display component. Together, these algorithms provide consistent results throughout and are currently used to generate results for the validation process.

KEYWORDS

CNN, YOLO, display-validation


Analysis of the Displacement of Terrestrial Mobile Robots in Corridors Using Paraconsistent Annotated Evidential Logic Eτ

Flávio Amadeu Bernardini1, Marcia Terra da Silva1, Jair Minoro Abe1, Luiz Antonio de Lima1 and Kanstantsin Miatluk2, 1Graduate Program in Production Engineering Paulista University, Sao Paulo, Brazil, 2Bialystok University of Technology, Bialystok, Poland

ABSTRACT

This article proposes an algorithm for a servo motor that controls the movement of an autonomous terrestrial mobile robot using Paraconsistent Logic. The design process of mechatronic systems guided the robot construction phases. The project monitors the robot through its sensors, which send positioning signals to the microcontroller. The signals are adjusted by an embedded technology interface based on the concepts of Paraconsistent Annotated Logic, acting directly on the servo steering motor. The electric signals sent to the servo motor were analyzed, and the results indicate that the paraconsistent algorithm can contribute to increasing the precision of servo motor movements.
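In Paraconsistent Annotated Evidential Logic Eτ, each reading is annotated with a favorable evidence degree μ and an unfavorable degree λ. A minimal sketch (with illustrative thresholds, not the authors' controller) of the usual certainty and contradiction degrees such a control algorithm decides on:

```python
def paraconsistent_state(mu, lam, threshold=0.5):
    """mu, lam in [0, 1]: favorable / unfavorable evidence from the sensors."""
    certainty = mu - lam              # degree of certainty
    contradiction = mu + lam - 1.0    # degree of contradiction
    if certainty >= threshold:
        return "true: actuate servo"
    if certainty <= -threshold:
        return "false: hold position"
    if contradiction >= threshold:
        return "inconsistent: conflicting sensor evidence"
    if contradiction <= -threshold:
        return "paracomplete: insufficient evidence"
    return "undefined: keep sampling"

print(paraconsistent_state(0.9, 0.1))   # -> true: actuate servo
```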

KEYWORDS

Paraconsistent annotated logic, Servo motor, Autonomous terrestrial mobile robot, Robotics.


A Study on the Minimum Requirements for the On-line, Efficient and Robust Validation of Neutron Detector Operation and Monitoring of Neutron Noise Signals using Harmony Theory Networks

Tatiana Tambouratzis1, Laurent Pantera2 and Petr Stulik3, 1Department of Industrial Management & Technology, University of Piraeus, Piraeus 185 34, Greece, 2Laboratoire des Programmes Expérimentaux et des Essais en Sûreté, CEA/DES/IRESNE/DER/SPESI/LP2E, Cadarache, F-13108 Saint-Paul-Lez-Durance, France, 3Diagnostics and Radiation Safety Department, ÚJV Řež a.s., Hlavní 130, Řež, 250 68 Husinec, Czech Republic

ABSTRACT

On-line monitoring (OLM) of nuclear reactors (NRs) incorporates – among other priorities – the concurrent verification of (i) valid operation of the NR neutron detectors (NDs) and (ii) soundness of the captured neutron noise (NN) signals (NSs) per se. In this piece of research, efficient, timely, directly reconfigurable and non-invasive OLM is implemented for providing swift – yet precise – decisions upon the (i) identities of malfunctioning NDs and (ii) locations of NR instability/unexpected operation. The use of Harmony Theory Networks (HTNs) is put forward to this end, with the results demonstrating the ability of these constraint-satisfaction artificial neural networks (ANNs) to identify (a) the smallest possible set of NDs which, configured into (b) the minimum number of 3-tuples of NDs operating on (c) the shortest NS time-window possible, instigate maximally efficient and accurate OLM. A proof-of-concept demonstration on the set of eight ex-core NDs and corresponding NSs of a simulated Pressurized Water nuclear Reactor (PWR) exhibits (i) significantly higher efficiency, at (ii) no detriment to localization accuracy, when employing only (iii) half of the original NDs and corresponding NSs, which are configured in (iv) a total of only two (out of the 56 combinatorially possible) 3-tuples of NDs. Follow-up research shall investigate the scalability of the proposed methodology on the more extensive and homogeneous (i.e. “harder” in terms of ND/NS cardinality as well as of ranking/selection) dataset of the 36 in-core NSs of the same simulated NR.

KEYWORDS

Nuclear Reactor (NR), On-Line Monitoring (OLM), Neutron Noise (NN), Neutron Noise Signal (NS), Neutron Detector (ND), Computational Intelligence (CI), Artificial Neural Network (ANN), Harmony Theory Network (HTN), 3-tuple of NDs/NSs.


Penalized Bootstrapping for Reinforcement Learning in Robot Control

Christopher Gebauer and Maren Bennewitz, Humanoid Robots Lab, University of Bonn, Bonn, Germany

ABSTRACT

The recent progress in reinforcement learning algorithms has enabled more complex tasks and, at the same time, enforced the need for a careful balance between exploration and exploitation. Enhanced exploration reduces the need to heavily constrain the agent, e.g., with complex reward functions. This is highly promising, as it reduces the work of learning new tasks while improving the agent's performance. In this paper, we address deep exploration in reinforcement learning. Our approach is based on Thompson sampling and keeps multiple hypotheses of the posterior knowledge. We maintain the distribution over the hypotheses by a potential-field-based penalty function. The resulting policy collects more reward, and our method is faster in application and training than the current state of the art. We evaluate our approach in low-level robot control tasks to back up our claims of a more performant policy and a faster training procedure.
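A minimal bandit-level sketch of bootstrapped Thompson sampling, the mechanism underlying the approach (the potential-field penalty over hypotheses is omitted; all settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.8]            # illustrative 3-armed Bernoulli bandit
K, ARMS = 5, len(true_means)            # K bootstrap heads = posterior hypotheses

counts = np.ones((K, ARMS))
values = np.zeros((K, ARMS))

for step in range(2000):
    head = rng.integers(K)              # Thompson sampling: draw one hypothesis
    arm = int(np.argmax(values[head]))  # act greedily w.r.t. that hypothesis
    reward = float(rng.random() < true_means[arm])
    for k in range(K):                  # bootstrap: each head sees the sample w.p. 0.5
        if rng.random() < 0.5:
            counts[k, arm] += 1
            values[k, arm] += (reward - values[k, arm]) / counts[k, arm]

print("estimated means per head:\n", values.round(2))
```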

KEYWORDS

Deep Reinforcement Learning, Deep Exploration, Thompson Sampling, Bootstrapping.


Deep Reinforcement Learning for Navigation in Cluttered Environments

Peter Regier, Lukas Gesing, and Maren Bennewitz, Humanoid Robots Lab, University of Bonn, Bonn, Germany

ABSTRACT

Collision-free motion is essential for mobile robots. Most approaches to collision-free and efficient navigation with wheeled robots require parameter tuning by experts to obtain good navigation behavior. In this paper, we aim at learning an optimal navigation policy by deep reinforcement learning to overcome this manual parameter tuning. Our approach uses proximal policy optimization to train the policy and achieve collision-free and goal-directed behavior. The outputs of the learned network are the robot's translational and angular velocities for the next time step. Our method combines path planning on a 2D grid with reinforcement learning and does not need any supervision. The network is first trained in a simple environment and then transferred to scenarios of increasing complexity. We implemented our approach in C++ and Python for the Robot Operating System (ROS) and thoroughly tested it in several simulated as well as real-world experiments. The experiments illustrate that our trained policy can be applied to solve complex navigation tasks. Furthermore, we compare the performance of our learned controller to the popular dynamic window approach (DWA) of ROS. As the experimental results show, a robot controlled by our learned policy reaches the goal significantly faster than with the DWA by closely bypassing obstacles and thus saving time.


New Hybrid Artificial Intelligence Models Based on Optimized Support Vector Machine and Locally Linear Neuro-fuzzy for the Supplier Assessment Problem

Hasti Mirhadi and Ali Rafiee, Department of Mathematics, Islamic Azad University, Tehran, Iran

ABSTRACT

In a sustainable logistics network, organizations need a systematic decision system to help them make the right decisions. Among strategic decisions, supplier assessment ranks among the most significant. Moreover, making such a strategic decision involves investigating several criteria, which adds to the complexity of decision making in the logistics network. To handle non-linear regression problems, two hybrid neural network models, the least-squares support vector machine (LS-SVM) and the locally linear neuro-fuzzy model, both with high generalization capacity, have been implemented effectively. First, the performance of the LS-SVM is known to vary considerably depending on the careful selection of its parameters. In this paper, a variable neighborhood search (VNS), an effective meta-heuristic algorithm for real-world continuous engineering optimization problems, is therefore integrated with the LS-SVM. Second, a locally linear neuro-fuzzy (LLNF) model, which draws on the concepts of neural networks and fuzzy set theory concurrently, is introduced to predict the performance ratings of suppliers. The presented model is trained by a locally linear model tree learning method. To show the improved performance of our proposed integrated models, a real data set from a case study of the supplier assessment problem is presented. Moreover, comparative evaluations between our proposed models and conventional techniques are given. The experimental results clearly show that our hybrid models outperform them in terms of estimation accuracy and effective prediction.

KEYWORDS

Least-squares support vector machine, locally linear neuro-fuzzy, hybrid models, supplier assessment.


Aerodynamic modeling of 3D compressor blades based on XGBoost

Shurong Hao1, Mingming Zhang2, 1School of Mathematics, Faculty of Science, Beijing University of Technology, Beijing 100124, China, 2School of Energy and Power, Beihang University, Beijing 100191, China

ABSTRACT

In order to quickly obtain the aerodynamic characteristics of the surface of a three-dimensional compressor blade, a method for modeling the aerodynamic force on the blade based on the XGBoost algorithm is proposed. The method obtains the aerodynamic response of the blade through Computational Fluid Dynamics (CFD) simulation and trains an XGBoost model on the resulting data to establish a three-dimensional blade aerodynamic prediction model. To test the accuracy of the XGBoost model, the root mean square error (RMSE) and the coefficient of determination R2 are introduced as evaluation indicators. The experimental results show that the predictions of this model are in good agreement with direct CFD results. The method only needs a single unsteady CFD calculation to obtain the aerodynamic load distribution at different positions on the blade surface, which greatly improves computational efficiency and facilitates the analysis of aeroelastic stability parameters in the initial stage of compressor design. The XGBoost algorithm thus provides an effective route to efficient, high-precision aeroelastic analysis.
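A minimal sketch of the surrogate-modeling step, with synthetic data standing in for the CFD-derived features and aerodynamic loads; the metrics are the ones named in the abstract.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Synthetic stand-in for CFD samples: blade-surface position -> aerodynamic load.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))                      # e.g. (x, y, z) surface position
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.01, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.1)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5         # RMSE and R2 from the abstract
print(f"RMSE = {rmse:.4f}, R2 = {r2_score(y_te, pred):.4f}")
```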

KEYWORDS

XGBoost, reduced-order model, compressor, system identification, aeroelasticity, CFD.


Sequential Minimal Optimization for One-Class Slab Support Vector Machines

Sourin Chakrabarti, Aashutosh Khandelwal and Prof. O.P. Vyas, IIIT Allahabad, India

ABSTRACT

One-Class Slab Support Vector Machines (OCSSVM) have turned out to be more accurate on certain classes of classification problems than traditional SVMs, One-Class SVMs, and other one-class classifiers. This paper proposes a fast training method for One-Class Slab SVMs using an updated Sequential Minimal Optimization (SMO), which divides the multi-variable optimization problem into subproblems of size two that can then be solved analytically. The results indicate that this training method scales better to large training sets than other Quadratic Programming (QP) solvers.
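For orientation, a sketch of the analytic size-two update in classic SMO for a standard soft-margin SVM dual; the one-class slab dual in the paper differs, but the core idea, solving each two-variable subproblem in closed form, is the same. The demo data are illustrative.

```python
import numpy as np

def smo_pair_update(i, j, alpha, X, y, C, b):
    """One analytic SMO step on the pair (i, j) for a linear-kernel soft-margin SVM."""
    K = X @ X.T
    f = (alpha * y) @ K + b                     # current decision values
    E_i, E_j = f[i] - y[i], f[j] - y[j]         # prediction errors
    eta = K[i, i] + K[j, j] - 2 * K[i, j]       # curvature of the 2-variable subproblem
    if eta <= 0:
        return alpha                            # skip degenerate pairs
    if y[i] == y[j]:                            # box constraints from 0 <= alpha <= C
        L, H = max(0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    else:
        L, H = max(0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    a_j = np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H)  # analytic optimum, clipped
    alpha = alpha.copy()
    alpha[i] += y[i] * y[j] * (alpha[j] - a_j)  # keep the equality constraint satisfied
    alpha[j] = a_j
    return alpha

X = np.array([[1.0, 0], [0, 1], [-1, 0], [0, -1]])
y = np.array([1.0, 1, -1, -1])
alpha = smo_pair_update(0, 2, np.zeros(4), X, y, C=1.0, b=0.0)
print(alpha)
```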

KEYWORDS

Support Vector Machine, One Class Slab Support Vector Machine, Sequential Minimal Optimization.


Non-negative Matrix Factorization of Story Watching Time of Tourists for Best Sightseeing Spot and Preference

Motoki Seguchi1, Fumiko Harada2 and Hiromitsu Shimakawa1, 1College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan, 2Connect Dot Ltd., Kyoto, Japan

ABSTRACT

In this research, we propose a method of recommending the best sightseeing spot through the watching of stories about sightseeing spots. It predicts a target tourist's rating for each sightseeing spot by applying Non-negative Matrix Factorization to the story watching times and ratings of tourists. We also propose to estimate the degree of the target tourist's preference for a sightseeing spot. Tourists visit a sightseeing spot for a certain purpose of tourism, and their preferences appear prominently in these purposes. In addition, the degree of a tourist's preference differs from one sightseeing spot to another. If we can estimate a tourist's degree of preference, it becomes possible to recommend a sightseeing spot that fulfills his or her purpose of tourism.
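A minimal sketch of the rating-prediction step, assuming a small tourist-by-spot rating matrix derived from story watching times (values illustrative; unwatched entries are treated as zeros here for simplicity, unlike a proper missing-value factorization):

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows: tourists, columns: sightseeing spots; entries: ratings derived from
# story watching times (0 = not yet watched, to be predicted).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(R)        # tourist x latent-preference factors
H = model.components_             # latent-preference x spot factors
R_hat = W @ H                     # predicted ratings, including the missing cells

target = 1                        # recommend the unwatched spot with the top prediction
unseen = np.where(R[target] == 0)[0]
print("recommend spot", unseen[np.argmax(R_hat[target, unseen])])
```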

KEYWORDS

Sightseeing, Recommendation, Interest Estimation, Story Watching, Preference.


Deep Learning for Multiple Stages Heart Disease Prediction

Khalid Amen, Mohamed Zohdy, and Mohammed Mahmoud, Oakland University, USA

ABSTRACT

According to the Centers for Disease Control and Prevention (CDC), heart disease is the number one cause of death for men, women, and people of most racial and ethnic groups in the United States. More than one person dies from it every minute and nearly half a million die each year, costing billions of dollars annually. Previous deep learning approaches have been used to predict whether patients have heart disease. The purpose of this work is to predict the five stages of heart disease: no disease, stage 1, stage 2, stage 3, and advanced or severe heart disease. We investigate different supervised models trained by deep learning algorithms and determine which of these models has the best accuracy. In this paper, we describe and investigate five deep learning algorithms with hyperparameters that maximize classifier performance to show which one best predicts the stage at which a person is determined to have heart disease. This prediction can facilitate every step of patient care, reducing the margin of error and contributing to precision medicine.
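A minimal sketch of a five-class, multi-model comparison using the classifier families listed in the keywords; synthetic data stands in for the Cleveland dataset and the paper's tuned hyperparameters are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: 5 classes = no disease, stages 1-3, severe disease.
X, y = make_classification(n_samples=600, n_classes=5, n_informative=8,
                           random_state=0)

models = {
    "GTB": GradientBoostingClassifier(),
    "RF":  RandomForestClassifier(),
    "ERF": ExtraTreesClassifier(),
    "SVM": SVC(),
    "LR":  LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```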

KEYWORDS

Deep Learning, ML, CNN, DNN, RNN, Jupyter, Python, Cleveland Dataset, Gradient Tree Boosting (GTB), Random Forest (RF), Support Vector Machine (SVM), Extra Random Forest (ERF), Logistic Regression (LR).


Open-vocabulary Recognition of Offline Arabic Text

Zouhaira Noubigh1, Anis Mezghani2 and Monji Kherallah3, 1Higher Institute of Computer Science and Communication Technologies, University of Sousse, Tunisia, 2Higher Institute of Industrial Management, University of Sfax, Tunisia, 3Faculty of Sciences of Sfax, University of Sfax, Tunisia

ABSTRACT

Offline Arabic text recognition is a substantial problem with several important applications. It has attracted considerable attention and has become one of the challenging areas of research in the field of document image processing. Deep Neural Network (DNN) algorithms provide great performance improvements in sequence recognition problems such as speech and handwriting recognition. This paper focuses on recent DNN-based Arabic handwriting recognition research. Our contribution is a new model that combines a CNN and a BLSTM with CTC beam search, used for the first time in Arabic handwriting recognition. Although the proposed system is an open-vocabulary approach, its results outperform other works that use a word lexicon or a language model.

KEYWORDS

Deep learning, Handwriting Arabic text, open vocabulary, CNN, BLSTM.


LebanonUprising: A Thorough Study of Lebanese Tweets

Reda Khalaf and Mireille Makary, Department of Computer Science and Information Technology, Lebanese International University, Beirut, Lebanon

ABSTRACT

Recent studies have shown huge interest in sentiment analysis on social networks such as Twitter, to study how users feel about a certain topic. In this paper, we conducted a sentiment analysis study of tweets in spoken Lebanese Arabic related to the #LebanonUprising hashtag (#لبنان_ينتفض), which was trending during a socio-economic revolution that started in October, using different machine learning algorithms. The dataset was manually labelled to measure precision and recall metrics and to compare the different algorithms. Furthermore, the work completed in this paper provides two more contributions: the first is building a Lebanese to Modern Standard Arabic (فصحى) mapping dictionary, and the second is an attempt to detect sarcastic and funny emotions in the tweets using emojis. The results we obtained seem satisfactory, especially considering that, to our knowledge, no previous or similar work involving Lebanese Arabic tweets had been done.

KEYWORDS

Lebanese Arabic tweets, sentiment analysis, machine learning, emotions, emojis.


Survey on Federated Learning Towards Privacy Preserving AI

Sheela Raju Kurupathi1 and Wolfgang Maass1,2, 1German Research Center for Artificial Intelligence, Saarbrücken, Germany, 2Saarland University, Saarbrücken, Germany

ABSTRACT

One of the significant challenges of Artificial Intelligence (AI) and machine learning models is to preserve data privacy and ensure data security. Addressing this problem has led to the application of the Federated Learning (FL) mechanism for preserving data privacy. In the European Union (EU), preserving user privacy has to abide by the General Data Protection Regulation (GDPR); therefore, machine learning models that aim to preserve data privacy have to take the GDPR into consideration. In this paper, we present a detailed account of federated machine learning and various federated architectures, along with different privacy-preserving mechanisms. The main goal of this survey is to highlight existing privacy techniques and to propose applications of Federated Learning in industry. Finally, we also describe how Federated Learning is an emerging area of future research that would bring a new era in AI and machine learning.
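A minimal sketch of Federated Averaging (FedAvg), the canonical FL mechanism such surveys cover: clients train locally on private data and only model weights leave the device (pure NumPy, linear model, illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """Client-side gradient descent on private data; only weights are shared."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients with private datasets drawn from the same linear model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

w_global = np.zeros(2)
for rnd in range(20):                        # communication rounds
    local = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local, axis=0, weights=sizes)   # FedAvg aggregation

print("learned:", w_global.round(3), "true:", true_w)
```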

KEYWORDS

Federated Learning, Artificial Intelligence, Machine Learning, Privacy, Security, Distributed Learning.


The Effective Learning Outcomes of Management Information Systems for Postgraduate and Undergraduate Students

Hlaing Htake Khaung Tin, Khin Shin Thant, Myat Mon Khine, Thet Thet Aung and Khin Lay Myint, Faculty of Information Science, University of Computer Studies, Hinthada, Myanmar

ABSTRACT

An information system can deliver detailed evidence that supports society and business firms in decision making, controlling operations, analysing problems, and creating new products or services. This paper presents a survey of the management information systems (MIS) subject among computer science students, aimed at improving the decision-making process. The main purpose of this survey is to help computer science students use software in real-world settings for improving decision making. The other objectives of this research are (1) establishing good communication between students and teachers, offering them the chance to be motivated and engaged in the information-systems learning process, (2) improving the teaching and learning approach for the MIS subject, and (3) encouraging students to show their abilities and participate in classwork. The research design is qualitative, based on an open-ended study questions format completed by postgraduate and undergraduate computer science students at the Computer University.

KEYWORDS

Management Information Systems, Computer Science Students, Decision Making, Learning Outcomes, Investigating, Postgraduate, Undergraduate.


IoT Learning Model for Smart Universities: Architecture, Challenges, and Applications

Amr Adel, Whitecliffe College of Technology & Innovation, Auckland, New Zealand

ABSTRACT

The utilization and implementation of Internet of Things based technology in the learning environment is effective, as most activities today are mediated by technology and its adoption is preferred by many. Hence, proposing a model that can regulate the various processes of an institution is both essential and advantageous for instructors as well as learners. The various aspects that come up in adopting such a model are briefly described, along with the flow that the model will follow. Its benefits, challenges and applications in the field of education are presented. The proposed model can be used by organizations as a reference point for adopting Internet of Things applications and by scholars to expand, refine and evaluate research into IoT technology.

KEYWORDS

Internet of Things, Learning Model, Smart Universities, E-Learning & Classrooms.


The Principles of the General Law on the Protection of Personal Data and Their Importance

Jonatas S. de Souza1,2, Jair M. Abe1, Luiz A. de Lima1,2 and Nilson A. de Souza2,3, 1Graduate Program in Production Engineering, Paulista University, São Paulo, Brazil, 2National Association of Data Privacy Professionals - ANPPD - Scientific Committee, São Paulo, Brazil, 3São Paulo State Supplementary Pension Foundation - PREVCOM, São Paulo, Brazil

ABSTRACT

Rapid technological evolution and globalization have created new challenges concerning the processing of data without the consent of the data subject and the protection of personal data. The General Law on the Protection of Personal Data (LGPD), Law No. 13,709 of August 14, 2018, provides for the protection of personal data and amends the Civil Framework of the Internet, Law No. 12,965 of April 23, 2014, in Article 7, subsection 10, and Article 16, subsection 2. The purpose of this paper is to present the principles of the LGPD alongside those of the European Union's General Data Protection Regulation (GDPR) and their importance, in order to provide an understanding of the Brazilian law and of the growing interest of internet users in the subject.

KEYWORDS

Data, General Law on the Protection of Personal Data, General Data Protection Regulation, Legal Bases.


Controlled Machine Text Generation of Football Articles

Tomasz Garbus, University of Warsaw, Poland

ABSTRACT

Among other benefits of the rapid development of deep learning, language modelling (LM) systems have excelled at producing relatively long text samples that are (almost) indistinguishable from human-written text. This work categorizes conditional text generation systems into three paradigms, generation with placeholders, prompted generation, and adversarial/reinforcement learning, and provides an overview of each paradigm along with experiments, either machine- or human-judged. Example corpora of football news are used to discuss how a fast, domain-specific named entity recognition (NER) system can be built without much manual labour for English and Polish. The NER module is evaluated on manually labelled texts in both languages. It is then used not only to build fine-tuning sets for the language model, but also to aid its generation procedure, resulting in samples more compliant with the provided control codes. Finally, a simple tool, EDGAR, for prompt-driven generation is presented. Two demos are provided for the reader to experiment with and compare the proposed solutions against a simply fine-tuned GPT-2 model.
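A minimal sketch of prompt-driven generation with an off-the-shelf GPT-2 via Hugging Face transformers; the paper's fine-tuned checkpoints, control codes, and the EDGAR tool are not reproduced, and the prompt is illustrative.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Liverpool beat Manchester United 2-0 on Sunday, with"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,            # sampling yields more varied, news-like continuations
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```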

KEYWORDS

text generation, Transformer, machine.


On the Comparison of Deep Neural Networks for Document Retrieval

M. Shoaib Malik and Dagmar Waltemath, Medical Informatics Group, Institute for Community Medicine, Greifswald, Germany

ABSTRACT

A Deep Neural Network (DNN) is reported to learn higher-level and more abstract representations of its input in various areas such as image processing, unsupervised feature learning, and natural language processing. Moreover, DNNs have demonstrated improved performance compared to shallower networks across a variety of pattern recognition tasks in machine learning. Recent results and the use of DNNs in web search have transformed search engine technology in industrial-scale applications. One such example is deepgif, a convolutional neural network based search engine for Graphics Interchange Format (GIF) images that takes natural language text as its query. In this study, we developed and compared the performance of a feed-forward neural network and a deep recurrent neural network architecture on the application of document retrieval. Much research on recurrent neural networks has been carried out in the field of natural language processing, especially on the task of language modeling. This study first discusses the two architectural setups used to build the models and then compares their performance to answer which of the two architectures is better suited to document retrieval.


Local Self-attention Mechanism with CTC for ASR

Deng Huizhen and Zhang Zhaogong, Computer Science Institution, Heilongjiang University, China

ABSTRACT

Connectionist temporal classification (CTC) [1] has been successfully applied to end-to-end speech recognition tasks, but its recurrent neural network backbone makes parallelization very difficult. The attention mechanism [2] has shown very good performance for recurrent sequence generators conditioned on input data on a series of tasks such as machine translation [3], handwriting synthesis [4], and image caption generation [5]. This paper applies the attention mechanism to CTC and proposes a connectionist temporal classification based on a local self-attention mechanism (LSA-CTC), in which the recurrent neural network module of the traditional CTC model is replaced by a self-attention module, and shows that it is attractive and competitive in end-to-end speech recognition. The proposed mechanism is based on local self-attention, using a sliding window to obtain acoustic features locally; it effectively models long-term context by stacking multiple sliding windows to obtain a larger receptive field, enabling online decoding. Moreover, training CTC jointly with a cross-entropy criterion makes the model converge better. We ran experiments on the AISHELL-1 dataset, which show that the basic model achieves a lower character error rate than existing state-of-the-art models and that joint cross-entropy training improves it further.
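A minimal PyTorch sketch of the CTC criterion the model is trained with; random tensors stand in for encoder outputs, and the local self-attention encoder itself is not reproduced.

```python
import torch
import torch.nn as nn

T, B, C = 50, 4, 30             # time steps, batch size, character classes (0 = blank)
log_probs = torch.randn(T, B, C, requires_grad=True).log_softmax(2)  # encoder stand-in

targets = torch.randint(1, C, (B, 10))            # label sequences (no blanks)
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                 # in training this would be joined by a CE term
print(float(loss))
```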



The Design and Implementation of Language Learning Chatbot with XAI using Ontology and Transfer Learning

Nuobei Shi, Qin Zeng and Raymond Lee, Beijing Normal University - Hong Kong Baptist University United International College, China

ABSTRACT

In this paper, we propose a transfer learning-based English language learning chatbot whose GPT-2-generated output can be explained by a corresponding ontology graph rooted in the fine-tuning dataset. We design three levels for systematic English learning: a phonetics level for speech recognition and pronunciation correction, a semantic level for domain-specific conversation, and the simulation of “free-style conversation” in English, the highest level of language chatbot communication, as a ‘free-style conversation agent’. As an academic contribution, we implement the ontology graph to explain the performance of free-style conversation, following the concept of XAI (Explainable Artificial Intelligence) to visualize the connections of the neural network and to explain the output sentences of the language model. From an implementation perspective, our language learning agent integrates a WeChat mini-program as the front-end and a fine-tuned GPT-2 transfer learning model as the back-end, with the responses interpreted by the ontology graph.

KEYWORDS

NLP-based Chatbot, Explainable Artificial Intelligence (XAI), Ontology graph, GPT-2, Transfer Learning.


Evaluation of Company Investment Value Based on Machine Learning

Junfeng Hu, Xiaosa Li, Yuru Xu, Shaowu Wu and Bin Zheng, College of Mathematics and Science, Beijing University of Technology, Beijing, China

ABSTRACT

In this paper, company investment value evaluation models are established based on comprehensive company information. After data mining and the extraction of a set of 436 feature parameters, an optimal subset of features is obtained by dimension reduction through tree-based feature selection, followed by 5-fold cross-validation using XGBoost and LightGBM models. The results show that the Root Mean Square Error (RMSE) reaches 3.098 and 3.059, respectively. To further improve stability and generalization capability, we use Bayesian Ridge Regression to train a stacking model on top of the XGBoost and LightGBM models, which reduces the RMSE to 3.047. Finally, we analyze the importance of the different features for the LightGBM model.
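A minimal sketch of the fusion step: XGBoost and LightGBM base learners stacked with Bayesian Ridge as the meta-model, evaluated by 5-fold cross-validated RMSE on stand-in regression data (the 436-feature company dataset is not reproduced).

```python
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=3.0, random_state=0)

stack = StackingRegressor(
    estimators=[("xgb", XGBRegressor(n_estimators=200)),
                ("lgbm", LGBMRegressor(n_estimators=200))],
    final_estimator=BayesianRidge(),   # meta-model fusing the two base predictions
    cv=5,
)
scores = cross_val_score(stack, X, y, cv=5, scoring="neg_root_mean_squared_error")
print(f"5-fold RMSE = {-scores.mean():.3f}")
```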

KEYWORDS

Company investment value assessment, XGBoost model, LightGBM model, Model fusion.


A Wide Band Microstrip Monopole Slot Antenna for Chipless RFID Applications

Chaker Essid1, Hedi Sakli2 and Nizar Sakli3, 1Tunisia Polytechnic School, Carthage University, Marsa City, Tunisia, 2National Engineering School of Gabes, University of Gabes, Gabes, Tunisia, 3EITA Consulting, 5 Rue du Chant des Oiseaux, 78360 Montesson, France

ABSTRACT

A new design of a wideband microstrip antenna with slots placed on the structure is proposed. A new monopole antenna structure based on a notched form is also designed using HFSS. The characteristics of the new microstrip antenna are found to be comparable to conventional patch antennas, while its gain, directivity, and radiation efficiency are remarkably improved, making it useful for chipless RFID applications. The proposed antenna operates from 11.45 to 13.28 GHz and from 14.61 to 19.55 GHz, can be used in chipless RFID applications, and has a bandwidth of about 677 MHz. The return loss of the proposed antenna is below -10 dB. Prototypes of all antennas were fabricated and measured, and good agreement between the measured and simulated results was achieved.

KEYWORDS

Slot antenna, notches, monopole antenna, chipless RFID, multi-resonant, wide band.


Quality of Service-Aware Security Framework for Mobile Ad hoc Networks using Optimized Link State Routing

Thulani Phakathi, Francis Lugayizi and Michael Esiefarienrhe, Department of Computer Science, North-West University, Mafikeng, South Africa

ABSTRACT

All networks must provide an acceptable and desirable level of Quality of Service (QoS) to ensure that applications are well supported, which becomes a challenge in mobile ad-hoc networks (MANETs). This paper presents a QoS-aware security framework for MANETs using the Optimized Link State Routing protocol (OLSR). Security and QoS targets may not necessarily be similar, but this framework seeks to bridge the gap towards an optimally functioning MANET. The paper presents the various security challenges, attacks, and goals in MANETs and the existing architectures and mechanisms used to counter security attacks. Additionally, the framework includes a security keying system to ascertain QoS; the keying system is linked to the basic configuration of OLSR through its Multi-Point Relay (MPR) functionality. The proposed framework optimizes the use of network resources and time.

KEYWORDS

Routing protocols, MANETs, Trust framework, Video streaming, QoS.


Received-Signal-Strength-Based Localization in Wireless Sensor Networks under Satellite Interference

Yuan Liu and Daoxing Guo, College of Communications Engineering, Army Engineering University of PLA, Nanjing, China

ABSTRACT

This paper investigates how to detect and locate interference sources that may interfere with satellite earth stations in a satellite-terrestrial spectrum sharing system. By deploying a distributed wireless sensor network (WSN) around the earth station, the area adjacent to the station is monitored and the locations of terrestrial interference sources are estimated from the received signal strength (RSS). Because of interference from satellite signals, the accuracy of traditional localization methods is greatly reduced. Exploiting the low-rank structure of the data received by the sensing nodes and the sparsity of the satellite signals, a robust localization algorithm based on data cleansing is used to improve localization performance. Finally, a detailed simulation of the proposed method is carried out to verify the feasibility of the scheme.
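A minimal sketch of plain RSS-based localization under a log-distance path-loss model (no satellite interference or data cleansing; all parameters are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
P0, n = -30.0, 2.5                     # RSS at 1 m (dBm) and path-loss exponent
sensors = rng.uniform(0, 100, (8, 2))  # WSN node positions around the earth station
source = np.array([40.0, 60.0])        # true interferer location

d = np.linalg.norm(sensors - source, axis=1)
rss = P0 - 10 * n * np.log10(d) + rng.normal(0, 1.0, len(d))  # noisy measurements

def residuals(p):                      # model mismatch at a candidate position p
    d_hat = np.linalg.norm(sensors - p, axis=1)
    return rss - (P0 - 10 * n * np.log10(d_hat))

est = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimated source:", est.round(2), "true:", source)
```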

KEYWORDS

satellite-terrestrial spectrum sharing, received-signal-strength, wireless sensor network, data cleansing.


Detection of 3D Spatial-Temporal Spectrum Opportunities in Satellite-Terrestrial Integrated Networks

Ning Yang and Daoxing Guo, College of Communication Engineering, Army Engineering University of PLA, Nanjing, China

ABSTRACT

In this paper, we investigate the detection of 3D spectrum opportunities in the downlink sharing scenario of satellite-terrestrial integrated networks, whereas most current research focuses on 1D or 2D isomorphic spectrum opportunities in traditional terrestrial networks. First, we define 3D space-time spectrum opportunities in satellite-terrestrial integrated networks and divide the opportunities into three spatial areas: the communication area of the primary user, the communication protection belt, and the free access area. Then, based on the proposed 3D opportunity model, we derive closed-form expressions for the detection and false-alarm probabilities at the user level and the network level, respectively. Moreover, we compare the detection performance of cooperative and non-cooperative sensing schemes, and propose a node selection scheme based on the signal-to-noise ratio to solve the problem of blind cooperation in cooperative sensing. Finally, simulation results demonstrate the effectiveness of the proposed scheme.

KEYWORDS

Spectrum sensing, Satellite communication, Data fusion, SNR, Cooperative sensing.


Performance evaluation of Precoded Band Codes and Hamming Norm Decoders in Random Linear Network Coding

Sarra Nighaoui, Aicha Guefrachi, and Ammar Bouallegue, Sys'Com Laboratory, National Engineering School of Tunis, Tunis-1002, Tunisia

ABSTRACT

The rateless property of Random Linear Network Coding is attractive: it adapts well to channels with unknown or variable packet loss rates, which makes it possible to reduce the number of required packet retransmissions with respect to other coding schemes and subsequently minimizes latency. However, its major disadvantage is its high decoding complexity. To reduce this cost, there are two approaches: rank-deficient decoding and full-rank decoding, among which we find sparse network codes. In this paper, we evaluate the performance of Precoded Band Codes and of rank-deficient decoding, specifically Hamming norm decoders, in terms of packet error rate (PER) and overhead.

KEYWORDS

decoding complexity, overhead, PER.


Neurological Signal Compression and Encryption for Secure Transmission Based on IoMT: A Tele-neurological Diagnosis

Azmi Shawkat Abdulbaqi1 and Ismail@Ismail Yusuf Panessai2, 1College of Computer Science & Information Technology, University of Anbar, Iraq, 2Faculty of Arts, Computing and Creative Industry, UPSI, Malaysia

ABSTRACT

Telemedicine systems that use communication and information technology to deliver medical signals such as neurological signals (electroencephalography, EEG) for long-distance medical services have become a reality. In mobile healthcare monitoring, it is necessary to compress these signals for efficient use of bandwidth and to secure their confidentiality; compression is an essential tool for solving storage and transmission problems, provided the original signal can be recovered from the compressed one. The aim of this manuscript is to obtain high compression gains at a low bit rate while preserving the clinical information content, and also to encrypt the signal so that it remains confidential to everyone except physicians. In the compression stage, the Discrete Wavelet Transform (DWT) and thresholding techniques are used; then, Huffman Encoding (HuFE) with chaos is applied for compression and encryption of the EEG signal. This manuscript discusses the compression quality of EEG signals for telemedicine applications. To evaluate the proposed system, we calculate the total compression and reconstruction time (T), the Root Mean Square Error (RMSE), and the compression ratio (CR). Simulation results show that adding HuFE after the DWT algorithm gives the best performance in terms of compression ratio and complexity, and that the quality of the reconstructed signal is preserved at a low percentage root-mean-square difference (PRD), yielding better compression results. Using the DWT as a lossy compression algorithm followed by the HuFE as a lossless compression algorithm gives CR = 92.9% at RMSE = 0.089 and PRD = 5.4131%.
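A minimal sketch of the lossy stage, assuming PyWavelets for the DWT with hard thresholding (the chaos-based Huffman encryption stage is omitted; the threshold and CR proxy are illustrative). PRD and RMSE are computed as in the abstract.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 1024)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)  # stand-in EEG

coeffs = pywt.wavedec(eeg, "db4", level=5)            # DWT decomposition
thr = 0.2 * max(np.abs(c).max() for c in coeffs)      # illustrative threshold
coeffs = [pywt.threshold(c, thr, mode="hard") for c in coeffs]
rec = pywt.waverec(coeffs, "db4")[: eeg.size]         # reconstruction

nonzero = sum(int(np.count_nonzero(c)) for c in coeffs)
cr = 100 * (1 - nonzero / eeg.size)                   # crude compression-ratio proxy
prd = 100 * np.sqrt(np.sum((eeg - rec) ** 2) / np.sum(eeg ** 2))
rmse = np.sqrt(np.mean((eeg - rec) ** 2))
print(f"CR ~ {cr:.1f}%, PRD = {prd:.2f}%, RMSE = {rmse:.4f}")
```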

KEYWORDS

Electroencephalography (EEG), Huffman Encoding (HuFE) , Smartphone Monitoring, Discrete Wavelet Transform (DWT), Huffman Decoding (HuFD).


Gaussian Blur Through Parallel Computing on Google Colaboratory

Nahla Ibrahim, Ahmed Abou ElFarag and Rania Kadry, Department of Computer Engineering, Arab Academy for Science and Technology and Maritime Transport, Alexandria, Egypt

ABSTRACT

Image convolution is one of the complex calculations used in image processing. Gaussian blur, a filter used for noise reduction, is an example of 2D image convolution with high computational requirements. Single-threaded solutions cannot keep up with the performance and speed needed for image processing techniques, while parallelizing the convolution on parallel systems enhances performance and reduces processing time. In this paper we compare the speedup on two parallel systems, a multi-core central processing unit (CPU) and a graphics processing unit (GPU), using Google Colaboratory ("Colab") to run our code.
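A minimal sketch of such a CPU-vs-GPU comparison, assuming a Colab GPU runtime with CuPy available (SciPy on the CPU, cupyx on the GPU); image size and sigma are illustrative.

```python
import time
import numpy as np
from scipy.ndimage import gaussian_filter          # CPU reference

import cupy as cp
from cupyx.scipy.ndimage import gaussian_filter as gaussian_filter_gpu

img = np.random.rand(4096, 4096).astype(np.float32)

t0 = time.perf_counter()
cpu_out = gaussian_filter(img, sigma=5)
t_cpu = time.perf_counter() - t0

img_gpu = cp.asarray(img)                          # host -> device copy
t0 = time.perf_counter()
gpu_out = gaussian_filter_gpu(img_gpu, sigma=5)
cp.cuda.Stream.null.synchronize()                  # wait for the kernel to finish
t_gpu = time.perf_counter() - t0

print(f"CPU {t_cpu:.3f}s, GPU {t_gpu:.3f}s, speedup x{t_cpu / t_gpu:.1f}")
```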

KEYWORDS

CUDA, Parallel Computing, Image Convolution, Gaussian Blur, Google Colaboratory.


Comparative Study on Eye Gaze Estimation in Visible and IR Spectrum

Susmitha Mohan and Manoj Phirke, Imaging and Robotics Lab, HCL Technologies, Bangalore, India

ABSTRACT

Eye gaze estimation aims to find the point of gaze, which is basically "where we look". Estimating the gaze point plays an important role in many applications. In the automotive industry, gaze estimation is used to ensure safety; in retail shopping and online marketing, it is used to analyse consumers' interest and focus; it is also used in psychological tests and in healthcare for diagnosing some neurological disorders, and it has a significant role to play in entertainment. There are multiple ways to perform eye gaze estimation. This paper presents a comparative study of two popular methods for gaze estimation based on eye features, with data captured by an infra-red camera. Method 1 tracks the corneal reflection centre with respect to the pupil centre, and Method 2 tracks the pupil centre with respect to the eye centre to estimate gaze.
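A minimal sketch of the regression step common to such feature-based methods: fitting a second-order polynomial that maps an eye-feature vector (e.g., pupil centre minus reflection centre) to screen coordinates from calibration points; all data are illustrative.

```python
import numpy as np

def design(v):                     # quadratic polynomial features of (dx, dy)
    dx, dy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])

rng = np.random.default_rng(0)
calib = rng.uniform(-1, 1, (9, 2))                 # eye vectors at 9 calibration targets
screen = 500 * calib + 20 * calib**2 + rng.normal(0, 1, (9, 2))  # target positions (px)

A = design(calib)
coef, *_ = np.linalg.lstsq(A, screen, rcond=None)  # least-squares calibration

gaze = design(np.array([[0.1, -0.3]])) @ coef      # map a new eye vector to the screen
print("estimated gaze point (px):", gaze.round(1))
```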

KEYWORDS

eye gaze, pupil, iris, cornea, corneal reflection, polynomial curve fitting, infra-red.


A Case Study on the Maintainability of the Open Source Software System JabRef

Denim Deshmukh, Ravi Theja Kataray and Tallari Rohith Girikshith

ABSTRACT

Maintainability is a major aspect of any software project; it refers to the ease with which software can adapt to changes. Various factors affect the effort required for maintenance. In this paper, we conducted a study to observe the extent to which individual metrics affect the maintainability of software. We considered various versions of JabRef and studied how the maintainability of its packages changed across versions. This is done using the Goal Question Metric (GQM) framework, which provides a systematic procedure for studying various attributes of entities. Data on the attributes are collected using several object-oriented code metric tools, which provide numerical data for comparing the attributes between versions. The collected data are visualized to answer the formulated questions, which in turn achieves the goal of identifying the modules that are hard to maintain.

KEYWORDS

Size, Structure, Complexity, Maintainability, Understandability, Goal Question Metric (GQM) approach.